“The code is the chassis. The prompts are the car.” — Extracted from TrelloAgents build session
Source context: TrelloAgents architecture pattern — where each of six pipeline agents is defined by a single .md file in agents/. The entire pipeline behavior (output format, evaluation criteria, persona, domain knowledge, review standards) is encoded in those six files. Extracted 2026-04-27 as a teaching pattern from the build.
Core Idea
Most AI workflows entangle behavior and infrastructure. The LLM call, the output format, the criteria for success, the persona, the error handling — all of it lives inside code. Changing how an agent thinks means opening the code, finding the right string, redeploying. This is the wrong architecture for systems that need to evolve.
The prompt-as-configuration pattern separates the behavior layer (what the agent thinks, decides, and produces) from the infrastructure layer (how the pipeline runs, how state is managed, how outputs are routed). The behavior layer is encoded in plain text files — markdown, typically. The infrastructure layer is code. The two never mix.
In TrelloAgents, this means:
- Six .md files in agents/ define what each agent does — their role, output format, evaluation rubric, persona, domain knowledge
- Python scripts, Claude Code orchestration, Trello API calls, file handling — none of it changes when you change agent behavior
- Adapting the entire pipeline for a new use case (content marketing, product development, hiring process) means editing six markdown files, not touching infrastructure
The architectural unlock: the person who owns the workflow doesn’t need to touch the code. The behavior layer — the prompts — is readable, editable, and improvable by anyone who understands the domain. The infrastructure layer is generic and reusable. The practitioner owns the intelligence; the code just runs it.
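A minimal sketch of the two layers, assuming a generic loader in the infrastructure code (the agents/ directory matches the source; the function names and the call_llm parameter are illustrative, not the actual TrelloAgents code):

```python
from pathlib import Path

AGENTS_DIR = Path("agents")

def load_behavior(agent_name: str) -> str:
    """Behavior layer: a plain markdown file, editable by non-developers."""
    return (AGENTS_DIR / f"{agent_name}.md").read_text()

def run_agent(agent_name: str, task: str, call_llm) -> str:
    """Infrastructure layer: generic plumbing that encodes no behavior."""
    # Persona, rubric, and output format all arrive as text, not code.
    system_prompt = load_behavior(agent_name)
    return call_llm(system=system_prompt, user=task)
```

Adapting the pipeline to a new domain touches only the files under agents/; run_agent never changes.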
What “Just Text” Actually Means
The claim that behavior is “just text” understates how much power that text carries. A well-crafted agent prompt encodes:
- Role and persona — what this agent’s job is and how it approaches that job
- Output format — exactly what the artifact should look like when the agent is done
- Evaluation criteria — what “good” means for this stage’s output (the Spec Reviewer’s rubric, the Final Reviewer’s approval threshold)
- Escalation logic — when to bounce back vs. approve, how to communicate the reason
- Domain knowledge — the context the agent needs to do its job without hallucinating
All of this is invisible in a traditional code-based agent because it’s either scattered across the codebase or implied by variable names. When it’s in a .md file, it’s legible. You can read it, critique it, improve it, version it, share it, and teach it.
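A hypothetical sketch of what one such file might contain — the role name, rubric, and section headings below are illustrative, not the actual TrelloAgents prompt:

```markdown
# Spec Reviewer

## Role
You review specs before they move to implementation. Strict but constructive.

## Output Format
Reply with exactly one of APPROVE or REVISE, followed by a bulleted list of reasons.

## Evaluation Criteria
- Every requirement has an acceptance criterion
- No implementation detail is left to guesswork

## Escalation
If the spec fails any criterion, reply REVISE and name the failing section.
```

Everything a reviewer of this agent would want to critique — threshold, format, escalation rule — sits in one legible file.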
The Maintainability Difference
An AI workflow built with entangled behavior and infrastructure has two failure modes:
- The behavior needs to change (the Spec Reviewer needs a stricter rubric) — requires a developer to find and edit the embedded prompt, understand the surrounding code well enough not to break it, redeploy
- The infrastructure needs to change (switching from Trello to Linear) — requires hunting through code for all the behavior strings, extracting them, re-embedding them in the new infrastructure
A workflow built with separated layers has only one failure mode at a time. Change the behavior: edit the .md files. Change the infrastructure: rewrite the code; the .md files are untouched. These are separate concerns, and they should fail — and improve — independently.
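The infrastructure-swap case can be sketched with a small interface, assuming the pipeline talks to its task board through a protocol (TaskBoard, TrelloBoard, LinearBoard, and their methods are illustrative assumptions, not the real API surface):

```python
from typing import Protocol

class TaskBoard(Protocol):
    """The only contract the pipeline needs from its state store."""
    def fetch_card(self, card_id: str) -> str: ...
    def post_comment(self, card_id: str, text: str) -> None: ...

class TrelloBoard:
    def fetch_card(self, card_id: str) -> str:
        return f"trello:{card_id}"  # stand-in for a Trello API call
    def post_comment(self, card_id: str, text: str) -> None:
        print(f"[trello {card_id}] {text}")

class LinearBoard:
    def fetch_card(self, card_id: str) -> str:
        return f"linear:{card_id}"  # stand-in for a Linear API call
    def post_comment(self, card_id: str, text: str) -> None:
        print(f"[linear {card_id}] {text}")

def run_stage(board: TaskBoard, card_id: str, behavior_md: str, call_llm) -> None:
    """One pipeline stage: the board implementation varies, the behavior text does not."""
    task = board.fetch_card(card_id)
    board.post_comment(card_id, call_llm(system=behavior_md, user=task))
```

Swapping TrelloBoard for LinearBoard is a pure infrastructure change; behavior_md is the same file either way.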
This is the difference between a one-time AI project and a maintainable AI system.
Practical Application
The three-minute architecture check:
Before building any multi-stage AI workflow, ask:
- Where does agent behavior live? (If the answer is “in the code,” it’s entangled — extract it)
- Can someone who doesn’t write code edit the agent’s behavior? (If no, the behavior layer isn’t actually separated)
- If you needed to swap the pipeline infrastructure tomorrow (different state store, different orchestrator, different model), would you have to rewrite the agent behaviors? (If yes, they’re still entangled)
Immediate action: For an existing workflow, extract every embedded prompt string into a named .md file. Name the file after the role the agent plays. Reference the file from the code rather than embedding the string. This one refactor makes the behavior layer visible, editable, and separable.
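The extraction refactor, sketched before and after — the file path, rubric text, and call_llm parameter are hypothetical stand-ins for whatever the existing workflow uses:

```python
from pathlib import Path

# Before: behavior entangled with infrastructure.
def review_spec_entangled(spec: str, call_llm) -> str:
    prompt = (
        "You are a strict spec reviewer. Check for missing acceptance "
        "criteria and reply APPROVE or REVISE with reasons."
    )  # changing this rubric means editing and redeploying code
    return call_llm(system=prompt, user=spec)

# After: behavior extracted into a file named after the agent's role.
def review_spec(spec: str, call_llm) -> str:
    # The rubric is now editable by anyone who understands the domain.
    prompt = Path("agents/spec_reviewer.md").read_text()
    return call_llm(system=prompt, user=spec)
```

The code change is one line; the payoff is that every later behavior change is a text edit, not a deploy.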
Related Insights
- Insight - Codify Your Judgment Into Skills, Not Just Prompts — The judgment codification principle that prompt-as-configuration operationalizes: your expertise is in the .md files, not the code
- Insight - Use the LLM as the UI — Conversation as Interface for Internal Tools — The complementary architectural principle: LLM handles the interface layer (conversation), code handles execution, prompts configure the LLM’s behavior. Three clean layers.
- Insight - Separation of Concerns in Skills — One File, One Job — The same one-file-one-job discipline applied here at the agent level: one .md file per agent role, no behavior leaking into infrastructure
- Insight - Prevent AI Drift by Treating System Prompts as Living Constraints — The governance implication of this pattern: when behavior is in files, drift is detectable and correctable; when it’s embedded in code, drift is invisible
- Insight - Process Architecture Transmits Judgment More Reliably Than Individual Prompts — The meta-principle that this pattern instantiates: a well-structured process (infrastructure) carrying well-written prompts (behavior) outperforms a single clever prompt doing everything
Evolution Across Sessions
This establishes the baseline for the prompt-as-configuration architecture pattern. The TrelloAgents pipeline is the cleanest live demonstration of this principle: a six-stage multi-agent pipeline where all behavior is in .md files and the infrastructure is entirely generic. Future sessions should track whether practitioners successfully apply this separation to existing entangled workflows — and whether the “chassis and car” framing makes the architectural distinction legible to non-technical clients.