Original Insight
“The key differentiators are: think about commands as prompts… agents have their own context and you’re going to have to do a little handshaking with them… the skill is going to have tools and resources and assets — it’s more deterministic than an agent, usually less intelligence, just performs the task… Design it for maximum composability, minimum noise in the context, and maximum asynchronous and parallel function.” — Lou
Expanded Synthesis
The way most people build AI-assisted workflows today is the equivalent of hiring a team of world-class specialists and then insisting they all sit in the same meeting for every task. The room fills with noise, attention fragments, and the throughput of each person collapses. Good AI system design looks almost nothing like this.
Lou’s March 12 explanation of the three-layer Claude architecture — commands, agents, and skills — is one of the most practically useful frameworks to emerge in these sessions. It’s worth unpacking fully, because it applies not just to software builders but to anyone designing knowledge workflows, coaching systems, or automated client processes.
Commands are the entry point — short, focused prompts that function like shortcuts in your personal operating system. Think of them as task invocations: 100-500 words that tell Claude what you want to do right now. They’re not meant to carry logic; they delegate to agents or skills for actual execution.
Skills are bounded, deterministic task engines. A skill has access to tools (MCPs, file systems, APIs), performs a specific task, and inherits your conversation context. The trade-off: because it runs in your context, everything it does — including the intermediate steps, brainstorming, and false starts — is visible in your session and consumes context window. Skills are the right choice when you want the reasoning to inform the rest of your conversation.
Agents are autonomous workers with their own virtual context. When an agent runs, it doesn’t pollute your main conversation. It spins up, does its work (which may involve hundreds of intermediate steps, comparisons, and discards), and returns a clean result. This is the design pattern for any task that involves significant exploration — brainstorming, competitive analysis, deep research, draft generation — where you only care about the output, not the process.
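The skill/agent distinction above can be sketched in a few lines of Python. This is a conceptual illustration only, not Claude Code's actual API: the `Context`, `run_skill`, and `run_agent` names are invented here to show how a skill's intermediate steps land in the shared context while an agent's scratch work is discarded.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A conversation context: an append-only log of messages."""
    messages: list = field(default_factory=list)

    def add(self, msg: str) -> None:
        self.messages.append(msg)

def run_skill(ctx: Context, task: str) -> str:
    """A skill runs *in* the caller's context: every intermediate
    step it takes is visible and consumes the shared window."""
    ctx.add(f"skill: starting {task}")
    ctx.add("skill: intermediate reasoning step")  # visible to the caller
    result = f"result of {task}"
    ctx.add(f"skill: {result}")
    return result

def run_agent(task: str) -> str:
    """An agent spins up its own private context, does the messy
    exploration there, and returns only the clean result."""
    scratch = Context()  # isolated context, never seen by the caller
    scratch.add(f"agent: exploring {task}")
    scratch.add("agent: drafts, comparisons, discards...")
    return f"result of {task}"  # scratch is thrown away here

main = Context()
run_skill(main, "synthesize notes")    # main.messages grows by three entries
answer = run_agent("brainstorm names") # main.messages is untouched
```

The design choice the sketch makes visible: whether work "pollutes" your session is decided by where its context object lives, not by how messy the work itself is.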
The architectural principle Lou articulated is powerful and generalizable far beyond AI: match the scope of a tool’s visibility to the scope of its relevance. If you don’t need to see the intermediate reasoning, don’t put it in your context. Agents are exactly this: they do the messy work in a clean room and hand you the result.
For coaches and knowledge workers, this framework resolves a common frustration: AI sessions that start clean and become unwieldy as context accumulates. The solution isn’t shorter conversations — it’s better architecture. You run the exploratory, high-noise tasks as agents; you run the synthesis and application tasks in-context as skills; and you invoke everything through short, clean commands.
There’s a composability bonus here too. Because agents inherit your command and skill ecosystem, you can chain them. A command triggers an agent that uses a skill that writes to Obsidian. The handshaking between layers is explicit (structured data passed in, structured result returned), which makes the whole system debuggable, improvable, and eventually teachable to others.
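The chaining pattern can be made concrete with a small dispatch sketch. Again, these names (`dispatch`, `research_agent`, `vault_write_skill`) are hypothetical stand-ins, not real Claude Code or Obsidian APIs; the point is the explicit handshake: structured data in, structured result out at each layer.

```python
# Hypothetical command -> agent -> skill chain with explicit handshakes.

def vault_write_skill(note: dict) -> str:
    """Skill: persist a note (stands in for an Obsidian write)."""
    return f"wrote '{note['title']}' ({len(note['body'])} chars)"

def research_agent(topic: str) -> dict:
    """Agent: explores in its own context, returns structured data only."""
    drafts = [f"{topic} angle {i}" for i in range(3)]  # messy work, discarded
    return {"title": topic, "body": max(drafts, key=len)}

COMMANDS = {
    # The command carries no logic of its own; it only delegates.
    "/research": lambda arg: vault_write_skill(research_agent(arg)),
}

def dispatch(line: str) -> str:
    """Command layer: parse a short trigger and hand off."""
    name, _, arg = line.partition(" ")
    return COMMANDS[name](arg)

print(dispatch("/research pricing models"))
```

Because each boundary passes plain structured data, any layer can be tested, replaced, or handed to someone else without touching the others, which is what makes the system debuggable and teachable.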
For PowerUp clients, the immediate application is less about building Claude Code systems and more about internalizing the underlying principle: cognitive overhead is a design problem, not a willpower problem. If your workflow constantly demands that you hold too many things in mind at once, the solution isn’t to get better at multitasking — it’s to redesign so that each piece of the system only sees what it needs to see. The agent/skill/command architecture is one implementation of this. Personal operating systems, client intake workflows, and even coaching session structures can all be designed with the same discipline.
Practical Application for PowerUp Clients
The Workflow Decomposition Exercise
Take any complex, recurring task you do (client onboarding, content production, weekly planning) and decompose it using this framework:
- Commands layer — What’s the trigger? What’s the 1-3 sentence instruction that kicks off the process?
- Agent candidates — Which steps in the process involve significant exploration, comparison, or discarding? These should run independently and return just the clean output.
- Skill candidates — Which steps need to reference context from the rest of the workflow? These run in-context.
- Context discipline — At the end of the workflow, what actually needs to persist? What can be archived? Design for minimum residue.
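The exercise above can be mechanized as a simple classification rule. The workflow steps below are an invented "client onboarding" example, and the two-question test is a heuristic reading of the framework, not a formal rule: exploratory steps become agents, context-dependent steps become skills, and the whole workflow is invoked by one short command.

```python
# Hypothetical decomposition of a client-onboarding workflow.
# The trigger itself would be a single short command, e.g. "/onboard <client>".
steps = [
    {"name": "draft welcome email variants", "explores": True,  "needs_context": False},
    {"name": "summarize intake form",        "explores": False, "needs_context": True},
    {"name": "apply summary to session plan","explores": False, "needs_context": True},
]

def classify(step: dict) -> str:
    """Heuristic: hide high-noise exploration behind an agent;
    keep context-dependent synthesis in-context as a skill."""
    return "agent" if step["explores"] else "skill"

for step in steps:
    print(f"{step['name']}: {classify(step)}")
```

Running the classifier over your own workflow list is a fast first pass at the context-discipline review: anything tagged "agent" is noise you no longer have to watch.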
Coaching Questions
- Where in your work are you forcing yourself to hold too many things in mind simultaneously?
- Which parts of your workflow produce a lot of noise before they reach a useful output — and could that noise be hidden from you?
- If you designed your ideal client engagement process from scratch today, what would be a command, what would be an agent, and what would be a skill?
Journal Prompt
What decision or task am I regularly doing in public (in full view of everyone involved) that would actually be better done quietly, with only the result shared?
Additional Resources
- Claude Code documentation on agents, skills, and commands (Anthropic developer docs)
- A Philosophy of Software Design by John Ousterhout — on reducing cognitive complexity through good architecture
- Insight - Build Tiny Tools That Remove Real Friction — tools should reduce friction, not add cognitive overhead
- Insight - Codify Your Judgment Into Skills, Not Just Prompts — skills work best when they encode clear judgment
Evolution Across Sessions
The April 2 session introduced the idea of building tiny tools that remove real friction. This March 12 insight is the architectural layer underneath that: not just what tools to build, but how to architect them so they compose cleanly. The April 2 session described individual tools; this session describes the orchestration layer. Together, they form the foundation of a serious personal AI operating system.
Next Actions
- For me (Lou): Refactor the GEARS skill library using the agent/skill/command distinction: move high-noise brainstorming tasks to agents and keep synthesis tasks as skills.
- For clients: When building any AI-assisted workflow, add a “context discipline” review step: before finalizing, ask — what in this workflow is adding noise I don’t need? Remove or encapsulate it.