“Run the sub-task in a forked context — give it only what it needs, bring back only the result. The parent conversation stays clean. The sub-agent works with full focus.” — Lou

Session context: 2026-01-15_Mastermind — Lou introduced this pattern from his own Claude Code usage, in the context of discussing how to handle heavy analytical sub-tasks without degrading the quality of a long-running working conversation.

Core Idea

In long Claude conversations, context accumulates and degrades reasoning quality. The model carries all prior conversational history, which means every new task is influenced by the patterns, constraints, and framings established early in the conversation — even when the current task requires fresh, uncontaminated thinking. If you run a major analytical sub-task inside a long working conversation, the sub-task inherits all the context pollution from everything that came before it.

The forked skill pattern solves this by isolating the sub-task in a completely separate Claude instance. You launch a new conversation (or a Claude Code sub-agent) with only the information that sub-task needs — nothing from the parent conversation’s history. The sub-task runs to completion in a clean context. You bring back only the summary result to the parent conversation. The parent stays coherent. The sub-agent works at full quality.
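One way to make "only what it needs" concrete is to assemble the sub-task's payload explicitly, rather than letting the sub-task inherit anything by default. A minimal Python sketch, where the payload format and function name are illustrative choices, not something from the session:

```python
from pathlib import Path

def build_forked_context(task_instruction, needed_files):
    """Assemble the minimal payload for a forked sub-task.

    Only the named files and the one-line instruction cross the
    boundary -- none of the parent conversation's history does.
    """
    sections = [f"TASK: {task_instruction}"]
    for path in needed_files:
        text = Path(path).read_text()
        sections.append(f"--- {path} ---\n{text}")
    return "\n\n".join(sections)
```

The design point is that the boundary is explicit: if a file or fact is not passed in, the sub-task never sees it, which is exactly the isolation the pattern relies on.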

This is the difference between a monolithic conversation where everything bleeds into everything else, and a composable workflow where each component operates in clean isolation. The architectural principle mirrors good software design: components should be loosely coupled, passing only what’s necessary at boundaries. The forked skill is the AI equivalent of a well-designed function call — isolated input, isolated processing, clean output handed back to the caller.

The practical applications are numerous: transcript extraction (run fresh on the transcript without conversational baggage), insight scoring (run without prior scores contaminating the frame), document drafting (start clean without earlier draft iterations pulling the style), research synthesis (run without the framing of whatever question you asked first).

Practical Application

When to fork: Whenever you’re about to start a task that would benefit from fresh reasoning — a task where you’d want an analyst who hadn’t heard any of the prior conversation. The test: if you’d want a fresh person for this task, fork it.

How to fork in Claude Code: Launch a sub-agent (Claude Code runs each sub-agent in its own context window), or open a fresh session and use claude -p (print mode) for a one-shot, non-interactive run. Pass only the specific files and context the sub-task needs. Write the output to a file; the parent conversation reads only the summary from that file.
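The launch step can be sketched with subprocess. This assumes the claude CLI is installed and that claude -p runs a single prompt non-interactively; the helper names and output convention are illustrative:

```python
import subprocess
from pathlib import Path

def fork_command(prompt):
    # Non-interactive one-shot invocation: the sub-task runs with no
    # parent conversation history in scope.
    return ["claude", "-p", prompt]

def run_forked(prompt, output_path):
    # Run the sub-task in its own process and write its result to a
    # file; the parent session later reads only a summary of this file.
    result = subprocess.run(
        fork_command(prompt), capture_output=True, text=True, check=True
    )
    Path(output_path).write_text(result.stdout)
    return output_path
```

Routing the output through a file, rather than pasting it into the parent, is what keeps the parent conversation clean: the parent chooses what to read back.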

The handoff protocol: When bringing results back to the parent conversation, summarise rather than paste in full. The parent needs the conclusions and any decisions that follow from them — not the full working of the sub-task. Keep the handoff clean.
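The handoff can be enforced mechanically: instruct the sub-task to end its output with a delimited summary section, and have the parent read only that section. The "## Summary" marker here is a hypothetical convention, not something specified in the session:

```python
def extract_summary(subtask_output, marker="## Summary"):
    """Return only the summary section of a sub-task's output.

    Everything before the marker (the sub-task's full working)
    stays out of the parent conversation.
    """
    head, sep, tail = subtask_output.partition(marker)
    if not sep:
        # No summary section found: fall back to the last paragraph
        # rather than pasting the whole output into the parent.
        return subtask_output.strip().split("\n\n")[-1]
    return tail.strip()
```
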

Evolution Across Sessions

This establishes the baseline for context isolation architecture in Claude workflows within the vault. The problem of context pollution has been discussed since the October sessions, and the skill chaining insight established modularity as a design principle. This insight adds the specific mechanism — forking — that makes clean context isolation practical in long-running AI work sessions. Future sessions should document specific workflows where forking produces measurably better results than in-context processing.