Topic
The architectural shift from using platforms as custodians of your AI intelligence to using them as interfaces to intelligence that lives on your local disk — implemented through the resolver pattern (CLAUDE.md as a pointer file).
Target Reader
Knowledge entrepreneurs, coaches, and consultants who have been using Claude, ChatGPT, or Notion AI seriously for 6–18 months. They have workflows, habits, maybe even custom GPTs or Claude Projects. They’ve never questioned whether their AI intelligence is portable. They haven’t thought about what happens if the platform changes, raises prices, or sunsets a feature.
The Fear / Frustration / Want / Aspiration
Fear (latent): All the work I’ve done building AI workflows is owned by the platform, not me. Frustration: When I try to use what I built in one AI in another context, I can’t. Want: Portability — knowing that if I switch platforms or if a platform breaks, my knowledge and workflows aren’t lost. Aspiration: A personal AI system that’s genuinely mine, independent of any vendor’s infrastructure decisions.
Before State
They’ve built meaningful knowledge and workflows inside AI platforms. They assume this is fine — the platform is reliable, the subscription is reasonable. They don’t know they’re building on rented land. When they try to use knowledge from one AI session in another context, they discover it doesn’t transfer. Each time they explore a new AI tool, they realize they’d have to rebuild everything from scratch.
After State
They have a folder on their disk that is their AI knowledge home. A CLAUDE.md resolver in that folder points any capable AI to the right resources. They ask “Can I save this output?” before any meaningful AI session. When a platform surprises them (pricing, policies, feature changes), they can redirect to another tool without losing their work. Their knowledge compounds on their timeline, not a platform’s roadmap.
Narrative Arc
Chat history lives in the chat platform. Artifacts live in a virtual sandbox. Memory built up in one tool doesn’t transfer to another. Every platform switch resets the intelligence you thought you owned. The resolver pattern inverts this: make the platform the interface, the disk the custodian. CLAUDE.md contains pointers, not intelligence — and any AI that can read files becomes immediately productive in your knowledge environment. The minimum viable version is one folder and one markdown file. Everything else is an upgrade.
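To make the resolver pattern concrete in the draft, a minimal CLAUDE.md might look like the sketch below. This is an illustration only — the folder names and file paths are hypothetical placeholders, not a prescribed layout:

```markdown
# CLAUDE.md — resolver for this knowledge folder

This file contains pointers, not intelligence. Read the files it
references before answering questions about my work.

## Where things live
- Frameworks and offers: ./frameworks/
- Session exports (date-stamped): ./exports/
- Voice and working-style notes: ./style.md

## How to behave here
- Treat the files above as the source of truth, not chat memory.
- Save anything worth keeping as a date-stamped markdown file
  in ./exports/.
```

Any AI that can read files loads this first and is immediately productive in the folder — the intelligence travels with the files, not with the session.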
Core Argument
Most AI workflows are architecturally fragile because the intelligence lives in the platform rather than in portable files — and one pricing change, one sunset, one capacity crunch is all it takes to realize you’ve been renting, not owning.
Key Evidence / Examples
- “As long as it can access those files, we should be okay.” — Lou (April 23, 2026 session)
PLACEHOLDER-NOT-USED
- Anthropic’s compute crunch as a concrete example of platform fragility: throttled API, reduced limits, tokenizer change — none announced publicly
- The “Can I save this output?” question as the practical test for every AI interaction
- Chat → Cowork → Claude Code as a three-surface architecture for different types of work (disposable vs. persistent)
- Insight - The Living Knowledge Base in Action — From Transcript to Intelligence Graph (the full implementation of this principle at vault scale)
Proposed Structure (5–7 beats)
- The invisible problem — where is your AI knowledge right now? Can you move it?
- The platform fragility story — Anthropic’s compute crunch as the concrete case study; silent tokenizer change; capacity throttling
- The resolver pattern explained — CLAUDE.md as pointer file, not knowledge store; intelligence travels with files, not with sessions
- The three surfaces — Chat (exploratory/disposable), Cowork (local output), Claude Code (persistent build); which work goes where
- The minimum viable version — one folder, one markdown file, date-stamped exports; ready in an afternoon
- The decision rule — “Can I save this output?” before every meaningful session
- What becomes possible — platform independence; portability between AI tools; compounding intelligence that isn’t reset by vendor decisions
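The “minimum viable version” beat could be grounded with a short setup sketch like the one below. The folder and file names here are illustrative assumptions for the draft, not a required layout:

```shell
# Create the knowledge home on your own disk
mkdir -p ~/ai-home/exports

# The resolver: pointers, not intelligence
cat > ~/ai-home/CLAUDE.md <<'EOF'
# CLAUDE.md — resolver
- Session exports live in ./exports/ as date-stamped markdown files.
- Read these files before answering; save anything worth keeping back here.
EOF

# Save today's output as a date-stamped export (YYYY-MM-DD prefix)
echo "Notes from today's session" > ~/ai-home/exports/"$(date +%F)-session.md"
```

Ready in an afternoon, as the beat promises: one folder, one markdown file, and a habit of date-stamped exports.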
Related Insights
- Insight - Platform as Interface, Not Custodian — The Resolver Pattern for Portable AI Intelligence
- Insight - The Living Knowledge Base in Action — From Transcript to Intelligence Graph
- Insight - Convert Compute Into Intellectual Property Through a Compounding Knowledge Vault
- Insight - Your Private AI Stack — Own Your Data Without Building From Scratch
Editorial Notes
The Anthropic compute crunch section is valuable as concrete grounding but shouldn’t dominate. It’s the “this happens to real companies” evidence, not the article’s core. The resolver pattern and the minimum viable version are the actionable heart. Don’t let this become a technical architecture article — keep it accessible, practical, and urgent for the knowledge entrepreneur who hasn’t thought about this yet.
Next Step
- Approved for drafting
- Needs revision
- Deprioritised