“As long as it can access those files, we should be okay.” — Lou

Session context: 2026-04-23_Mastermind — Lou articulated the architectural philosophy behind his local-first stack, clarifying the distinction between platforms as interfaces (which you use) and platforms as custodians (which own your work).

Core Idea

The most important architecture decision you can make right now isn’t which AI to use — it’s where your knowledge lives and who controls it.

The core risk is systemic and underappreciated. Chat history lives in the chat platform. Artifacts from Claude web live in Claude’s virtual sandbox — and when the session ends, that sandbox is gone. Memory built up in one tool doesn’t transfer to another. When the platform changes its policies, hits capacity constraints, or simply sunsets a feature, the intelligence you built inside it disappears or becomes inaccessible. You experience this as lost work. It’s actually an architectural vulnerability.

The resolver pattern solves this at the design level. CLAUDE.md functions as a resolver — not as a knowledge store, but as a pointer file. It contains instructions like: “When you want to do X, check that file on disk. When you want to understand my approach to Y, read this folder.” The intelligence travels with the files, not with the platform. If Claude becomes the wrong tool, you redirect another AI to the same files. Your SOPs, skills, reference guides, and frameworks are platform-agnostic by design.

The practical architecture Lou has landed on:

  • Local disk as the single source of truth for everything that matters
  • Claude Code as the engine for anything involving files you want to maintain over time
  • Obsidian as the viewer and navigation layer (graph view, search, wikilinks)
  • Claude Chat / Cowork for brainstorming and quick artifacts (these are disposable by design)
  • CLAUDE.md as the resolver that makes any AI interface productive in this folder

The right question to ask before any AI interaction: Can I save this output? If the answer is no, don’t build something in that session that you’ll need later. Either move to a context where you can save, or deliberately treat the session as exploration rather than construction.

Practical Application

The minimum viable version requires no infrastructure. Pick one folder on your computer to be your AI knowledge home. Every time you have a meaningful AI conversation — a framework you worked through, a decision you made, research that moved something forward — export the key output as a plain text or markdown file with a date in the filename. In three months, that folder is searchable, portable, and completely independent of any platform.
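The dated-export habit is easy to script. A minimal sketch in Python — the folder name and helper are illustrative, not part of any actual tooling described here:

```python
from datetime import date
from pathlib import Path


def save_insight(title: str, body: str, home: str = "ai-knowledge") -> Path:
    """Save an AI session output as a dated markdown file in the knowledge folder."""
    folder = Path(home)
    folder.mkdir(parents=True, exist_ok=True)
    # A date-first filename sorts chronologically and stays searchable
    # with nothing more than the filesystem.
    slug = "-".join(title.lower().split())
    path = folder / f"{date.today().isoformat()}-{slug}.md"
    path.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return path


# Usage:
# save_insight("Resolver Pattern Notes", "CLAUDE.md is a pointer file, not a store...")
```

Plain files plus a naming convention is the whole mechanism; there is deliberately no database or sync service to depend on.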

Write a one-page CLAUDE.md in that folder. It needs three things: (1) what’s in the folder and how it’s organized, (2) which files to consult for which kinds of tasks, (3) your preferences and constraints for working in this context. That’s the resolver. Any AI that can read files becomes productive in your knowledge environment immediately.
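A resolver covering those three things might look like the following — a sketch only; the folder names and paths are placeholders, not Lou’s actual layout:

```markdown
# CLAUDE.md — resolver for this knowledge folder

## What's here
- /insights — dated session notes (YYYY-MM-DD prefix in filenames)
- /sops — step-by-step procedures
- /reference — frameworks and guides

## Where to look
- Writing or editing an SOP: read the relevant file in /sops first.
- Questions about my frameworks: check /reference before answering.

## Preferences
- Keep outputs in markdown, one file per topic, date in the filename.
- Never store anything important only inside the chat session.
```

Note that the file holds no knowledge itself — only pointers and rules — which is what keeps it short enough to maintain and generic enough for any AI that can read files.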

Evolution Across Sessions

This builds on Insight - Your Private AI Stack — Own Your Data Without Building From Scratch (2025-08-28) and Insight - The Living Knowledge Base in Action — From Transcript to Intelligence Graph (2026-04-09), which established the vision and the implementation, respectively. The new development is the resolver pattern as a named design principle: CLAUDE.md isn’t a knowledge store, it’s a pointer file. This reframing makes the architecture platform-agnostic by design — not just portable as a side effect, but built for portability as a first-order principle. The key question (“Can I save this output?”) provides a practical decision rule for every AI interaction.