“When I tell it how to process these, then it updates this file, so it knows how to process them the next time. All I needed to do was copy all of our transcripts and drag and drop them into that transcripts folder. Then I went into Claude Code. I changed my directory to the vault and said: follow the instructions in schema.md. That’s it.” — Lou
Session context: 2026-04-09_Mastermind — Lou demonstrated the fully operational LKB (Living Knowledge Base) vault, walking the group through the actual folder structure, outputs, and workflow that processes mastermind transcripts into a navigable, cross-linked intelligence graph.
Core Idea
The Living Knowledge Base concept has been discussed in prior sessions as an architectural vision. This session was the first live demonstration of the implementation working end-to-end — not a prototype, but a fully operational system processing a year of mastermind transcripts into a queryable, AI-navigable knowledge graph.
What the system actually produces from each transcript (a sketch of the resulting folder layout follows this list):
- Session recap — 500-word summary of the session, every topic covered
- Insight pages — standalone deep-dives on high-value moments, with cross-links
- Commands — extracted reusable prompt patterns filed in a commands library
- Skills — multi-step workflows inferred from conversation, filed in a skills library
- Article briefs — scored proposals for which insights warrant published articles
- Entity profiles — for participants who contribute 3+ times, a profile of their interests, contributions, and relevant insights
- Voice of Customer library — direct quotes from participants organized by theme (AI overwhelm, implementation paralysis, tool fragmentation, knowledge disorganization, voice and authenticity)
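A plausible vault layout for these outputs; the folder names here are illustrative, since the real structure is whatever schema.md specifies:

```
vault/
├── schema.md          # the single instruction file
├── transcripts/       # raw inputs, dragged in as-is
├── recaps/            # one session recap per transcript
├── insights/          # standalone, cross-linked insight pages
├── commands/          # reusable prompt patterns
├── skills/            # multi-step workflows
├── article-briefs/    # scored article proposals
├── entities/          # participant profiles (3+ contributions)
└── voc/               # voice-of-customer quotes, filed by theme
```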
The entire system runs from one instruction file (schema.md — functionally equivalent to a skill.md). Claude reads a new transcript, follows the schema, and files everything where it belongs. The schema defines the folder structure, the output formats, the cross-linking rules, and the criteria for what qualifies as an insight vs. a command vs. a skill.
Why this mimics hybrid RAG without embedding infrastructure: Lou explained that the wiki structure — where every insight is a short document, every recap links to insights, every insight cross-links to related insights — behaves like the “hybrid document encoding” approach in RAG. Claude can find the most relevant 100-line insight file, follow its links to related insights, and build a rich, contextually grounded response without needing vector embeddings, a database, or an API. The links are the navigation layer; the summaries are the index; the cross-references are the graph.
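A minimal sketch of that navigation layer in Python, assuming Obsidian-style [[wiki-links]] and a flat insights folder; the paths, note names, and two-hop depth below are assumptions for illustration, not Lou’s implementation. Starting from one relevant insight, an agent expands context by following cross-links instead of querying a vector index:

```python
import re
from pathlib import Path

VAULT = Path("vault/insights")             # illustrative vault location
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Note Name]] targets

def linked_notes(text: str) -> list[str]:
    """Extract wiki-link targets from a note body."""
    return WIKI_LINK.findall(text)

def expand_context(start_note: str, hops: int = 2) -> dict[str, str]:
    """Breadth-first walk over cross-links: the link graph is the retrieval layer.

    Returns {note name: note text} for the start note plus every note
    reachable within `hops` link-follows. No embeddings, no database:
    file names are the keys, wiki-links are the edges.
    """
    context: dict[str, str] = {}
    frontier = [start_note]
    for _ in range(hops + 1):
        next_frontier: list[str] = []
        for name in frontier:
            if name in context:
                continue  # already collected via a shorter path
            path = VAULT / f"{name}.md"
            if not path.exists():
                continue  # broken or external link; skip it
            text = path.read_text(encoding="utf-8")
            context[name] = text
            next_frontier.extend(linked_notes(text))
        frontier = next_frontier
    return context

# Whatever the walk collects becomes the grounding context for a response.
notes = expand_context("Insight - Your Knowledge Is the Database")  # illustrative name
```

Because every insight is a short, summary-first document, even a two-hop neighborhood stays small enough to fit in a model’s context window.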
The voice of customer application: One of the most practically valuable outputs was the VoC library. Lou noted: “I have quotes from all of you guys — AI overwhelm, implementation gap and paralysis, tool fragmentation, losing thinking, knowledge disorganization, voice and authenticity. These are all what you guys are telling me you’re dealing with.” This becomes direct input into the GEARS schema, article topic selection, and client-facing content — sourced from real member language, not assumed pain points.
The content pipeline integration: Lou demonstrated how article briefs from the LKB feed into the Brand Writing Team skill: “I take this brief, and I give it to my writing team. And I’ve got articles coming out.” The LKB is not an archive — it’s the front end of an active content production system. Every session generates candidate articles. Every candidate article gets scored. The best ones enter the writing queue automatically.
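A sketch of that scoring gate, under heavy assumptions: the session did not specify how briefs are scored, so the “Score: N” line, the folder names, and the cutoff below are all hypothetical.

```python
from pathlib import Path

BRIEFS = Path("vault/article-briefs")  # hypothetical briefs folder
QUEUE = Path("vault/writing-queue")    # hypothetical queue the writing team reads
THRESHOLD = 7                          # assumed cutoff on an assumed 1-10 scale

QUEUE.mkdir(parents=True, exist_ok=True)
for brief in BRIEFS.glob("*.md"):
    text = brief.read_text(encoding="utf-8")
    for line in text.splitlines():
        # Assume each brief carries a "Score: N" line written at processing time.
        if line.lower().startswith("score:"):
            if int(line.split(":", 1)[1].strip()) >= THRESHOLD:
                (QUEUE / brief.name).write_text(text, encoding="utf-8")
            break
```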
The setup was simpler than expected: The whole system was built by starting with Andrej Karpathy’s published wiki spec, customizing it for the mastermind context (commands, skills, insights, article briefs), asking Claude to create the folder structure, dragging transcripts into the input folder, and saying “follow the instructions in schema.md.” No database, no embeddings, no infrastructure beyond a local folder on disk.
Practical Application
The minimum viable LKB (start here, expand from there):
- Write a schema.md (or skill.md) that answers three questions: What types of files will you feed in? What outputs do you want? Where should each output be stored?
- Create the folder structure — ask Claude to create the empty folders matching your schema.
- Drop in your first batch of source files (transcripts, notes, documents, whatever you’re working with).
- Run Claude Code in that directory and say: “Follow the instructions in schema.md.”
- Open the output in Obsidian (or just browse in Finder) and see what emerged.
The schema is the only thing you need to get right. Everything else is drag-and-drop plus one command.
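Concretely, assuming Claude Code is installed and your vault lives at a path like the one below (both illustrative), the entire run is:

```
cd ~/mastermind-vault
claude "Follow the instructions in schema.md."
```

The `claude` CLI accepts an initial prompt as an argument; from there, the schema does the work.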
What to put in the schema, at minimum: input sources and their location; output types and their locations; what constitutes each output type (e.g., “an insight is any idea that spent 5+ minutes in discussion and has a standalone application”); and cross-linking rules.
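A minimal schema.md sketch along those lines; every folder name, word count, and criterion here is an illustrative placeholder rather than Lou’s actual schema:

```markdown
# schema.md: how to process this vault

## Inputs
- Transcripts (.md or .txt) land in transcripts/.
- Process any transcript that has no matching recap in recaps/.

## Outputs
- recaps/: one 500-word session recap per transcript, linking every
  insight it produced.
- insights/: one page per insight. An insight is any idea that spent
  5+ minutes in discussion and has a standalone application.
- commands/: reusable prompt patterns, one file each.

## Cross-linking rules
- Every recap links to its insights.
- Every insight links to at least one related insight via [[wiki-links]].
```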
Related Insights
- Insight - Ambient Intelligence — Build a Skill in Every Folder to Make Your Entire Knowledge Base Alive — the architectural vision this insight implements; this is what “ambient intelligence” looks like in a fully operational form
- Insight - Your Knowledge Is the Database, AI Is the Interface — the foundational principle demonstrated live
- Insight - Two-Tier Content Architecture — AI-Legible Stubs and Human-Facing Articles — the insight/recap structure in the LKB is a practical implementation of two-tier architecture
- Insight - Turn Every Conversation Into a Content Engine With AI Synthesis — the article brief → writing team pipeline demonstrated here
- Insight - Your AI Conversation History Is a Knowledge Asset Worth Mining — the LKB is the infrastructure for doing this at scale
Evolution Across Sessions
This builds on Insight - Ambient Intelligence — Build a Skill in Every Folder to Make Your Entire Knowledge Base Alive (2026-04-02), which established the vision. The new development here is the live demonstration: what does it actually look like when it works? The session revealed key implementation details not visible in the vision (schema as the single control file, the VoC library as a valuable byproduct, the article brief pipeline integration, the entity tracking system). Future sessions should track whether members successfully set up their own LKBs using the schema approach and whether the outputs meet their quality expectations.