PowerUp AI Mastermind — April 9, 2026

“Key thing to remember: it’s just a folder, with a skill.md file.” — Lou


This Week in 30 Seconds

  • Lou’s week in build mode — Trello orchestration skill generalized, Ambient Intelligence Bus architecture advanced, LKB Vault live and demonstrating in session
  • Two new members — Scott Delinger (technology policy analyst at Compute Ontario) and Tom join; both up and running with Claude
  • Skills anatomy deep-dive — the simplest possible explanation: it’s a folder with a skill.md file, and then optionally resources, templates, and scripts
  • Brand Writing Team built live — full 6-role skill with separation of concerns, quality gates at every handoff, parallel sub-agents
  • The audit command — Lou’s most practically valuable technique of the session: at the end of any iterative session, “audit our conversation and make sure everything we decided or fixed is reflected in the skill”
  • LKB Vault walkthrough — the living knowledge base fully operational: one year of transcripts processed into insights, commands, skills, article briefs, entity profiles, and voice of customer library
  • Don Back’s real-world skill — 20 participants, 4 diagnostic instruments, AI meta-analysis across them, producing coaching notes that “blew him away”
  • Rate limit reality check — chat thread on who’s hitting the $100 plan and why it might now make sense

Lou’s Week: Three Things in Build Mode

Lou opened with three things he’d built since last session — each one a distinct architectural advance on ideas the group has been following.

First: The Trello Orchestration Skill. Lou had previously shown a Trello-based workflow for tracking agentic pipelines. This week he generalized it. Instead of being specific to product requirement docs, it now generates the appropriate Trello board structure for any workflow you describe. Tell it what you want to automate, describe the workflow, and it creates the Trello board, determines the right columns and cards, figures out where parallelism is possible, and schedules everything accordingly. Going into the GitHub repo over the weekend for members to test.

Second: The Ambient Intelligence Bus. Lou advanced the “skills in every folder” idea from last session into something more architectural. Not a hierarchy (one orchestrator at the top, workers below) but a bus: any skill can talk to any other skill, acting as orchestrator or worker depending on context. The architecture is peer-to-peer. Memory issues still being worked out, but the framework is taking shape — and it’s designed to run entirely within Claude Code, without requiring external scripts, VPS, or API infrastructure.

Third: The LKB Vault. Lou took Andrej Karpathy’s published wiki spec, customized it for mastermind context (insights, commands, skills, article briefs, entity tracking), and ran it against a year of mastermind transcripts. He demonstrated the result live later in the session — see the full walkthrough section below.


AI Introduction Video — The Hierarchy of What Got Absorbed

Lou showed a 2.5-minute presentation he had prepared for Amy’s event that he’d run out of time to use. The workflow to build it: Lou and Claude talked through his notes and ideas, Claude scripted it, Lou asked Claude to convert the script to an HTML slideshow, then Lou recorded audio in ElevenLabs and combined the two with ScreenFlow.

The presentation itself maps the evolution of AI capability as a sequence of absorptions:

  1. Search — AI found information faster and deeper than Google
  2. Generation — first drafts, outlines, documents: the work of producing shifted to directing
  3. Obedience — complex, nuanced instructions reliably followed; the translation cost disappeared
  4. Reasoning — models absorbed the plan itself; you describe the destination, they figure out the route
  5. Multimodal — documents, images, audio, video ingested and interpreted; manual intake work stopped being yours
  6. Agents — browsing, running code, calling APIs, multi-step workflows without babysitting
  7. Orchestration — multi-agent coordination, managing complexity, sustained effort across hours
  8. Skills — your expertise, judgment, and proprietary way of thinking. That transfers too. But not automatically. You have to engineer it deliberately.

What’s left: supervision, direction, the human in the loop who decides where the system points and whether the output earns trust. Not a consolation prize — the highest leverage role in the stack.

“Everything else scaled. You decide what it’s for.”

The reason Lou is so excited about skills: it’s the engineering move that ensures this final block — your irreplaceable edge — actually transfers.


Skills Anatomy — The Only Thing You Really Need to Know

Lou ran a grounding overview for newer members, and to set context for the live demo that followed.

The simplest possible description: a skill is a folder with a skill.md file in it.

The folder name is the skill name. The skill.md file contains the name, description (up to 1024 characters, written for semantic triggering — this is how Claude knows to call the skill automatically), and the instructions. Everything else is optional.

Optional additions:

  • resources/ — supplementary prompt files, reference documents, specifications; pulled into context only when needed (progressive disclosure)
  • templates/ — output format templates; instead of describing what you want the output to look like, you provide a template and let the skill fill in the content
  • scripts/ — Python or JavaScript for deterministic tasks that don’t require AI inference (calculations, API calls, database queries)

Progressive disclosure is the key architectural concept: only the skill.md description loads initially. When the skill needs a resource file, it loads it; when that file is no longer needed, context moves on without it. You can have 50 resource files and never hold more than 2 or 3 in context at once.

The description field matters more than most people realize: it’s the semantic signal Claude uses to automatically identify when to invoke the skill, even if you don’t call it by name. Make it keyword-rich, describe the triggers, specify when it should activate.
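The anatomy above can be sketched as a tiny scaffold script. The skill name, description, and instructions here are hypothetical examples; only the folder-plus-skill.md shape (and the 1024-character description cap) comes from the session.

```python
from pathlib import Path

def scaffold_skill(root: Path, name: str, description: str, instructions: str) -> Path:
    """Create the minimal skill anatomy: a folder (whose name is the skill name)
    holding a skill.md, plus the optional resources/templates/scripts folders."""
    assert len(description) <= 1024, "description is capped at 1024 characters"
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    for optional in ("resources", "templates", "scripts"):
        (skill_dir / optional).mkdir(exist_ok=True)
    # skill.md carries the name, the description (the semantic trigger Claude
    # matches against), and the instructions. Everything else is optional.
    (skill_dir / "skill.md").write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n{instructions}\n"
    )
    return skill_dir

# Hypothetical example
skill = scaffold_skill(
    Path("skills"),
    "brand-writing-team",
    "Use when the user asks to draft, edit, or review brand articles, "
    "blog posts, or marketing copy in the house voice.",
    "Read the request, decide which roles to activate, run them in order.",
)
print(skill / "skill.md")
```

Keeping the description keyword-rich, as Lou advises, is what makes the automatic invocation work.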

Deep Dive: Insight - Separation of Concerns in Skills — One File, One Job — the architectural principle that explains why you want resources in separate files instead of one big skill.md.


Brand Writing Team Built Live — A Full Skill From Scratch

The centerpiece of the session: Lou built a 6-role writing team skill in real time, starting from a blank Claude Code session and ending with a functioning skill that produced a 750-word article.

The conversation-first approach: Before writing any files, Lou had Claude ask him clarifying questions: what kind of writing? What’s the goal? What are the failure modes? What should the voice sound like? Lou answered, Claude synthesized, and then — using the Skill Creator skill — built the entire folder structure.

The six roles: Strategist, Researcher, Outliner, Drafter, Editor, and (pulled in from Lou’s existing skills) the Skeptic. Plus a brand voice profile, an audience avatar persona, and a handoff contract between roles.

Separation of concerns in action: Each role gets its own file in a resources/ folder. The skill.md is purely the orchestrator — it reads the request, determines which roles to activate and in what order, calls each, and handles the handoffs. When running “just do the research,” only the researcher file loads. When running the full pipeline, each role loads sequentially (with some parallel execution where appropriate).

The quality gate addition: Mid-demo, Lou added a requirement: at every major handoff, build an evaluation rubric appropriate to that stage’s function, score the output against it, and revise up to 3 times until it reaches 9/10. During the test run: Strategist scored 9. Outliner scored 9. Skeptic scored 9. The final article was better than typical AI output — citing research, taking an unusual psychological angle, avoiding cliché.
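The quality-gate requirement reduces to a score-and-revise loop. A minimal sketch, assuming a stand-in `score` function where a real skill would run its stage-specific rubric; the function names and the stubbed scores are invented for illustration.

```python
from typing import Callable

def quality_gate(produce: Callable[[str], str],
                 score: Callable[[str], int],
                 threshold: int = 9,
                 max_revisions: int = 3) -> tuple[str, int]:
    """Run a pipeline stage, score its output against a rubric, and revise
    up to `max_revisions` times until it reaches the threshold (9/10)."""
    output = produce("")
    best = (output, score(output))
    for _attempt in range(max_revisions):
        if best[1] >= threshold:
            break  # gate passed; hand off to the next role
        output = produce(f"Previous draft scored {best[1]}/10. Revise to fix weaknesses.")
        rated = (output, score(output))
        if rated[1] > best[1]:
            best = rated  # keep the strongest draft seen so far
    return best

# Stubbed demo: each revision improves the draft's rubric score
drafts = iter([6, 8, 9])
output, final_score = quality_gate(
    produce=lambda feedback: f"draft ({feedback or 'first pass'})",
    score=lambda text: next(drafts),
)
print(final_score)
```

In a real skill these gates would sit at every role handoff, so a weak output never propagates downstream.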

The audit command: At the end of the session, before finalizing the skill, Lou said: “Do a final audit and make sure any feedback I provided and any operational things we talked about are reflected in the skill.” Claude read back through the conversation, identified fixes and additions, and codified them as permanent skill updates.

Deep Dive: Insight - The Quality Gate Pattern — Embed 9-10 Self-Evaluation at Every Pipeline Handoff

Deep Dive: Insight - The Conversation Audit Technique — Never Let a Session’s Fixes Evaporate

New Command: conversation-audit-codify — the reusable prompt pattern from this technique


The LKB Vault — A Year of Transcripts, Now a Living Knowledge Base

Lou walked the group through the fully operational LKB (Living Knowledge Base) vault. Everything he’d described in previous sessions as a vision was now running — and the group could see it.

The setup was simpler than anyone expected:

  1. Take Andrej Karpathy’s published wiki spec
  2. Customize it: add context about mastermind sessions, define output types (insights, commands, skills, article briefs, entity profiles)
  3. Ask Claude to create the folder structure
  4. Drag all transcripts into the raw/transcripts/ folder (in Finder — no code required)
  5. Open Claude Code, change directory to the vault, say: “Follow the instructions in schema.md”

That’s it. The schema (functionally a skill.md) handles everything from there.
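Steps 2–4 above can be sketched in a few lines. Only `raw/transcripts/` and `schema.md` are named in the session; the other output folder names are hypothetical, inferred from what the vault produces.

```python
from pathlib import Path
import shutil

def scaffold_vault(root: Path, transcripts: list[Path]) -> None:
    """Create the LKB folder structure and drop the transcripts in.
    Claude Code takes over from there: cd into the vault and say
    'Follow the instructions in schema.md'."""
    for folder in ("raw/transcripts", "insights", "commands",
                   "skills", "article-briefs", "entities", "voc"):
        (root / folder).mkdir(parents=True, exist_ok=True)
    for t in transcripts:  # step 4: equivalent of dragging files in via Finder
        shutil.copy(t, root / "raw" / "transcripts" / t.name)

# Hypothetical usage with one transcript file
src = Path("2026-04-09-session.md")
src.write_text("Lou: Key thing to remember…")
scaffold_vault(Path("lkb-vault"), [src])
print(sorted(p.name for p in (Path("lkb-vault") / "raw" / "transcripts").iterdir()))
```

The point of the sketch is how little machinery there is: folders, files, and one schema that does the rest.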

What the vault produces from each transcript:

  • A session recap (500-word summary covering every topic)
  • Insight pages for high-value moments, with cross-links and forward links
  • Commands — extracted reusable prompt patterns
  • Skills — multi-step workflows inferred from conversation
  • Article briefs — scored proposals for which insights could become published articles
  • Entity profiles — for participants who contribute 3+ times, a list of their interests, contributions, and connected insights
  • Voice of customer library — organized by theme (AI overwhelm, implementation paralysis, tool fragmentation, knowledge disorganization, voice and authenticity)
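The entity-profile rule (a profile for anyone who contributes 3+ times) is just a counter over transcripts. A sketch, with an invented transcript shape of (speaker, utterance) pairs; the real schema's transcript format is not specified in the session.

```python
from collections import Counter

def entity_candidates(transcripts: list[list[tuple[str, str]]],
                      threshold: int = 3) -> dict[str, int]:
    """Count contributions per participant across all transcripts and
    return those who cross the profile threshold (3+ contributions)."""
    counts: Counter[str] = Counter()
    for transcript in transcripts:  # each transcript: (speaker, utterance) pairs
        for speaker, _utterance in transcript:
            counts[speaker] += 1
    return {name: n for name, n in counts.items() if n >= threshold}

# Hypothetical mini-corpus: two short transcripts
sessions = [
    [("Lou", "…"), ("Don", "…"), ("Lou", "…"), ("Scott", "…")],
    [("Don", "…"), ("Lou", "…"), ("Don", "…")],
]
print(entity_candidates(sessions))
```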

The VoC library as a content engine: Lou noted that all member quotes — organized by pain point — can flow directly into GEARS schema and article topic selection. He’s writing articles sourced from the actual language members use to describe their problems. The result: content that resonates because it’s built from what the audience actually says, not what a content strategist assumes they mean.

The article brief → writing team pipeline: “I take this brief, and I give it to my writing team. And I’ve got articles coming out.” The LKB is not an archive. It’s the front end of an active content production system.

Why it works without RAG infrastructure: The wiki link structure — short insight pages, cross-linked, with forward and backlinks — behaves like the “hybrid document encoding” approach in RAG. Claude finds the most relevant 100-line page, follows its links, builds context from related pages. No embedding, no database, no API. Just markdown files on disk.
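The link-following retrieval described above needs no RAG machinery at all. A minimal sketch, assuming `[[wikilink]]` syntax and invented page contents: start from the most relevant page, follow its forward links breadth-first, and stop at a small context budget.

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def build_context(pages: dict[str, str], start: str, max_pages: int = 3) -> list[str]:
    """Follow wiki links from a starting page, breadth-first, until the
    context budget is hit — no embeddings, no database, no API."""
    queue, seen, context = [start], set(), []
    while queue and len(context) < max_pages:
        title = queue.pop(0)
        if title in seen or title not in pages:
            continue
        seen.add(title)
        context.append(title)
        queue.extend(WIKILINK.findall(pages[title]))  # forward links become next hops
    return context

# Hypothetical mini-vault of short, cross-linked insight pages
vault = {
    "quality-gate-pattern": "Score at every handoff. See [[conversation-audit]] and [[separation-of-concerns]].",
    "conversation-audit": "Audit the session before closing. Related: [[quality-gate-pattern]].",
    "separation-of-concerns": "One file, one job.",
}
print(build_context(vault, "quality-gate-pattern"))
```

Short pages plus dense cross-links are what make this traversal behave like retrieval: each hop adds related context without ever loading the whole vault.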

Donald in chat: “I no longer work directly in Obsidian. I only use Claude Code since it’s so much more convenient — Claude Code interacts with my Obsidian vault so much better than me.”

Deep Dive: Insight - The Living Knowledge Base in Action — From Transcript to Intelligence Graph


Don Back’s Multi-Instrument Coaching Skill

Don came with a real, in-progress implementation that the group learned from in a different way than Lou’s demo — because Don was building for an actual client cohort, not for a demonstration.

He’s onboarding 20 participants in a 4-month professional development program. His data collection for each participant: Myers-Briggs (communication style), OCEAN/Big Five (world perception), Career Claimers Index (his proprietary tool — academic vs. industry readiness + current agency level), and a 15-minute structured interview transcript.

He started manually, running Claude through the data for 2 test cases before building the skill. The results “blew him away” — particularly when Claude noticed that two participants (who happen to be in a relationship) referenced each other in their interviews, and used each profile to confirm and extend the other.

Building the knowledge base behind the skill: Don noted that as he worked with Claude to develop the workflow, Claude kept asking him foundational questions — where does this index come from? Do you have the background document for this framework? You referenced these laws — bring them in. Claude was building the reasoning layer that would make every future run more accurate. This is the skill-building process working as intended.

The human-in-the-loop design: The client-facing report never gets generated until Don has reviewed and approved the coaching insights. AI produces analysis; coach approves delivery. This is the right boundary for high-stakes relational work.

Don in chat: “Now coaching sessions are deep market research and knowledge generation. They are a resource, not a task.”

Deep Dive: Insight - The Multi-Instrument Client Profile — AI Meta-Analysis Across Diagnostic Data


Scott Delinger — Welcome, and the HPC Use Case

Scott introduced himself — he’s a technology policy analyst at Compute Ontario (government-funded HPC provider serving Ontario post-secondary research institutions), with a background in data analysis and translation between highly technical systems and executive audiences.

His current AI application: he has access to log files from hundreds of millions of HPC jobs run across all Ontario universities and research hospitals. Mining that data at scale — answering questions like “what types of jobs is a given research group running at what times, and why?” — is not something you do by hand. Claude + Gemini let him get out in front of questions before they’re asked.
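The shape of question Scott describes (“what is a given group running, and when?”) reduces to a group-by over job logs. A sketch with invented log fields and group names; real logs would stream from the scheduler’s accounting database, not a Python list.

```python
from collections import defaultdict

def jobs_by_group_and_hour(logs: list[dict]) -> dict[tuple[str, int], int]:
    """Aggregate job counts per (research_group, hour_of_day) — the kind of
    roll-up that is impractical by hand across hundreds of millions of rows."""
    counts: dict[tuple[str, int], int] = defaultdict(int)
    for job in logs:
        counts[(job["group"], job["start_hour"])] += 1
    return dict(counts)

# Invented sample rows for illustration
sample = [
    {"group": "genomics-lab", "start_hour": 2},
    {"group": "genomics-lab", "start_hour": 2},
    {"group": "climate-sim",  "start_hour": 14},
]
print(jobs_by_group_and_hour(sample))
```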

Scott’s assessment of where skills put him: “That level of skills would get me out in front of my colleagues’ ability to ask for things.” He’d been doing this work manually in Google Data Studio (now Looker Studio), building dashboards, automating slide decks for the CEO. Skills turn that into something replicable and scalable.

Lou’s extension of Don’s use case to Scott’s context: “You could do the same thing — executives could have that knowledge base available, say, here’s a company I’m talking to, use what we have about our product and company, do a custom presentation. Do it on the way to their office.”


Context Management — When to Override Claude’s Default Memory

Dirk raised a real observation: when working in a Claude project, he’d noticed that new chats seem influenced by the memory of previous chats in the same project — and that he sometimes gets better results starting fresh and moving the output into the project afterward.

Lou confirmed: that’s how it works by default, and it’s the right default most of the time. But when you want first-principles work without existing memory influencing it, you can tell Claude explicitly: “Ignore all previous conversations — just use what’s in the project files, not the chat history.”

Kasimir added the technical lever: in Claude’s project settings, you can turn memory off entirely for a given project, eliminating the need to state it every time.

The general principle: once you understand any system’s default behavior, the skill is knowing when not to default to the default.


Research Knowledge Wiki — The Vision for the Daily Briefing System

Lou described a system he’s building that extends the LKB concept beyond meeting transcripts to continuous research intake:

A daily scheduled task reads his social feeds and email subscriptions, consolidates everything into a quick read, then — with one additional step — extracts the original articles from every link, summarizes them, and files them into a research knowledge wiki. The wiki learns what he’s interested in through iteration (this is interesting to me because X; not interested in that because Y). Eventually: one folder to browse instead of all socials + email. AI-curated, AI-organized, AI-updated.

The writing pipeline integration: when writing an article on a topic, he goes to that folder and asks Claude to surface whatever relevant research is there. The research wiki becomes the source material for the content pipeline.

This is still in development — but the architecture is identical to the LKB: a schema/skill file, a folder, a scheduled task.

Scott in chat: “It could say, focus on this — in the next couple of months, there may be a tool that comes out. It would enable you to go tenfold on whatever it is you’re doing.”


Community Corner

Donald’s observation — “I didn’t know I had a ‘CIA’ (now AIMM) file” — was an endearing reminder that the LKB is building profiles for all active members. The vault knows who you are and what you care about.

Rate limits in the chat thread — the group had a spirited discussion about hitting Claude’s rate limits. Ken’s summary: “First time I hit rate limits, I had to touch grass.” Second time, still at $20/month, he feels like “an uberwizard” and is “about to get FAR faster, with skills.” Donald: “When I first heard of the $100+ AI plans, I thought, ‘Who on earth would ever need that?’ Now I consider it daily.”

Elizabeth’s note — she observed that the Skill Creator skill sometimes skips the eval procedures for finalization and testing. Lou acknowledged this and addressed it in the live demo by explicitly adding the quality gate requirement to the skill. Good field report.

Kasimir’s check-in — he was joining from Armenia with poor internet, video off. Despite that, he offered the practical note about turning memory off at the project level.


  • Tiago Forte (author of Building a Second Brain) is now working on implementing AI for his second brain setup — Scott’s note, relevant context for the knowledge management discussion
  • Donald’s TTS setup — Donald installed Fish-speech and XTTS (open-source ElevenLabs equivalents) on his PC. Fish-speech for voice cloning, XTTS for speed. “Free audio forever.”
  • No external URLs shared

Tools & Assets Created This Session

  • conversation-audit-codify — The Conversation Audit & Codify command: end-of-session protocol to make every iterative session’s fixes permanent

Try This Before Next Session

Run the Conversation Audit at the end of your next iterative AI session.

  1. Work with Claude on anything that involves iteration — building a skill, debugging, refining a prompt, developing a document
  2. At the end of the session, before closing, type: “Audit our conversation. List everything we decided, fixed, or figured out. Then update [the skill / the code / the document] to reflect those items as permanent root-cause changes.”
  3. Review what Claude updates and where
  4. Notice what the next session feels like when you start from an updated baseline instead of the same one you started from last time

Then: share what happened. Did it catch things you would have missed? Did the next session feel different?


Open Threads

  • What does the Peer-to-Peer Agent Bus look like when the memory issues are resolved? Is the peer-to-peer model actually more useful than hierarchical orchestration for the use cases members have?
  • Don’s 20-participant cohort: as the multi-instrument profiling skill matures, what patterns emerge across the cohort that no individual profile reveals?
  • Scott’s HPC data: what kinds of questions become answerable when you can run Claude against hundreds of millions of job log entries, and which ones does it get wrong?
  • The Research Knowledge Wiki: is teaching a system your information preferences through iteration fast enough to be practical, or does it require too many cycles before it’s reliably curated?

Next session: TBD

