PowerUp AI Mastermind — March 12, 2026

“Instead of you having to exercise the cognitive effort to think about which documents are related to other documents, you can leave that to Claude to figure out — and then Claude can inject the links wherever possible and kind of automatically create the graph for you.” — Lou


This Week in 30 Seconds

  • GEARS ingestions complete — Don, Kasimir, Elizabeth, and Bally have their ontology HTML reports; session ID bug found and fixed during parallel processing
  • Three-layer Claude architecture — commands (prompts), skills (in-context tools), agents (autonomous isolated instances) — the design principles that determine which to use when
  • Roam Research live demo — Ri Ca walked the group through bidirectional linking, graph view, and why cross-link density takes months to become useful
  • Kasimir’s conversation consolidation workflow — export all Claude history, run AI topic grouping, collapse 5 conversations on the same topic into one searchable file in Obsidian
  • Claude project isolation — confirmed: Claude cannot search inside Projects; content trapped in project contexts is invisible to top-level memory
  • Tana introduced — Kasimir’s PKM tool of choice, structured knowledge with supertags and native local MCP support; several members intrigued
  • Don Back’s voice CRM — Hey Siri → AppleScript → Custom GPT → Google Apps Script → Google Sheet, built in one evening

GEARS Ingestions and the Session ID Discovery

Four members now have their GEARS ontology profiles — but the development process revealed something important about multi-agent systems. Lou ran four simultaneous GEARS sessions and realized he had no per-session ID. For a moment, it looked like all four sessions might be overwriting each other’s data. They weren’t — but it was a close call that surfaces a critical design principle: when running multiple AI sessions in parallel, you need isolation guarantees, not just hopeful defaults.

Lou has since added session ID functionality to GEARS and requests that members review their HTML ontology reports (sent via Telegram) and return corrections — not as a list of edits, but as a document written as if you’re telling Claude: “This is right, this is wrong, replace this with that.” Lou will ingest the corrections and update each ontology accordingly.

💡 What This Means for You

If you received your GEARS HTML report and haven’t reviewed it yet, do it this week. The more accurate your ontology, the better the schema and content generation that follows. Corrections written in narrative form are easiest for Claude to ingest directly.


Commands, Skills, and Agents — The Three-Layer Architecture

This was the technical spine of the session, and it should change how you design any AI workflow. Lou walked through a framework he’s been refining from Workflow Pro (a Claude feature), but the principles apply to any sophisticated Claude implementation.

Commands live in your .claude folder as simple markdown files. You invoke them with / syntax. They’re your prompt portfolio — short, task-oriented, version-controlled. Think of them as the surface layer: you invoke a command, the command calls a skill or agent.
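
In Claude Code, a command is just a markdown file whose body is the prompt; the filename becomes the slash command, and `$ARGUMENTS` is replaced with whatever you type after it. A minimal sketch — the command name and prompt body here are hypothetical:

```markdown
<!-- .claude/commands/weekly-recap.md → invoked as /weekly-recap <path> -->
Summarize the notes in the file at $ARGUMENTS.
Group by project, and list open questions last.
```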

Skills run in-context. They inherit your conversation, use tools (MCPs, databases, bash terminals), and return output that appears in your main thread. Use a skill when you want the reasoning and output to be part of your conversation. Keep them focused on a specific task, even if that task is complex.

Agents run autonomously in an isolated virtual environment. They have no access to your conversation context unless you explicitly pass it to them. Use an agent when you want the work done without the process polluting your context — when you’d prefer just the result, not 40 options and a recommendation. Agents can be called from commands or skills, can run in parallel, and can return structured outputs via a contract you define upfront.
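
As a concrete sketch of the agent pattern: Claude Code subagents are typically defined as markdown files with YAML frontmatter, and the “contract” is simply an explicit output format the agent must return. Everything below — the name, tool list, and format — is illustrative, not Lou’s actual agent:

```markdown
---
name: option-picker
description: Brainstorms options in isolation and returns only the top pick
tools: Read, Grep
---
Brainstorm widely, but do NOT return the brainstorm.
Return exactly this contract and nothing else:
RECOMMENDATION: <one line>
RATIONALE: <at most two sentences>
```

Because the brainstorming happens in the agent’s isolated context, only the two contract lines ever reach your conversation.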

“If you need this skill to perform a specific action, but you don’t necessarily want to pollute your context with the content the skill generates while it’s processing — make it an agent.” — Lou

Lou’s design heuristic: if a skill would generate 40 brainstormed options and you only want the one recommendation, that’s an agent. If you need the full reasoning trail to inform the next step of your conversation, that’s a skill.

Lou also revealed he’s already built a multi-model roundtable as a Claude Code skill — it calls Gemini and Codex in headless mode from the command line and runs the full debate process natively inside Claude. The same multi-LLM deliberation, now accessible from one interface.

Deep Dive: Insight - Design AI Systems for Maximum Composability and Minimum Context Pollution — the principles for building AI workflows that don’t degrade as they scale.


Roam Research — Ri Ca’s Live Demo

Ri Ca took over when Lou stepped away for a fridge delivery and gave the group a first-hand look at Roam Research — the knowledge tool she’s been using for years. The core mechanic is bidirectional linking with double-bracket syntax: [[any phrase]] creates a page, and every page shows you all the other pages that link to it. You navigate by clicking links rather than by hierarchy.
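
The bidirectional-link mechanic is simple enough to sketch in a few lines: scan every page for `[[...]]` targets and invert the map. A toy illustration (page names invented for the example):

```python
import re
from collections import defaultdict

# Toy pages keyed by title; [[...]] marks links, as in Roam/Obsidian.
pages = {
    "2026-03-10": "Bought [[AAPL]] after reading the [[earnings notes]].",
    "2026-03-11": "Researched [[AAPL]] again; compare with [[MSFT]].",
    "earnings notes": "Q1 summaries for [[AAPL]] and [[MSFT]].",
}

LINK = re.compile(r"\[\[([^\]]+)\]\]")

def backlinks(pages):
    """Map each linked page title -> the pages that link to it."""
    index = defaultdict(list)
    for title, text in pages.items():
        for target in LINK.findall(text):
            index[target].append(title)
    return dict(index)

idx = backlinks(pages)
print(idx["AAPL"])  # every page that references AAPL
```

The “page shows everything that links to it” view is exactly this inverted index, rendered in the UI.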

Ri Ca showed how she uses it for stock tracking: each ticker symbol is a page, and every date on which she bought or researched it links back automatically. In another example, clicking her “music ticket” page shows every concert ticket purchase across all dates. The graph view visualizes all links — a dense web that becomes genuinely useful only after months of consistent use.

The honest note from Ri Ca: it took her a few months to build enough links for the tool to feel worthwhile. The value is emergent, not immediate. Donald imported his entire Notion vault into Obsidian the same morning, demonstrating that the switching cost is low — what matters is building the linking habit.

Elizabeth asked if Obsidian worked the same way. Donald confirmed it does, including the same graph view. The tools are architecturally similar; the differences are in interface, collaboration features (Notion wins there), and hosting philosophy (Obsidian is local-first, Roam is cloud-based).

Go Deeper:

  • Roam Research — networked note-taking with bidirectional linking; API confirmed for MCP bridge (roamresearch.com)
  • Tiago Forte’s “Building a Second Brain” — the methodology Ri Ca learned from, recommends Roam and Obsidian for non-collaborative use cases

Kasimir’s Conversation Consolidation Workflow

Kasimir brought the most immediately actionable system of the session — a workflow that solves a real and common frustration: you know you talked about something with Claude, but you can’t find it because it might be in chat, in Cowork, or buried in a project.

His solution: export all Claude history. Run an AI pass over it to group conversations by topic. Collapse five conversations about stock investment into one topical document. Save everything to Obsidian via MCP. From that point, retrieval takes 10 seconds instead of 10 minutes.
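
A minimal sketch of the consolidation step, assuming an export shaped as a list of conversations with a name and text (real export shapes vary — inspect yours first). Kasimir uses an AI pass for the topic grouping; the keyword rule below is a deliberately naive stand-in:

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical minimal export shape; load yours from the real export file instead.
conversations = [
    {"name": "AAPL position sizing", "text": "…"},
    {"name": "Stock screener prompt", "text": "…"},
    {"name": "Trip packing list", "text": "…"},
]

# Naive keyword grouping; Kasimir's workflow uses an AI pass for this step.
TOPICS = {"stocks": re.compile(r"stock|AAPL|ticker", re.I)}

def group_by_topic(convs):
    groups = defaultdict(list)
    for c in convs:
        topic = next((t for t, pat in TOPICS.items() if pat.search(c["name"])), "misc")
        groups[topic].append(c)
    return groups

def write_to_vault(groups, vault="ObsidianVault/claude"):
    """One markdown file per topic — five chats collapse into one searchable note."""
    Path(vault).mkdir(parents=True, exist_ok=True)
    for topic, convs in groups.items():
        body = "\n\n---\n\n".join(f"## {c['name']}\n{c['text']}" for c in convs)
        Path(vault, f"{topic}.md").write_text(body, encoding="utf-8")

write_to_vault(group_by_topic(conversations))
```

The vault path is a placeholder; in Kasimir’s setup the write happens through the Obsidian MCP rather than direct file I/O.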

He also runs two scheduled scripts: 8 PM generates a day-in-review summary; 8 AM generates a morning briefing — what changed, what to focus on. The Obsidian vault becomes the persistent external brain that Claude can read and write to, while the scheduling layer means it’s maintained automatically.
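
The scheduling layer can be as simple as two cron entries. The times match Kasimir’s setup; the script names and paths are placeholders:

```
# crontab -e — script names below are hypothetical
0 20 * * * /usr/bin/python3 "$HOME/scripts/day_in_review.py"   # 8 PM day-in-review
0 8  * * * /usr/bin/python3 "$HOME/scripts/morning_brief.py"   # 8 AM briefing
```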

Kasimir’s key discovery: Claude cannot search inside Claude Projects. Content in project contexts is completely isolated and invisible to top-level memory and search. His workaround is to periodically export project content and run an extraction prompt that pulls the key insights out, then save those to Obsidian where they become searchable.

“Instead of having 5 chats about stock investment, that process will make it that I have only one. It makes it easier to find and to read.” — Kasimir

Deep Dive: Insight - Your AI Conversation History Is a Knowledge Asset Worth Mining — why your accumulated AI conversation history is intellectual capital, and the workflow to reclaim it.


Tana — Kasimir’s PKM Upgrade

Kasimir also introduced the group to Tana, a structured knowledge tool he moved to after Roam Research and Logseq. The differentiators: supertags (structured fields attached to any node), recurring tasks, embedded AI, and — critically for this group — native local MCP support.

Several links were shared in the session chat. The native MCP integration means you can drive Tana from Claude without building a custom integration — it’s designed to be Claude-accessible out of the box.

Kasimir’s verdict: easier syntax than Roam (no double-bracket requirement), better for structured data and recurring processes, still has the cross-referencing behavior he values. He offered to put together a comparison guide for interested members.


Don Back’s Voice-Driven CRM

Don Back built a fully verbal database update pipeline in one evening — and shared it in the chat. The architecture: “Hey Siri” trigger → AppleScript → Custom GPT interpreter → Google Apps Script → Google Sheet database. You say your update out loud; it lands as structured data.
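
The heart of the pipeline is the interpreter step: free-form speech in, structured row out. In Don’s build a Custom GPT does that interpretation and Google Apps Script writes the Sheet; the sketch below swaps in a regex and a local CSV so the shape of the data flow is visible — every name and pattern here is hypothetical:

```python
import csv
import re
from datetime import date
from pathlib import Path

# Stand-in for the Custom GPT interpreter: works for one fixed phrasing only.
PATTERN = re.compile(r"log (?:a )?call with (?P<name>[\w ]+?) about (?P<topic>.+)", re.I)

def interpret(utterance):
    """Map a spoken update to structured fields, or None if unparseable."""
    m = PATTERN.search(utterance)
    if not m:
        return None
    return {"date": date.today().isoformat(), **m.groupdict()}

def append_row(row, path="crm.csv"):
    """Stand-in for the Apps Script -> Google Sheet step."""
    new = not Path(path).exists()
    with open(path, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["date", "name", "topic"])
        if new:
            w.writeheader()
        w.writerow(row)

row = interpret("Hey Siri, log a call with Dana Reyes about Q2 renewal")
append_row(row)
```

Using an LLM instead of the regex is what makes the real pipeline robust to arbitrary phrasing.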

This is the synthesis of several things the group has been discussing: voice as an input modality, AI as interpreter, and automation as the bridge between capture and storage. The fact that it took one evening — and that it works — is the point.

Deep Dive: Insight - Voice-Driven CRM - Closing the Loop Between Thinking and Data — how to close the gap between spoken thought and structured data using AI as the translation layer.


Lou’s Multi-Model Roundtable Skill

Quick highlight: Lou built the multi-model debate process as a native Claude Code skill. It calls Gemini and Codex in headless mode from the command line, runs the full roundtable process, and returns the synthesis inside your Claude conversation. No platform-switching, no copy-pasting. For members using the multi-model deliberation approach, this is worth pulling from the shared GitHub.
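
The headless-CLI fan-out can be sketched with `subprocess`. The flags below are assumptions about the Gemini CLI and Codex CLI — verify with `gemini --help` and `codex --help` on your machine before relying on them:

```python
import shutil
import subprocess

def build_commands(question):
    # Assumed flags: `gemini -p` for a one-shot prompt, `codex exec` for
    # non-interactive mode. Check your installed CLI versions.
    return {
        "gemini": ["gemini", "-p", question],
        "codex": ["codex", "exec", question],
    }

def roundtable(question):
    """Collect one answer per installed CLI; skip models that aren't on PATH."""
    answers = {}
    for model, cmd in build_commands(question).items():
        if shutil.which(cmd[0]) is None:
            continue  # CLI not installed; skip this voice
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
        answers[model] = result.stdout.strip()
    # A synthesis step (Claude itself, in Lou's skill) then reconciles `answers`.
    return answers

cmds = build_commands("Should this service use SQS or Kafka? One paragraph.")
```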


Community Corner

Lou announced he’ll be in Mexico March 25 to April 1. He’ll try to join calls as usual but flagged internet uncertainty. Members confirmed they’d manage — and they clearly would.

Dirk quietly noted “Kasimir is talking about my approach ;)” in chat during Kasimir’s workflow demo. The ecosystem effect at work: member experiments inspire and validate each other’s approaches.



Try This Before Next Session

Build the weekly conversation consolidation habit — even once.

  1. Export your Claude conversation history (download from Claude’s settings as JSON or text)
  2. Open a new Claude session and paste or upload a week’s worth of conversations
  3. Prompt: “Go through these conversations and group them by topic. For each topic, combine the key insights, decisions, and action items from all related conversations into a single summary document. Give each topic document a clear title.”
  4. Save the outputs to your knowledge tool of choice (Obsidian, Notion, or just a folder)
  5. Note how many things you’d already forgotten that are now retrievable in one place
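
Step 2 — isolating a week’s worth of conversations — is easy to script once you know your export’s shape. The field names below (`name`, `created_at`, `text`) are assumptions; inspect your own export file to confirm them:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical export shape with ISO-8601 timestamps.
export = [
    {"name": "MCP setup notes", "created_at": "2026-03-10T14:00:00+00:00", "text": "…"},
    {"name": "Old stock chat", "created_at": "2026-01-02T09:00:00+00:00", "text": "…"},
]

def last_week(convs, now=None):
    """Keep conversations created in the past 7 days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=7)
    return [c for c in convs if datetime.fromisoformat(c["created_at"]) >= cutoff]

recent = last_week(export, now=datetime(2026, 3, 12, tzinfo=timezone.utc))
prompt = ("Go through these conversations and group them by topic.\n\n"
          + "\n\n".join(f"# {c['name']}\n{c['text']}" for c in recent))
```

Paste the resulting `prompt` (with the full instruction from step 3) into a fresh Claude session.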

Do this once and you’ll understand why Kasimir considers it foundational.


Open Threads

  • At what scale does a personal Obsidian/Roam knowledge base become more hindrance than help — and is the answer different when AI is driving the linking?
  • How should the agent/skill architecture be taught to members who aren’t building tools but want to design better workflows for clients?
  • What is the right protocol for GEARS ontology review — independent corrections or live session with Lou?
  • Kasimir’s project isolation workaround is manual — is there a better architecture that maintains project isolation (for focus) while enabling cross-project retrieval?

Next session: March 19, 2026 — Special guest: Michael Simmons



Derived Artifacts

  • SKILL (Conversation Consolidation System — Kasimir Hedstrom demo)
  • SKILL (Voice CRM Pipeline — Don Back demo)
  • belief-resistance (Belief Resistance — Dirk Ohlmeier’s breakthrough)