PowerUp AI Mastermind — January 15, 2026
Dirk’s ontology breakthrough, Kasimir’s live multi-model synthesis experiment, and the cognitive fingerprints discovery
“My clients don’t Google ‘executive search.’ They Google symptoms of not knowing what to do when change threatens their position. That’s a completely different layer — and the AI found it.” — Dirk Ohlmeier
This Week in 30 Seconds
- Ontology below the keyword layer — Dirk’s gut-punch discovery: clients search at the experience level, not the category level
- Symptom-to-root-cause reversal — Don extended last week’s insight with a precise articulation of the causal chain
- Multi-model cognitive fingerprints — Kasimir ran the same prompt through three models; they gave radically different answers for structural reasons
- Golden nugget synthesis rule — only add, never omit or summarize away; the winning answer is always a superset
- Forked Claude skills as context isolation — run a sub-agent without polluting your working conversation
- Hallucination control loop — require source URLs from every factual claim; use Perplexity to validate
Dirk’s Ontology Breakthrough: The Beneath-the-Keyword Layer
Dirk Ohlmeier arrived at this session with a discovery that stopped the room. He had been working with an ontology-specialist prompt alongside a separate SEO consultant review of his executive search website. The combination produced something neither approach could produce alone: a map of what his clients are actually experiencing before they would ever type his category name into a search engine.
His clients don’t search “executive search firm.” They search for the symptoms of a specific fear: not knowing what to do when change threatens their position. Restructuring, leadership transitions, being passed over — these are the felt experiences that precede the category query. And Dirk’s AI-assisted mapping had traced the full causal chain from felt symptom to eventual search to the solution Dirk offers.
He described the clarity as “arriving like a gut punch” — energising and humbling at the same time. Two decades of expertise, and the AI had surfaced a client map he’d never explicitly articulated before. Not because the insight was new, but because the process forced the articulation.
Lou noted this as one of the strongest demonstrations of what ontology work actually does: it doesn’t tell you what to sell, it tells you where the client is standing when they realise they have a problem. That location — the symptom layer — is where GEO authority must be established.
“GEO gives AI the ability to infer intent beneath the query — not just match keywords. That’s why the beneath-the-keyword layer is where the authority lives.” — Lou
Deep Dive: Insight - Ontology Reveals What SEO Hides - The Beneath-the-Surface Client Map — how ontology-specialist prompting uncovers the experience layer that keyword SEO misses, and why AI engines reward content at that layer.
💡 What This Means for You
Run the ontology-specialist prompt on your own niche: trace the causal chain from your client’s felt symptom (the thing they’d describe to a friend) back to the root cause you actually address. Each step in that chain is a content opportunity.
Don’s Symptom-to-Root-Cause Articulation
Don Back built immediately on Dirk’s breakthrough with a precise articulation of the underlying principle, one that Lou identified as one of the sharpest moments of the session, adding that he needed to think it through further himself.
Don’s formulation: the reason clients search at the symptom level rather than the category level is that symptoms are personal and immediate, while categories are abstract and professional. A client experiences “I can’t get my team to follow through” long before they would think “I need a leadership coach.” The distance between those two descriptions is exactly the gap that most coach marketing falls into.
The reversal process Don described: start from the known outcome (your expertise, your framework, what you solve), then reverse-engineer backward to every felt experience that precedes it. Each reversal step surfaces a different symptom — and each symptom is a different keyword cluster, a different AI query pattern, a different content opportunity.
Lou added that this reversal is also how AI intent inference works. When a model reasons about a query, it’s doing something structurally similar: tracing from the presented symptom backward to the probable underlying need. Content that maps this chain gets cited because it does the reasoning work the model is trying to do.
💡 What This Means for You
Take one solution you offer. Ask: “What does a client experience in the week before they realise they need this?” Write down the 3–5 experiences they’d describe. Those are your symptom-layer content hooks.
Kasimir’s Multi-Model Synthesis Experiment
Kasimir Hedstrom brought the session’s most immediately actionable workflow contribution: a live experiment that showed, concretely, why running the same prompt through multiple AI models is not redundant — it’s structurally necessary for high-quality synthesis.
His setup: he took a strategic question about his ICP (ideal client profile), structured using the Brooke Castillo coaching framework, and submitted it simultaneously to Claude, ChatGPT, and Gemini. The responses were radically different — not in minor stylistic ways, but in what each model considered the most important factor.
The cognitive fingerprints he observed:
- ChatGPT — oriented toward relational and team dynamics; tended to surface human factors and interpersonal considerations
- Gemini — oriented toward mathematical and technical structure; pushed toward frameworks, metrics, and systems
- Claude — oriented toward nuanced synthesis; surfaced tensions, edge cases, and things that were true in context but not universally
None of these is “more right.” They’re different lenses. The real value emerged when Kasimir fed all three outputs to a fourth model as a neutral synthesizer — asking it to identify only what each model said that the others missed. The synthesis was richer than any single model could produce.
“The winning answer is always a superset. You’re not picking the best model. You’re mining each one for what the others couldn’t see.” — Kasimir
Deep Dive: Insight - Run Your Prompt Through Multiple Models and Synthesize at the Top — how parallel multi-model deliberation with a neutral synthesizer produces answers no single model can reach.
💡 What This Means for You
Take one open strategic question you’re currently sitting with. Submit it to Claude, ChatGPT, and Gemini simultaneously with identical prompts. Then paste all three answers into a fourth model and ask: “What did each model say that the others missed?” Use only those unique contributions.
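For members who would rather script this fan-out than juggle three browser tabs, here is a minimal Python sketch. Treat it as an illustration rather than the group’s official workflow: SDK signatures reflect the openai, anthropic, and google-generativeai packages as of this writing, and the model names are placeholders to check against each provider’s documentation and your own plan.

```python
# Minimal sketch: send one identical prompt to ChatGPT, Claude, and Gemini.
# Assumes the openai, anthropic, and google-generativeai packages are installed
# and that OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY are set.
# Model names are placeholders; substitute whatever your plan offers.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI


def ask_all_models(prompt: str) -> dict[str, str]:
    """Submit the same prompt to three models and collect their answers."""
    answers: dict[str, str] = {}

    # ChatGPT
    openai_client = OpenAI()
    chat = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    answers["ChatGPT"] = chat.choices[0].message.content

    # Claude
    claude_client = anthropic.Anthropic()
    message = claude_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    answers["Claude"] = message.content[0].text

    # Gemini
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini = genai.GenerativeModel("gemini-1.5-pro")
    answers["Gemini"] = gemini.generate_content(prompt).text

    return answers
```

A companion sketch of the fourth-model synthesis step sits at the end of the golden nugget section below.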
The Golden Nugget Synthesis Rule
The multi-model experiment produced a secondary insight that may be more durably useful than the experiment itself: the rule about how to synthesize correctly. The temptation when combining multiple AI outputs is to summarize — to find the common thread and express it. Kasimir and Lou both named this as the failure mode.
The correct synthesis operation is the opposite: identify what is unique in each response — the claim, the consideration, the angle that the other models didn’t surface — and add only those unique elements to the synthesis. The synthesis is a superset of unique contributions, not a distillation.
Lou named this the “golden nugget rule”: every model that answered the question had something the others missed. The job of the synthesizer is to find those nuggets and add them without discarding anything that was already in the set.
This rule applies beyond multi-model synthesis. It applies whenever you’re combining AI outputs from multiple passes, multiple drafts, or multiple perspectives. The failure mode, summarizing away unique signal, produces outputs that feel polished but are actually shallower than any of the inputs.
Deep Dive: Insight - The Golden Nugget Synthesis Rule — Only Add Never Omit When AI Synthesizes — why summarising AI outputs loses signal, and the correct synthesis operation that preserves everything valuable.
💡 What This Means for You
Next time you combine AI outputs, don’t summarize. Ask: “What did each source say that the others didn’t?” Add only those unique contributions to a single synthesis document. Then compare the result to a summarized version: the difference is the signal you would have lost.
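Continuing the fan-out sketch above, here is one way to encode the rule in the synthesizer prompt itself. The fourth-model call uses the OpenAI SDK purely for illustration; any capable model can play the neutral synthesizer, and the model name is a placeholder.

```python
# Companion sketch: a synthesizer prompt that applies the golden nugget rule.
# It asks only for unique contributions and explicitly forbids summarizing.
from openai import OpenAI


def golden_nugget_prompt(responses: dict[str, str]) -> str:
    """Turn several model answers into a synthesis prompt for a neutral fourth model."""
    blocks = "\n\n".join(
        f"--- Answer from {name} ---\n{text}" for name, text in responses.items()
    )
    return (
        "Below are several answers to the same question.\n"
        "Do not summarize them and do not look for the common thread.\n"
        "For each answer, list only the claims, considerations, or angles that "
        "none of the other answers contain.\n"
        "Then combine every one of those unique contributions into a single "
        "synthesis, omitting nothing from your lists.\n\n"
        f"{blocks}"
    )


def synthesize(responses: dict[str, str]) -> str:
    """Send the golden nugget prompt to a fourth model (placeholder model name)."""
    client = OpenAI()
    result = client.chat.completions.create(
        model="gpt-4o",  # ideally a model that was not one of the three contributors
        messages=[{"role": "user", "content": golden_nugget_prompt(responses)}],
    )
    return result.choices[0].message.content
```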
Forked Claude Skills: Context Isolation for Sub-Agent Work
Lou introduced a technical pattern that had come out of his own Claude Code usage this week — one with significant practical implications for anyone building complex AI workflows. The pattern: use a forked Claude skill (a sub-agent launched in isolated context) when you need to run a heavy sub-task without polluting the working conversation.
The problem it solves: in long Claude conversations, context accumulates and degrades. The more conversational history a model carries, the more it gets pulled toward patterns established early in the conversation — even when the current task requires fresh thinking. If you run a major sub-task inside a long working conversation, the sub-task gets contaminated by all the prior context.
The fork isolates the sub-task: launch a new Claude instance with only the information that task needs, run it to completion, bring back only the summary result. The parent conversation stays clean. The sub-agent works with full context on its specific task.
Lou noted this is particularly valuable for transcript processing, insight extraction, or any task where you want the AI to reason from scratch rather than from the accumulated context of the conversation it’s already in.
Elizabeth Stief immediately saw the application for her skill-creator workflow: feed documents to a forked skill, let it build the skill, bring back the output — without any of the prior conversation affecting the quality of the skill it produces.
Deep Dive: Insight - Forked Skills as Context Isolation — Run Sub-Agents Without Polluting Your Conversation — why forking a Claude sub-agent for heavy sub-tasks produces cleaner outputs than running them inside a long working conversation.
💡 What This Means for You
If you have a Claude conversation that’s been running for hours and you need to do a fresh analytical task — don’t do it in that conversation. Fork it: start a new Claude instance, give it only what it needs, bring back only the result. Your main conversation will stay coherent.
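For anyone curious how the isolation principle looks outside Claude Code, here is a minimal sketch using a direct Anthropic API call. It illustrates the idea rather than Claude Code’s actual skill-forking mechanics, and the model name is a placeholder.

```python
# Minimal sketch of the context-isolation idea: a fresh API call that carries
# none of the parent conversation and returns only a summary result.
# This illustrates the principle; it is not Claude Code's actual fork mechanism.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def run_isolated_subtask(task: str, material: str) -> str:
    """Run a heavy sub-task in clean context and hand back only the summary."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                f"{task}\n\n---\n{material}\n---\n\n"
                "Reply with only a concise summary of your result."
            ),
        }],
    )
    return response.content[0].text  # the parent conversation sees only this


# Example: insight extraction from a transcript, without touching the main chat.
# summary = run_isolated_subtask(
#     "Extract the five most actionable insights from this transcript.",
#     open("session_transcript.txt").read(),
# )
```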
Hallucination Control: Source URLs and the Perplexity Validation Loop
Lou closed the main body of the session with a practical protocol the group should apply to any AI research workflow. The problem: AI models confidently assert facts that are wrong. The volume of confident wrong assertions increases when working with complex multi-model workflows, because errors in one layer can propagate through synthesis.
His protocol has two steps:
- Require source URLs — for any factual claim in an AI output, require the model to supply a source URL. If it can’t supply one, treat the claim as unverified.
- Perplexity validation loop — run any claim you intend to act on through Perplexity, which retrieves real sources. If Perplexity can’t find a source that confirms the claim, don’t use it.
The practical application: when building content from AI synthesis, run the factual claims through this loop before publishing. When using AI for strategic decisions, run the supporting evidence through this loop before committing.
Lou noted this is particularly important for GEO content: if you publish AI-synthesised content that contains hallucinated claims, and that content gets cited by AI engines, you’re distributing misinformation at scale. The validation loop is the minimum viable quality control.
💡 What This Means for You
Add one line to your standard prompting practice: “For every factual claim in your response, include a source URL.” Start noticing which claims come back without sources. Those are your hallucination candidates.
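For members automating their research pipelines, here is a rough sketch of both steps. It assumes Perplexity’s OpenAI-compatible chat endpoint and uses placeholder model names; check the current Perplexity API documentation before relying on it.

```python
# Sketch of the two-step loop: (1) require a source URL for every factual claim,
# (2) validate any claim you intend to act on through Perplexity.
# Assumes Perplexity's OpenAI-compatible endpoint; the model name is a placeholder.
import os

from openai import OpenAI

SOURCE_RULE = "For every factual claim in your response, include a source URL."

perplexity = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)


def validate_claim(claim: str) -> str:
    """Ask Perplexity to confirm or reject a single claim against real sources."""
    response = perplexity.chat.completions.create(
        model="sonar",  # placeholder; use whichever search model your plan offers
        messages=[{
            "role": "user",
            "content": (
                "Can you find reliable sources that confirm the following claim? "
                "Answer CONFIRMED or UNVERIFIED first, then list the source URLs.\n\n"
                f"Claim: {claim}"
            ),
        }],
    )
    return response.choices[0].message.content

# Step 1 in practice: append SOURCE_RULE to any research prompt you send.
# Step 2 in practice: run validate_claim() on every claim you plan to publish or act on.
```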
Go Deeper:
- Perplexity AI — AI search for source validation (perplexity.ai)
Community Corner
Dirk Ohlmeier joined from home recovery after a knee operation — cheerfully attributing the surgery outcome to AI-assisted medicine. His breakthrough with the ontology prompting arrived while he was laid up, which he described as the most productive forced pause he’d had in years.
Elizabeth Stief shared a rapid skill-building workflow she’d discovered: dump a set of documents into a Claude conversation, prompt it to use the skill-creator skill, and it will encode the documents as a reusable skill automatically. She noted this is dramatically faster than writing skills manually, and works particularly well when you have existing documentation or SOPs you want to make AI-accessible.
Don Back used Claude to create a custom lead magnet template when no suitable template existed — feeding it content and getting a formatted output faster than searching or hiring on Fiverr. A practical reminder that AI isn’t just for research and writing; it’s a template factory for anything you can describe clearly.
Coach Bally Binning recommended Manus as a design-forward alternative for when you want a break from Gamma’s style.
Links Shared in Chat
- Amy Yamada — “How AI Recommends Experts” webinar replay — (amyyamada.mykajabi.com) (shared by Elizabeth Stief)
- Claude Skills documentation — Using Skills in Claude — (support.claude.com) (shared by Lou)
- Claude Skills — How to Create Custom Skills — (support.claude.com) (shared by Lou)
- Claude Agent Skills — Best Practices — (platform.claude.com) (shared by Lou)
- Claude Code Skills documentation — (code.claude.com) (shared by Lou)
- Claude Agent Skills — Overview — (platform.claude.com) (shared by Lou)
- Manus — design tool alternative to Gamma (recommended by Bally Binning)
- Perplexity AI — used for hallucination validation (perplexity.ai)
Try This Before Next Session
Run a multi-model synthesis experiment on a real strategic question. This takes 30 minutes and will likely change how you think about single-model AI responses permanently.
- Choose one open strategic question you’ve been sitting with — something where you genuinely don’t know the best answer.
- Write a clear, specific prompt framing the question.
- Submit the identical prompt to Claude, ChatGPT, and Gemini. Record all three responses.
- Open a fourth AI session. Paste all three responses and ask: “What did each response say that the others missed? List only the unique contributions from each.”
- Use only those unique contributions to build your synthesis.
Bring your question and your synthesis result to next session. The diffs between models will be instructive.
Open Threads
- What is the right automation architecture for multi-model synthesis — parallel pipelines via N8N, or sequential cross-critique?
- At what point does multi-model deliberation produce diminishing returns compared to single-model depth?
- How do you encode the ontology-specialist prompt into a reusable skill that any group member can run on their own niche?
- How does the beneath-the-keyword layer change the way you structure LinkedIn content versus website content?
- Claude Code: upcoming topic — the Max plan is required; members to explore before next session
Next session: 2026-01-22
Derived Artifacts
- symptom-layer (Symptom Layer — Dirk Ohlmeier + Don Back on pre-awareness search)