PowerUp AI Mastermind — February 12, 2026

Wikidata, the /grounded command, and a hard argument about the future of expertise

“The information arbitrage model is over. You are no longer the expert because you know things — you are the expert because of what you can do with AI and your judgment.” — Lou


This Week in 30 Seconds

  • Lou returns — back from absence with live demonstrations of two tools he’s been building
  • Wikidata semantic matching — Lou demo’d connecting personal glossary terms to the Wikidata knowledge graph for GEO authority signaling
  • The /grounded command — a new slash command that locks AI responses strictly to a specified context, sharply reducing hallucination in evidence-dependent work
  • Expert identity shift — an extended discussion on what it means to be an expert when AI commoditizes information access
  • The AIMM GitHub — members got invited to the shared skills repository; walkthrough of how to clone and contribute

Lou Returns: Two Live Demonstrations

Lou came back energized, with two tools ready to show the group. The session opened with member check-ins, but quickly moved into demonstration mode — which is where February 12th earned its keep. Both the Wikidata demo and the /grounded command demo were practical and immediately applicable, not theoretical.

The thread connecting both tools: authority in the age of AI depends on being findable by machines that think in structured relationships, not just humans who search for keywords. The Wikidata work makes you legible to the knowledge graph. The /grounded command makes your AI outputs trustworthy enough to stake your reputation on.

💡 What This Means for You

The session’s two demonstrations were complementary: one is about getting AI systems to know who you are and what you stand for (GEO authority), the other is about making your AI-assisted work reliable enough to publish (grounded outputs). Both serve the same goal: being a trustworthy source in an AI-mediated world.


Wikidata Semantic Matching

Lou demonstrated connecting terms from the GEARS ontology to Wikidata — the structured knowledge graph that underlies much of what AI models know about the world. The mechanics: for each concept in your glossary or framework, you map it to the closest Wikidata entity using same-as, related-to, or alternate-name relationships. This creates a machine-readable bridge between your proprietary terminology and the broader knowledge graph.

Why this matters for GEO: when an AI engine encounters your content and sees structured relationships that connect your concepts to known Wikidata entities, it can more confidently include you in answers. You’re not just a website with text — you’re a node in a graph the AI already trusts.

Lou walked through the specific syntax for embedding these relationships in JSON-LD schema, and the group discussed which types of concepts benefit most from Wikidata linkage. Frameworks and methodologies with clear conceptual equivalents in Wikidata gain the most; highly proprietary concepts that have no direct equivalent gain less, but can still benefit from related-to connections.
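Lou's exact JSON-LD syntax wasn't captured in these notes, but the general pattern is well established: wrap your concept in a Schema.org `DefinedTerm` and point `sameAs` (or a weaker `about` relationship) at the matched Wikidata entity. A minimal sketch — the term name, description, and Wikidata ID below are illustrative placeholders, not the demo's actual mapping:

```python
import json

# Sketch of a JSON-LD block linking a proprietary concept to Wikidata.
# The Q-number below is a placeholder — replace it with the entity you
# matched during your own Wikidata search.
concept = {
    "@context": "https://schema.org",
    "@type": "DefinedTerm",
    "name": "GEARS Ontology",                # your proprietary term
    "description": "Short public definition of the concept.",
    # Exact conceptual match ("same-as"):
    "sameAs": "https://www.wikidata.org/entity/Q000000",
    # Weaker linkage when no exact equivalent exists ("related-to"):
    "about": {"@id": "https://www.wikidata.org/entity/Q000000"},
}

print(json.dumps(concept, indent=2))  # paste into a <script type="application/ld+json"> tag
```

For a highly proprietary concept with no exact Wikidata equivalent, drop the `sameAs` claim and keep only the weaker `about` link — an inaccurate same-as assertion can hurt more than no link at all.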

Donald asked about the mechanics of keeping Wikidata mappings current as your frameworks evolve. Lou’s answer: version the ontology, and re-run the matching when concepts change substantially. The initial mapping is the expensive part; updates are incremental.
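Lou's incremental-update advice can be sketched as a simple diff between ontology versions: only concepts that are new or whose definitions changed need re-matching. The data shapes and concept names here are illustrative — his actual tooling wasn't shown in detail:

```python
# Compare two versions of an ontology (concept name -> definition) and
# flag only the entries that need their Wikidata mapping re-run.

def concepts_to_rematch(old: dict[str, str], new: dict[str, str]) -> set[str]:
    """Return concept names that are new or whose definition changed."""
    return {name for name, definition in new.items()
            if old.get(name) != definition}

v1 = {"GEO": "Generative Engine Optimization",
      "Grounding": "Context-locked AI answers"}
v2 = {"GEO": "Optimizing visibility in AI-generated answers",  # changed
      "Grounding": "Context-locked AI answers",                # unchanged
      "Ontology Versioning": "Tracking concept drift"}         # new

print(sorted(concepts_to_rematch(v1, v2)))  # changed + new concepts only
```

This keeps the expensive step — human judgment about which Wikidata entity is the right match — scoped to what actually moved between versions.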

Deep Dive: Insight - Anchor Your Authority in the Knowledge Graph Before You Need It — Why semantic matching to Wikidata is the structural foundation of GEO authority, and how to do it before it feels urgent.

💡 What This Means for You

Identify your three most important proprietary concepts or frameworks. Search Wikidata for the closest equivalent entities. Even if the match is imperfect, a related-to relationship is better than nothing. This is the first step toward making your expertise legible to AI engines.


The /grounded Command

Lou introduced a slash command he’d built that changes how AI handles evidence-dependent queries. The core problem it solves: when you ask an AI to answer a question about a specific document, client, or situation, it often drifts — mixing in general knowledge, hallucinating details, or answering the question it thinks you’re asking rather than the one that’s actually answerable from the source material.

The /grounded command locks the AI to a specified context. When you invoke it with a source (a file, URL, glob pattern, web search, or the current conversation), the AI is constrained to answer only from that source. If the answer isn’t in the source, it says so rather than filling the gap with inference.

Lou showed three variants — Light, Standard, and Heavy — with progressively stronger constraints on the AI’s use of external knowledge. The Heavy variant is particularly useful for client work where accuracy is non-negotiable: it requires the AI to cite the specific passage supporting each claim, making it easy to verify outputs before sharing.

The group immediately saw applications: coaching session debriefs, contract reviews, research synthesis — anything where you need to trust that the AI is staying inside the evidence. Kasimir noted it could function as a quality control layer for client-facing content.

Deep Dive: Insight - The Grounded Query Principle — Context-Locked Answers Reduce Hallucination and Increase Trust — The design principle behind context-locked AI responses and when to apply each variant.

💡 What This Means for You

Identify one recurring task where AI hallucination is a real risk for you — a task where wrong answers have real consequences. Apply the /grounded command with the Heavy variant. Notice whether the AI’s outputs feel different, and whether the citation requirement catches anything unexpected.


The Expert Identity Shift

The most provocative thread of the session wasn’t a demo — it was Lou’s extended argument that the “information arbitrage” model of expertise is dead. The old model: you are valuable because you know things others don’t. The new model: you are valuable because of what you can do with AI and your judgment about what matters.

Lou was pointed: coaching, consulting, and knowledge work that derives its value purely from the scarcity of information is in genuine trouble. AI doesn’t just democratize access to information — it makes the AI itself capable of synthesizing and applying that information. The new moat isn’t what you know, it’s your proprietary data (client relationships, frameworks, methodologies, accumulated judgment) and your ability to orchestrate AI in service of client outcomes.

The group pushed back productively. Don Back noted that many clients still can’t evaluate quality — they need someone to trust, and that trust relationship is a moat that AI can’t replicate. Bally observed that the coaching relationship itself — the human connection, the accountability, the personalization — remains distinctly human. Lou agreed on both counts, but held the broader point: if your value proposition is primarily informational, it’s being compressed.

This discussion didn’t produce a clean resolution, and it wasn’t meant to. The question it raised is one every knowledge entrepreneur needs to sit with: where, specifically, does my value come from — and which parts of that are durable?

💡 What This Means for You

Write down your current value proposition in one sentence. Then ask: how much of this depends on information I have that others don’t? How much depends on relationships? How much depends on judgment that’s hard to codify? The balance of that answer tells you where your work is most defensible.


AIMM GitHub: Skills Repository Walkthrough

Lou walked members through the shared AIMM GitHub repository — how to access it, clone it, and contribute skills. For members who hadn’t yet accepted their invitations, Lou re-sent them during the call.

The practical workflow: clone the repository to your local machine using VS Code, navigate the skills and prompts folders, add your own skills in the correct folder structure, then commit and push. Lou demonstrated the entire process, including how to use the VS Code GitHub panel for members unfamiliar with command-line Git.

The key architectural note: anything you want Claude to use when running a skill goes inside the skill’s folder. Supporting documents and reference materials that aren’t part of the skill’s execution go outside it. The distinction matters for keeping context lean when the skill loads.
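The inside/outside distinction can be sketched as a directory layout. The folder and file names here are illustrative (though `SKILL.md` is the conventional entry file for Claude skills); the AIMM repo's actual structure may differ:

```python
# Sketch of the folder convention from the session: files Claude needs
# at execution time live inside the skill's folder; background reference
# material lives outside it, so it never loads into context with the skill.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())

# Inside the skill folder: loaded when the skill runs.
skill = root / "skills" / "session-recap"
skill.mkdir(parents=True)
(skill / "SKILL.md").write_text("Instructions Claude loads when the skill runs.\n")
(skill / "template.md").write_text("Output template used during execution.\n")

# Outside the skill folder: reference material that stays out of context.
docs = root / "docs"
docs.mkdir()
(docs / "design-notes.md").write_text("Why the skill is structured this way.\n")

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*.md")))
```

Keeping reference material outside the skill folder is what keeps the loaded context lean — everything inside the folder is a candidate for the model's working context every time the skill fires.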

💡 What This Means for You

If you haven’t yet accepted your AIMM GitHub invitation, do it this week and clone the repository. Even if you don’t have a skill ready to contribute, browsing what others have built is a fast way to see what’s possible and adapt it for your own work.


Community Corner

Donald Kihenja asked a sharp question during the Wikidata demo about how to keep semantic mappings current as frameworks evolve — a practical concern that shows how deeply he’s thinking about the long-term maintenance of a GEO architecture, not just its initial setup.

Kasimir immediately identified the /grounded command as a QC layer for client work, connecting it to his existing quality control framework. The cross-referencing of concepts from session to session shows how much compounding is happening in this group.


Resources
  • AIMM GitHub Skills Repository — shared repository for mastermind members; Lou provided invitations during the call (private link via GitHub invite)
  • “A Simple Prompt Trick” — the Towards AI article discussed last session, referenced again this week: pub.towardsai.net

Try This Before Next Session

Apply the /grounded command to one real piece of client or professional work.

  1. Pick a task where accuracy is important and hallucination is a real risk — a client summary, a contract review, a research synthesis.
  2. Invoke /grounded with the relevant source material.
  3. Use the Heavy variant and require citation for each claim.
  4. Before delivering the output, verify two or three citations against the source.
  5. Note whether the AI caught anything it would otherwise have gotten wrong, and whether it flagged gaps where the source didn’t support the question.

Bring your findings to next session.


Open Threads

  • Where exactly is the line between expertise-as-information (compressible by AI) and expertise-as-judgment (durable)? How do you communicate that distinction to clients who may not understand it yet?
  • How do you maintain Wikidata semantic mappings as your frameworks evolve — and at what point does an ontology become stale enough to hurt rather than help your GEO authority?
  • What’s the right cadence for auditing and updating system prompts to prevent drift accumulation over time?

Next session: February 19, 2026


