2025-11-27 AI Mastermind
Table of Contents
- Insight - Choose Your Abstraction Layer Before You Build
- Insight - The Psychographic FAQ as Authority Infrastructure
- Insight - Voice AI Works When It Removes Human Fatigue From Repetitive Interactions
Session Overview
The November 27 session opened with Kasimir Hedstrom sharing his experience building a working interface in under an hour using Claude Code, Anthropic's command-line coding agent. This sparked a substantive discussion about the “abstraction ladder” of AI tooling — from raw code to vibe coding to n8n to Make.com to end-user apps — and the importance of honestly placing yourself in that hierarchy before choosing a tool. Lou offered a balanced assessment: vibe coding is genuinely powerful for those with a technical background, but it’s not the hands-off magic that social media demos suggest. The ability to interpret errors, understand authentication methods, and manage technical environments remains essential at that layer.
The session then moved into what became its most forward-looking contribution: Lou’s introduction of the Psychographic Hub concept — a GEO-optimized (generative engine optimization) FAQ page built from Ideal Client Handbook data. Lou described vibe-coding this application in collaboration with Grok, producing a tool that takes ICH output, extracts entities and causal relationships, generates ~400 questions in the client’s authentic voice, and embeds them in JSON-LD schema for AI engine discovery. The claim: Google and the major LLMs begin citing you as an authority on your client’s exact questions, without backlinks or traditional SEO.
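To make the JSON-LD embedding concrete, here is a minimal sketch of what a schema.org `FAQPage` block built from question/answer pairs could look like. This is not Lou’s actual implementation; the `build_faq_jsonld` helper and the sample questions are hypothetical stand-ins for ICH-derived content, but the `@context`/`@type`/`mainEntity` structure follows the published schema.org FAQPage format that AI engines and search crawlers parse.

```python
import json


def build_faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }


# Hypothetical ICH-derived questions; the real app generates ~400 of these.
pairs = [
    ("How long does a coaching engagement usually last?",
     "Most engagements run three to six months, depending on scope."),
    ("Do you work with non-technical founders?",
     "Yes; the process is designed to meet clients at their current level."),
]

schema = build_faq_jsonld(pairs)

# The resulting JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag for crawler discovery.
print(json.dumps(schema, indent=2))
```

The interesting part of Lou’s design is upstream of this step — extracting entities and causal relationships from ICH data so the questions are asked in the client’s authentic voice; the JSON-LD wrapper itself is the comparatively mechanical final mile.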
Bally Binning contributed a compelling real-world case study: the DORA voice AI system deployed by a Newcastle startup with the NHS for post-cataract surgery follow-up. The results — patient wait times down from 35 to 10 weeks, 92% patient satisfaction among an older demographic — generated a rich discussion about where AI automation genuinely creates value: removing the human fatigue cost of high-volume, low-stakes repetitive interactions, so humans can concentrate their energy on the interactions that require genuine presence.
High-Signal Moments
- Lou’s taxonomy of AI abstraction layers: code → CLI → n8n → Make.com → end-user app; the importance of knowing which layer you’re at before choosing tools
- “It’s not as easy as we make it sound” — Lou’s honest pushback on vibe-coding hype, particularly on the hidden knowledge required to debug authentication, containers, and API integration
- The Psychographic Hub reveal: 400-question FAQ built from ICH data, embedding causal entity relationships in JSON-LD schema, targeting AI engine knowledge graph authority
- Lou’s lesson from his failed live demo: always specify requirements properly before coding, not during; “the foundation wasn’t laid properly” is a teaching moment applicable far beyond software
- DORA NHS case study: AI voice agent reduced post-surgery follow-up waits from 35 to 10 weeks; 92% satisfaction among older patients who knew it was AI
- Don Back: “If you cause additional work for humans, they will resist and kill it. Make their job easier and everybody wins.”
- Elizabeth Stief’s discovery: Perplexity artifacts can be saved directly to Google Drive — a small but practically valuable workflow shortcut
- Claude’s memory feature discussion: enabling cross-conversation context retrieval, eliminating the manual “handover artifact” process
Open Questions
- How do we validate that the Psychographic FAQ schema signals are genuinely being picked up by AI engines, and on what timeline? What are the leading indicators to monitor?
- At what scale does voice AI become viable for a solo coaching practice — what’s the minimum viable implementation before it degrades client experience?
- How do we help clients who are genuinely non-technical engage with GEO strategy without overwhelming them with schema/JSON-LD complexity?
- What are the privacy and security implications of storing client-related knowledge in cloud-based AI memory systems?
- What’s the right workflow for capturing high-signal AI conversation artifacts into a persistent, searchable knowledge base?
Suggested Follow-Through
- Publish the GEO FAQ app to the mastermind group for testing — gather real-world feedback on schema quality and AI engine citation results within 4–6 weeks.
- Run the Abstraction Audit with at least two current PowerUp clients who are confused about which AI tools to adopt; use it to make a concrete recommendation.
- Enable Claude memory for all mastermind members not yet using it; share setup instructions in Telegram.
- Research voice agent platforms (Retell AI, Bland AI) as potential tools for coaching practice follow-up automation.
- Document the failed demo lesson as a teaching case: the importance of requirement specification before vibe coding, written up as a 1-page framework for the mastermind.
Additional Resources
Links & Tools Shared in Chat
- BBC article on AI (unspecified) — shared by Bally Binning; Don Back noted it was a “good article” and sent it along to others. Likely related to AI in the workplace or organizational AI adoption given the session’s themes.
Books & Articles Mentioned
- “Flash Boys” by Michael Lewis — mentioned by Don Back; context: referenced during a conversation about AI in high-stakes/regulated domains (Don had mentioned Alberta using ChatGPT to draft legislation, and the group was discussing financial/legal AI applications). Flash Boys covers high-frequency trading and systemic opacity — applicable as a cautionary reference for AI in consequential systems.
Ideas from Chat
- Voice bots for high-volume client calls: Kasimir suggested to Don Back — “Don, use voice bots for the calls” — in the context of Don managing a large number of client interactions. A practical pointer toward AI voice agent tools (Retell AI, Bland AI) explored in the session.
- Wabi Sabi in authority articles: Don Back raised “Wabi Sabi in authority articles?” — the concept of intentional imperfection as a quality signal in AI-era content. Potentially worth exploring as a counterpoint to over-polished AI-generated writing; human imperfection as authenticity marker.
- University of Alberta AI in law group: Don Back offered to connect Lou with a University of Alberta group that has been applying AI in law for ~10 years. A research/collaboration lead for AI legal applications.