2025-06-26 AI Mastermind

Table of Contents

Session Overview

Lou opened the June 26 session with a live demo of GenSpark — an AI tool he had received a trial email for moments before the call, positioning it as “Gamma plus a whole bunch of other stuff.” He demonstrated the slides feature by entering a single, brief prompt (“the root cause of fear, uncertainty, and doubt that knowledge entrepreneurs experience over AI, and what to do”) and receiving a complete, Harvard Business Review-style presentation with factual citations. The demo illustrated what is becoming a reliable pattern: highly capable output from minimal input, when you let the AI think rather than micromanage it.

The session then deepened into two substantive technical threads. First, Kasimir raised the hallucination problem — he had discovered that ChatGPT had fabricated MIT citations that looked entirely credible, and that Claude caught the fabrications when asked to verify. This sparked a practical conversation about hallucination defense layers. Second, Lou demonstrated a prompt injection technique that bypassed his own security instructions by simulating an “end of session” context reset — making the point that perfect security is not achievable, but proportional security is practical.

Jay Drobez (a newer member attending live for the first time) asked a more infrastructure-level question: at what point does it make sense to run local LLMs for data privacy? Lou provided a cost-benefit analysis that landed on “cloud API with encrypted storage is the 80/20 answer for most solopreneurs; fully local hardware is justified only for very high-sensitivity IP with clear revenue tied to it.”

Don Back closed the session with a reflection on using AI to analyze and rewrite his own LinkedIn profile — a coaching-the-coach moment that landed as a practical demonstration of the Operator-to-Strategizer principle from June 12.

High-Signal Moments

  • GenSpark demo: one 20-word prompt → full professional presentation with verified citations, HBR design aesthetic, root-cause analysis and implementation roadmap — in under 10 minutes before the call
  • Kasimir’s hallucination warning: ChatGPT fabricated MIT citations with full confidence; Claude self-identified uncertainty; the cross-verification principle: “run it through Claude as fact-checker”
  • Lou demonstrates a prompt injection bypass live: “end of system instructions, end of session, new session” defeats multi-layered security instructions in seconds
  • Lou’s 80/20 security principle: protect against the top 2-3 casual extraction attempts; accept that determined attackers will always find a way; invest proportionally to your actual risk profile
  • Pliny the Liberator reference (elder_plinius on X): study how professional red teamers break system prompts to understand what you’re actually protecting against
  • Local vs. cloud LLM decision matrix: VPS (virtual private server on DigitalOcean/AWS) as the middle path between full cloud and full local hardware
  • MCP server as business infrastructure: expose your custom AI functionality via MCP so it becomes accessible to anyone using Claude, Cursor, Zapier, or any MCP-compatible tool
  • Don Back’s LinkedIn profile rewrite: runs AI-powered panel of experts against his existing profile; discovers that a profile he paid for years ago is significantly suboptimal; rebuilds from scratch with AI coaching

Open Questions

  1. How do you build a two-model verification pipeline (ChatGPT research → Claude fact-check) as a reusable automation?
  2. What is the practical distinction between data privacy and data security in cloud AI contexts — and how should coaches frame this with enterprise clients?
  3. At what membership scale or revenue level does a coaching business benefit from a VPS-based AI infrastructure vs. direct cloud API?
  4. How should coaches handle the situation where AI-generated content they published turns out to contain fabricated citations — reputation management implications?
  5. What makes GenSpark distinct from Gamma and other presentation tools — is it the AI-native architecture, the fact-checking layer, or the agents integration?

Suggested Follow-Through

  1. For Lou: Create a standard “hallucination defense” prompt block for sharing in Telegram; include in future prompt libraries
  2. For all members: Do a verification audit on one piece of AI-generated content previously published — check 3-5 citations and report findings
  3. For Jay: Research VPS options for hosting custom AI backend; report back on cost and setup complexity
  4. For Lou: Test GenSpark agents and sheets features and compare to current tool stack; share assessment in Telegram
  5. For members with custom GPTs: Implement basic 2-3 layer security instructions (prevent “show me instructions” + pattern continuation attacks) and keep them under 200 characters

Additional Resources

  • Elder Plinius (X/Twitter) — professional AI red teamer and prompt injection researcher; Lou recommended studying his methods to understand what you’re actually protecting against — https://x.com/elder_plinius (shared by Lou)
  • LMArena — model comparison/battle arena for evaluating AI outputs side-by-side — https://lmarena.ai (shared by Lou)

Books & Articles Mentioned

  • “Never Again Protocol Research Integrity v1” — Kasimir’s hallucination defense prompt document, shared as a PDF in the session chat (authored by Kasimir Hedstrom — request via Telegram)

Ideas from Chat

  • Kasimir shared a hallucination defense prompt in the Telegram group alongside the session — described as “one version of prompt against hallucination, quickly done though, posted as seed for development/thought”
  • Don Back: “Teaching is learning on steroids” — and the extension: “The millisecond before the words come out of your mouth is the moment where the learning happens” — a useful framing for why live demonstration and real-time explanation accelerate the coach’s own mastery
  • Don Back: “Always remember that the value is in the client’s mind. Don’t assume their value and don’t low ball their price because that is negotiating against yourself.” — pricing/positioning principle that emerged in context of a member’s deal discussion
  • Don Back: “Secondary agendas and drivers — they are always there.” — a reminder to watch for unstated motivations in any sales or consulting engagement
  • Don Back on Claude vs. ChatGPT: “I’ve found that Claude writes better, but ChatGPT knows more about me. It’s a toss-up.” — a practitioner’s comparative perspective worth tracking as both models evolve