2025-07-31 AI Mastermind
Session Overview
Lou opened with a time-check reminder: July 31 is five months from year-end. The group was invited to use this as a forcing function for quarterly and monthly goal planning before diving into the session’s main content.
The session was the most architecturally ambitious of the series, structured around a single extended teaching presentation by Lou: the meta-prompting framework, its foundations in context engineering, and its practical application for knowledge entrepreneurs. It revisited and deepened the “beyond prompting” presentation from an earlier session, now extended with live examples from Lou’s work that week.
The central argument: prompts are fragile and ephemeral. Models update, audiences shift, context changes, and a prompt that worked brilliantly last month may produce mediocre output today. The solution is not better prompts but better processes — systems that generate the right prompts dynamically, based on current context, regardless of what has changed. Lou framed this as a three-level hierarchy: task agents (do the work), domain orchestrators (know when to call which task agents and how to build their context), and system architects (interview the user and generate domain orchestrators). The goal is to operate at the architect level.
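The three-level hierarchy can be sketched as a minimal class structure. This is an illustration of the idea, not Lou's implementation; the `llm` helper is a deterministic stand-in for a real model call, and all names and prompt wordings are assumptions.

```python
from dataclasses import dataclass

def llm(prompt: str) -> str:
    """Deterministic stand-in for a real model call (illustrative only)."""
    return f"<output for: {prompt}>"

@dataclass
class TaskAgent:
    """Level 1: does one concrete job once its context is assembled."""
    name: str
    prompt_template: str  # expects a {context} slot

    def run(self, context: str) -> str:
        return llm(self.prompt_template.format(context=context))

@dataclass
class DomainOrchestrator:
    """Level 2: knows which agent to call and how to build its context."""
    agents: dict  # task name -> TaskAgent

    def handle(self, task: str, request: str) -> str:
        context = self.build_context(request)  # fresh context on every call
        return self.agents[task].run(context)

    def build_context(self, request: str) -> str:
        # A real system would pull in current docs, history, audience data, etc.
        return f"Request: {request}"

class SystemArchitect:
    """Level 3: interviews the user, then generates a domain orchestrator."""
    def generate(self, answers: dict) -> DomainOrchestrator:
        agents = {
            name: TaskAgent(name, f"You are a {name} specialist. {{context}}")
            for name in answers["tasks"]
        }
        return DomainOrchestrator(agents)

# Operating at the architect level: the system generates itself from an interview.
architect = SystemArchitect()
orchestrator = architect.generate({"tasks": ["drafting", "editing"]})
result = orchestrator.handle("drafting", "article on process vs. prompts")
```

The point the sketch makes concrete: nothing at Level 1 or Level 2 is hand-written; both are emitted by the architect from interview answers, so when context changes, you regenerate rather than repair.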
The live demonstration used an article on “process vs. prompts” as the working example: Lou showed how a 10-minute context engineering sequence (thesis statement → expert debate simulation → debate analysis → article request) produced a finished 1,500-word article that addressed all the relevant angles, pre-empted objections, and embodied a clear perspective — without Lou writing a word of the article itself.
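The four-step sequence reads as a small pipeline in which each step's output becomes part of the next step's context. A sketch under stated assumptions: the `llm` helper is a placeholder for a real model call, and the prompt wordings are approximations, not Lou's exact prompts.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with an actual API client."""
    return f"<response to: {prompt[:40]}...>"

def article_process(thesis: str) -> str:
    # Step 1: the thesis statement anchors everything that follows.
    # Step 2: expert debate simulation argues both sides, activating
    # relevant model knowledge by bringing it into context.
    debate = llm(f"Simulate a debate between two experts on: {thesis}. "
                 "Have each side make its strongest case and rebuttals.")
    # Step 3: a judge's analysis distills winning points and open objections.
    analysis = llm("As an impartial judge, analyze this debate and list the "
                   f"winning arguments and unresolved objections:\n{debate}")
    # Step 4: the article request writes from accumulated context, not from scratch.
    return llm(f"Write a 1,500-word article arguing: {thesis}\n"
               f"Address these points and pre-empt these objections:\n{analysis}")

article = article_process("Process beats prompts for durable AI workflows")
```

Note that the article prompt never lists the angles to cover; they arrive via the debate and the judge's analysis, which is what makes the final output address objections the author never typed.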
The discussion also touched on model selection strategy: Lou’s current preference is O3 for reasoning/brainstorming (deepest thinking), Grok 3 for thinking + speed (reluctantly acknowledged as excellent), and Claude for writing and final polish.
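That selection logic amounts to a simple routing table. The mapping below is a sketch; the model identifier strings are informal labels from the session, not exact API model names.

```python
# Task-kind -> preferred model, per the session's discussion (labels, not API names).
MODEL_ROUTES = {
    "reasoning": "o3",           # deepest thinking / brainstorming
    "fast_reasoning": "grok-3",  # thinking plus speed
    "writing": "claude",         # drafting, final polish, and code
}

def pick_model(task_kind: str) -> str:
    """Route a task to its preferred model, defaulting to the writing model."""
    return MODEL_ROUTES.get(task_kind, MODEL_ROUTES["writing"])
```

One workflow caveat from later in the session applies here: accumulated context does not follow you when you switch models mid-session, so routing decisions are best made per task, before context is built up.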
High-Signal Moments
- “Prompts are fragile — they age, get filtered, or lose effectiveness as models evolve. Process, however, is adaptive.” — the foundational argument
- The three-level architecture: Task Agent → Domain Orchestrator → System Architect — operating at the Architect level means the system generates itself
- The bucket-to-pipeline metaphor: carrying water daily versus building the pipeline. “We’re not prompting at the pipeline level anymore — we’re prompting at the level of what’s a solution to the bucket problem.”
- Context engineering defined: designing a system that dynamically assembles the right context before task execution — not just adding context manually
- The 10-minute article demonstration: expert debate simulation → judge’s analysis → article generation. Total process: 10 minutes. Result: a publishable 1,500-word article
- “Activate the latent semantic space” — the expert debate doesn’t just gather information, it activates relevant model knowledge by bringing both sides of the conversation into context
- The coaching co-pilot example: client asks about pricing → AI classifies business model → identifies likely cognitive/emotional blocks → pulls relevant case studies → frames coaching question. The AI follows your methodology.
- Lou’s model selection logic: O3 for thinking, Grok 3 for thinking + speed, Claude for writing and code
- The insight that Claude’s context is lost when switching models within a session — practical implication for workflow design
- Year-end check-in: 5 months remaining; Lou prompted everyone to define monthly and quarterly goals now
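The coaching co-pilot reasoning chain from the list above can be sketched as a fixed sequence of classification and retrieval steps. All function names, categories, and the `llm` stub are illustrative assumptions, not the actual co-pilot.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"<answer: {prompt[:30]}>"

def copilot(client_question: str, case_studies: dict) -> str:
    # 1. Classify the client's business model from the question.
    model = llm(f"Classify the business model implied by: {client_question}. "
                "Answer with one of: service, product, coaching.")
    # 2. Identify likely cognitive/emotional blocks for that model.
    blocks = llm(f"For a {model} business asking '{client_question}', "
                 "list the most likely cognitive or emotional blocks.")
    # 3. Pull relevant case studies (naive keyword lookup stands in for retrieval).
    relevant = [s for key, s in case_studies.items()
                if key in client_question.lower()]
    # 4. Frame a coaching question following the coach's own methodology.
    return llm("Frame one powerful coaching question given:\n"
               f"blocks: {blocks}\ncase studies: {relevant}")

question = copilot("How should I price my group program?",
                   {"price": "Case: raising prices doubled a client's margin."})
```

The chain is the methodology: the individual prompts can change as models evolve, but the classify-diagnose-retrieve-frame sequence is what makes the AI follow the coach rather than improvise.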
Open Questions
- How do you build a Level 3 (System Architect) prompt that is genuinely domain-agnostic without becoming too abstract to be useful?
- At what point does the meta-prompting investment make sense versus just running good interactive conversations with AI?
- How should coaches encode their unique methodology into a co-pilot system without losing the flexibility to handle novel client situations?
- What is the right way to version-control meta prompts as models and audiences evolve?
- How does context engineering change when you introduce real-time web search versus static knowledge base retrieval?
Suggested Follow-Through
- Build your first meta prompt using the five-step protocol from the insight page — start with your single most-repeated AI task
- Run the 10-minute article process on your next piece of thought leadership content — document how the output compares to your standard prompting approach
- Try the expert debate simulation technique on a topic where you need to develop a clear position
- Review your prompts from the past month: which ones are “carrying the bucket” that could become pipelines?
- Begin mapping your coaching methodology as a co-pilot reasoning chain: what are the 3-5 steps you always take when a client brings you a specific type of problem?
- Set explicit Q3 and Q4 business goals with monthly milestones — take Lou’s year-end check-in seriously
Additional Resources
Links & Tools Shared in Chat
- PromptLayer — prompt versioning and blueprint management platform — https://docs.promptlayer.com/running-requests/prompt-blueprints (shared by Bally Binning; “looks comprehensive” — Don Back)
- Zentube Affirmations app (Firebase-deployed demo) — https://studio--zentube-affirmations.us-central1.hosted.app/ (shared by Donald Kihenja as a live example of a Firebase Studio deployment)
- Netlify — deployment platform for web apps (mentioned by Donald Kihenja as an alternative to Firebase for hosting)
Ideas from Chat
- Don Back proposed an AI-powered behavioral interview coach: AI poses situational questions, client speaks response, AI evaluates answer quality and guides improvement — see Insight - Use AI to Simulate Behavioral Interviews Before They Happen
- “Meta AI Cognition” — Bally Binning’s phrase for the meta-prompting concept; worth adopting as a teaching label for non-technical audiences
- Don Back: “This is a system to get beyond the general passive pap that we tend to get out of AIs and that is now polluting the content platforms” — a quotable framing of why process > prompts matters
- Donald Kihenja: “The next level is a generator that takes only one prompt: ‘Please read my mind and go from there’” — a memorable way to describe the aspirational end-state of a well-trained personal AI context
- Don Back observed that the coaching co-pilot approach could generalize to employment interviews — evaluating role descriptions against candidate responses as a dynamic feedback loop
Derived Artifacts
- adaptive-tutor (Adaptive Tutor — competency-gated AI tutoring)
- prompt-abstraction-ladder (Prompt Abstraction Ladder — three levels of prompt abstraction)