Original Insight

“Do the MVP first. And when you get that working, expand to include the Internet. But now you’re taking the MVP-plus-plus version and it’s creating headaches.” — Kasimir Hedstrom

“When you open up the possibility of bringing in anything from the Internet as a search result, you open up the potential for hallucination.” — Lou

Expanded Synthesis

There is a seductive trap in building AI-powered tools: the desire to make them capable of everything before releasing them to anyone. Dirk’s experience building a voice chatbot on ElevenLabs perfectly illustrated what happens when ambitious scope collides with real-world usage — the bot confidently announced that Dirk earned €500k per year, a hallucination that was both embarrassing and entirely preventable.

The MVP First principle is not just a product development strategy. It is a cognitive discipline that high-performers must apply to any AI tool they build or deploy. The moment you grant a chatbot access to the open Internet, you have introduced an infinite variable. You no longer control the inputs, which means you cannot control the outputs. You have shifted from builder to bystander.

Why high-performers fall into this trap: Ambitious people build for the edge case. They think about the sophisticated CEO who might ask a niche industry question about semiconductor firms in Hamburg. So they give the bot Internet access “just in case.” What they actually create is a liability — a system that will confidently answer anything, right some of the time and catastrophically wrong the rest.

The psychological mechanism at play: When we’re building something, we’re optimizing for theoretical completeness. We want it to handle everything. But our users are optimizing for trust. They need it to be right about the things it says, not capable of speaking to everything. These are fundamentally different design philosophies, and only one of them actually serves the relationship.

The scope-control principle in practice: Start with a closed knowledge base. That means only the information you’ve personally vetted, organized, and uploaded. When users ask questions that aren’t in the knowledge base, that becomes a feature: the bot says “I don’t have that yet, but I’ll look into it,” and the conversation log gives you a roadmap of exactly what to add next. This approach turns user confusion into product intelligence.
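The closed-knowledge-base pattern above can be sketched in a few lines. This is a minimal illustration, not a real chatbot: the topic names, the fallback wording, and the `unanswered` log are all assumptions for the example.

```python
# Illustrative closed knowledge base: only personally vetted content.
KNOWLEDGE_BASE = {
    "pricing": "Coaching packages start with a free discovery call.",
    "scheduling": "Sessions are booked through the online calendar.",
}

unanswered = []  # log of gaps -- this becomes your expansion roadmap

def answer(topic: str) -> str:
    """Reply only from vetted content; log everything else as a gap."""
    if topic in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[topic]
    unanswered.append(topic)  # turns user confusion into product intelligence
    return "I don't have that yet, but I'll look into it."

print(answer("pricing"))
print(answer("tax law"))  # not vetted -> honest fallback, logged for review
```

The key design choice is that the fallback path writes to a log rather than guessing, so the “I don’t know” moments accumulate into a prioritized to-do list.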

On hallucination management: Lou’s three-part framework is worth committing to memory. First, tell the model to stay within its context and not fabricate. Second, require it to cite sources for anything it retrieves. Third, if you do allow Internet access, instruct it to explicitly say “a recent Internet search shows…” so users understand the provenance and weight of what they’re hearing. Transparent sourcing doesn’t weaken the bot — it builds the kind of trust that makes people keep coming back.
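One way to operationalize the three-part framework is to bake it into the system prompt. The wording below is an assumption, not Lou’s exact text, and `build_messages` is a hypothetical helper; adapt both to whatever platform you deploy on.

```python
# Sketch of the three-part guardrail framework as a system prompt.
# The exact wording is an assumption -- tune it for your own platform.
SYSTEM_PROMPT = """\
1. Answer only from the provided context. If the answer is not in the
   context, say you don't know -- do not fabricate.
2. Cite the source document for every fact you retrieve.
3. If a fact comes from a live Internet search, preface it with
   "A recent Internet search shows..." so the user knows its provenance.
"""

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble a chat payload with the guardrail prompt placed first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

msgs = build_messages("Dirk's vetted bio text...", "What does Dirk earn?")
print(msgs[0]["role"])
```

Keeping the guardrails in one named constant also makes them auditable: you can review and version the rules separately from the rest of the tool.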

For coaches specifically: This principle applies directly to any AI coaching tool, FAQ bot, or client-facing assistant you might build. The temptation to make your bot a know-it-all is strong. Resist it. A coaching bot that says “that’s outside what I know, let’s schedule a conversation to dig into it” is more powerful than one that fabricates an answer. The former deepens the relationship; the latter destroys it.

The momentum dimension: There is something deeply valuable about shipping something that works, even if it only works for a narrow slice of what you eventually want it to do. Dirk’s bot amazed his first testers. That delight is real momentum. Feature creep before launch kills momentum. Ship the narrow thing that works, collect the delight, and build forward from there.

Practical Application for PowerUp Clients

The Chatbot Scope Audit (use before building or launching any AI tool):

  1. List every topic your AI tool might be asked about
  2. Mark each one: OWNED (you have the source material) or OPEN (requires Internet or inference)
  3. For your MVP, delete everything in the OPEN column — build only on OWNED content
  4. Define a response for OPEN questions: “I don’t have that yet. Here’s what I do know…”
  5. Set a review date: after 30 days, look at the questions the bot couldn’t answer — those become your expansion priorities
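The audit in steps 1–4 can be kept as plain data, so the OWNED/OPEN split is explicit rather than implicit. The topics and labels below are illustrative assumptions, not a real client’s list.

```python
# Sketch of the Chatbot Scope Audit as data. Topics are illustrative.
topics = {
    "my coaching framework": "OWNED",
    "session pricing": "OWNED",
    "semiconductor firms in Hamburg": "OPEN",  # edge case -- cut from the MVP
}

FALLBACK = "I don't have that yet. Here's what I do know..."

# Step 3: the MVP ships only the OWNED column.
mvp_scope = [t for t, status in topics.items() if status == "OWNED"]

# Steps 4-5: OPEN topics get the honest fallback now and a review later.
open_topics = [t for t, status in topics.items() if status == "OPEN"]

print(mvp_scope)   # build only on these
print(open_topics) # revisit at the 30-day review
```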

Journal prompts for coaches building AI tools:

  • Where am I building for the edge case instead of the core case?
  • If my tool were wrong about something, what would the cost be to my client relationship?
  • What is the minimal version that would still create real value and real delight?

Coaching question to use with clients who are overcomplicating their offers or tools: “What’s the one thing this needs to do perfectly before you add anything else?”

Evolution Across Sessions

This is a foundational principle appearing for the first time in the July mastermind series. It establishes a baseline for all subsequent AI tool discussions: build narrow, build vetted, build trustworthy. Future sessions will likely revisit this as members begin deploying their tools to real clients.

Next Actions

  • For me (Lou): Create a simple pre-launch checklist template for AI tools covering knowledge base hygiene, hallucination guardrails, and scope discipline — share with mastermind group
  • For clients: Before building any client-facing AI tool, complete the Chatbot Scope Audit above; don’t skip the step of defining what the bot says when it doesn’t know something