“I don’t think it’s because of the intelligence, I think it’s because of the tools.” — Lou
Session context: 2026-04-16_Mastermind — Dirk expressed frustration that Claude couldn’t perform a seemingly simple task: generating domain name ideas and checking their availability. Lou diagnosed the problem as a tooling gap, not an intelligence gap — Fire Crawl enabled the task in Co-work; without it, Claude Chat hallucinated availability.
Core Idea
When AI fails at a task, the instinctive reaction is “the model is stupid.” But more often than not, the model is capable — it just lacks the tools to execute. Dirk asked Claude Chat to check domain name availability. Claude Chat has no web scraping capability, no domain registrar API access, and limited web search. So it did what LLMs do when they can’t verify: it confabulated. It generated plausible-sounding results that turned out to be wrong.
The same task in Claude Co-work, with Fire Crawl enabled, worked for 5 of 6 domains on the first attempt — complete with real availability status and pricing from Namecheap. The model’s “intelligence” was identical. The tool access was different. That single variable turned a frustrating failure into a useful result.
This has broader implications for how knowledge entrepreneurs should diagnose AI problems. Before blaming the model, check three things: (1) Does it have the tools it needs? (2) Are those tools enabled in this interface? (3) Has it been told the tools exist? Donald Kihenja added another diagnostic: if you’ve created skills mid-conversation, you may need to restart the session for Claude to load them. Many “AI is broken” experiences are actually configuration problems.
Practical Application
When Claude fails at a task that seems like it should be straightforward, ask it: “What tools, connectors, MCPs, scripts, or APIs would you need to do this properly? Make a plan for how we’d set that up.” This diagnostic prompt shifts the interaction from complaining about the failure to solving the root cause. Often the answer is a single MCP server or API key away.
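As an illustration of how small that missing tool can be, here is a minimal sketch of a domain-availability heuristic built from a plain DNS lookup, the kind of check a single script or tool call can supply. The function name and the heuristic are assumptions for illustration, not the session's actual Fire Crawl setup, and DNS is only a rough proxy: a domain that resolves is definitely taken, but one that doesn't resolve may still be registered, so a registrar API (like the Namecheap lookup in the session) is needed for a real answer.

```python
import socket

def domain_status(domain: str) -> str:
    """Rough availability heuristic via DNS resolution.

    A domain that resolves is definitely registered. A domain that
    fails to resolve is only *possibly* available, since it may be
    registered without DNS records; a registrar API gives the
    authoritative answer.
    """
    try:
        socket.gethostbyname(domain)
        return "registered"
    except OSError:
        # NXDOMAIN or no network: we cannot confirm either way.
        return "possibly available"

if __name__ == "__main__":
    # Hypothetical candidate list, as in Dirk's brainstorming task.
    for name in ["example.com", "some-unlikely-name-xyz123.com"]:
        print(name, "->", domain_status(name))
```

The point is not the heuristic itself but the diagnostic: without a tool like this (or a scraper such as Fire Crawl), the model can only guess at availability; with it, the same model returns grounded results.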
Related Insights
- Insight - Choose Your Abstraction Layer Before You Build — choosing the right tool layer matters
- Insight - Extend Claude With Skills to Build Your Personal AI Ecosystem — building the tool ecosystem
- Insight - The Model Underneath Is the Multiplier, Not the Interface — the complementary insight: the model matters, but only after the tools are in place
Evolution Across Sessions
This establishes the baseline for a diagnostic framework: when AI fails, check tools before blaming intelligence. Prior sessions discussed model selection (Insight - The Model Underneath Is the Multiplier, Not the Interface) and skill ecosystems (Insight - Extend Claude With Skills to Build Your Personal AI Ecosystem), but the tooling-as-root-cause diagnostic was implicit. Dirk’s frustration and Lou’s live debugging made it explicit and testable.