“I said, audit the plan and create a revised draft, and it actually came up with 18 different problems in its plan that it hadn’t detected before.” — Lou
Session context: 2026-04-16_Mastermind — Lou described how he asked Claude to audit its own implementation plan for integrating the AAR system into his knowledge vault, and the model found 18 issues it had missed in the initial generation.
Core Idea
AI models generate plans with high confidence and apparent thoroughness. But a plan that looks complete is not the same as a plan that IS complete. When Lou asked Claude to audit its own plan — the plan it had just produced and presented as the best approach — it found 18 problems. Not minor formatting issues. Structural gaps, missing edge cases, conflicting assumptions.
This reveals a critical asymmetry in how LLMs work: generation and evaluation draw on different patterns. In generation mode, the model optimizes for a coherent, complete-sounding narrative. In evaluation mode, it optimizes for surfacing gaps and contradictions. The same model, asked to critique its own work, will reliably find problems it didn't prevent during creation. This isn't a bug; it's a feature you can exploit.
The minimum viable workflow is three steps: Plan → Audit → Revise. Never skip the middle step. The audit keyword is the trigger that shifts the model from generative to evaluative mode. Lou emphasized this as one of the most practically valuable patterns in the session: “That’s why I encourage you always, when you think you’re at the point where you’re about to implement, use that audit keyword.”
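A minimal sketch of that chain in Python, assuming the Anthropic SDK. The `ask()` helper, the model name, and the exact prompt wording are illustrative assumptions, not Lou's actual setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute whatever model you use


def ask(prompt: str, history: list[dict]) -> str:
    """Send the next prompt in a running conversation and record the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model=MODEL, max_tokens=4096, messages=history
    ).content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply


def plan_audit_revise(task: str) -> str:
    """Plan -> Audit -> Revise. Never skip the middle step."""
    history: list[dict] = []
    # Step 1: Plan. The model optimizes for a coherent, complete-sounding draft.
    ask(f"Create an implementation plan for: {task}", history)
    # Step 2: Audit. The keyword shifts the model into evaluative mode.
    ask("Audit this plan. List every problem you find.", history)
    # Step 3: Revise. Fold each audited problem back into the plan.
    return ask("Create a revised draft that addresses each problem.", history)
```

The key design point is that all three steps share one conversation history, so the audit evaluates the exact plan the model just produced rather than a summary of it.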
Practical Application
Before implementing any AI-generated plan, add one step: “Audit this plan. Find every gap, conflict, missing dependency, unstated assumption, and potential failure mode. Then create a revised draft that addresses each one.” This works for implementation plans, content outlines, skill designs, business strategies — any structured output where errors are expensive to fix after the fact. The cost is one additional prompt. The payoff is catching problems before they become bugs.
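If you script the pattern, the audit step is one reusable template. A sketch reusing the hypothetical `ask()` helper above; `AUDIT_PROMPT` and the function name are illustrative:

```python
# The audit prompt quoted above, as a reusable template (illustrative name).
AUDIT_PROMPT = (
    "Audit this plan. Find every gap, conflict, missing dependency, "
    "unstated assumption, and potential failure mode. Then create a "
    "revised draft that addresses each one."
)


def audit_before_implementing(history: list[dict]) -> str:
    """The one extra prompt: run it on the conversation that produced the plan."""
    return ask(AUDIT_PROMPT, history)
```

That single extra call is the entire cost of the pattern; the revised draft comes back with each finding addressed.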
Related Insights
- Insight - The Quality Gate Pattern — Embed 9-10 Self-Evaluation at Every Pipeline Handoff — the broader pattern of self-evaluation at every stage
- Insight - The Conversation Audit Technique — Never Let a Session’s Fixes Evaporate — auditing conversations, not just plans
- Insight - The Skeptic Command — Stress-Testing AI Answers Before You Act on Them — another critical-evaluation technique
- Insight - Spec-First Vibe Coding — Multi-Model Architecture Review Before Writing a Line of Code — auditing before building in a coding context
Evolution Across Sessions
This builds on Insight - The Quality Gate Pattern — Embed 9-10 Self-Evaluation at Every Pipeline Handoff (2026-04-09), which established quality gates as a general principle. The new development is the specific “plan-audit-revise” workflow as a minimum viable implementation of that principle, with the concrete evidence that a single audit pass found 18 problems in a plan the model had presented as ready to implement. This makes the abstract principle immediately actionable.