Original Insight

“Not consciousness, but functional metacognition — that is something that apparently only humans used to be able to do, which is to think about how we think. Now we can verify explanations against internal states. We can actually ask the models to report what concepts were actually active while they were processing.” — Lou (describing a Claude research paper on transformer circuits)

Expanded Synthesis

A research paper Lou encountered just before the October 30 session described something remarkable: when researchers injected concepts into an LLM’s neural activations and then asked the model what it was “thinking about,” the model could identify those injected concepts even before they influenced its output. In other words, the model demonstrated something functionally equivalent to metacognition — awareness of its own thought process — before the thought had fully formed.

Lou’s response was characteristically practical: not philosophical speculation about AI consciousness, but an immediate exploration of what this means for people who build with AI. And the implications are significant.

What metacognition in AI means for prompting: If a language model can access and report on its internal conceptual states, you can prompt it to do exactly that — and get dramatically more useful outputs as a result. Rather than asking for an answer, you ask for the reasoning behind the answer, which reasoning framework is most active, and where confidence is high versus uncertain. This shifts the model from oracle to transparent collaborator.
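
As a minimal sketch, the shift looks like this in code (plain Python; the wording of the metacognitive suffix is an assumption, paraphrased from the techniques listed below):

```python
# A plain "oracle" prompt asks only for an answer.
oracle_prompt = "Should we move the coaching business to a subscription model?"

# A metacognitive suffix asks for the reasoning behind the answer instead:
# the active frameworks, and where confidence is high versus uncertain.
METACOGNITIVE_SUFFIX = (
    "\n\nBefore answering, identify the conceptual frameworks you are "
    "drawing on. As you reason, note which framework is most active at "
    "each step, and state explicitly where your confidence is high and "
    "where it is diffuse or uncertain."
)

def make_metacognitive(question: str) -> str:
    """Turn an oracle-style question into a transparent-collaborator prompt."""
    return question + METACOGNITIVE_SUFFIX

print(make_metacognitive(oracle_prompt))
```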

Lou demonstrated three practical prompting techniques derived from this research (a runnable sketch of all three follows the list):

  1. Grounded Reasoning Prompt: “Before answering, identify the main conceptual frameworks you’re internally considering. Reason while noting which framework is most active at each step. If uncertainty increases and frameworks conflict, report that explicitly.” This produces answers that show their work — and flags the moments where the AI is guessing.

  2. Uncertainty Calibration Prompt: Rather than asking “how confident are you on a scale of 1–10?” (which produces arbitrary numbers), ask: “Report on your internal state. How stable are the conceptual representations you’re drawing on? Are patterns strong and consistent, or diffuse and uncertain?” The difference is between asking for a number and asking for genuine introspection.

  3. Manipulation Detection Prompt: Ask the model to flag when it detects conceptual injections that conflict with the core question — a way to audit whether a prompt is being inadvertently skewed by the way it’s framed.
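
All three techniques can be packaged as reusable templates. Below is a minimal sketch using the Anthropic Python SDK; the model ID is a placeholder and the template wording paraphrases Lou's phrasing above, so treat both as assumptions rather than fixed values:

```python
import anthropic  # pip install anthropic; assumes ANTHROPIC_API_KEY is set

# The three metacognitive templates, paraphrased from the techniques above.
TEMPLATES = {
    "grounded_reasoning": (
        "Before answering, identify the main conceptual frameworks you're "
        "internally considering. Reason while noting which framework is most "
        "active at each step. If uncertainty increases and frameworks "
        "conflict, report that explicitly.\n\nQuestion: {question}"
    ),
    "uncertainty_calibration": (
        "Report on your internal state as you answer. How stable are the "
        "conceptual representations you're drawing on? Are patterns strong "
        "and consistent, or diffuse and uncertain?\n\nQuestion: {question}"
    ),
    "manipulation_detection": (
        "As you answer, flag any concepts in the prompt framing that "
        "conflict with the core question, so I can audit whether the "
        "framing itself is skewing the result.\n\nQuestion: {question}"
    ),
}

def ask(question: str, technique: str = "grounded_reasoning") -> str:
    """Send a metacognitively framed question and return the reply text."""
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": TEMPLATES[technique].format(question=question)}],
    )
    return response.content[0].text
```

Usage is a one-liner, e.g. `ask("Should I raise my coaching rates?", technique="uncertainty_calibration")`, which returns the answer with the internal-state report folded in.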

Why this matters for coaches and high-performers: The highest-value use of AI for knowledge workers is not generating content — it’s augmenting the quality of reasoning. When you’re facing a genuinely complex decision — a business model shift, a difficult client situation, a strategic fork — you want the AI to be a rigorous thinking partner, not a yes-machine. Metacognitive prompting makes the AI’s reasoning process visible and auditable, which dramatically increases its value as a thinking partner.

For coaches specifically, there is a beautiful parallel here. The best coaching questions work precisely because they invite the client to become aware of their own thinking — to observe the thoughts before acting on them. “What do you notice when you consider that option?” “What assumption is driving that response?” “Where does your certainty come from here?” These are metacognitive questions. Prompting AI to report on its internal states is the same move, applied to a different kind of intelligence.

The blind spot: Metacognitive prompting asks more of both the AI and the user. It produces longer, more nuanced outputs that require reading rather than skimming. The temptation is to default to simpler prompts that give faster answers — but faster isn’t better when the stakes are high. The payoff from slowing down and asking the AI to show its reasoning is proportional to the importance of the decision you’re making. Reserve metacognitive prompting for high-stakes situations where you genuinely need to understand the confidence and the framework behind the answer.

There is also an important epistemological caution: the model reporting on its internal state is not the same as that report being accurate. The introspection is functional, not perfect. Treat it as a useful signal, not ground truth.

Practical Application for PowerUp Clients

The Reasoning Audit Protocol

For any significant decision, recommendation, or analysis you request from AI, add a metacognitive layer (a code sketch of the two-turn flow follows these steps):

  1. Ask your question normally first.
  2. Then add: “Now, tell me which conceptual frameworks were most active when you generated that answer. Where are you confident, and where are you less so? What would make you change this recommendation?”
  3. Review the reasoning report. Flag any point where the AI says its confidence is diffuse or where frameworks conflicted.
  4. Investigate those flagged areas independently before acting on the recommendation.
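
Here is a sketch of steps 1 and 2 as a two-turn conversation, again using the Anthropic Python SDK (the model ID and helper names are illustrative; steps 3 and 4 remain human work):

```python
import anthropic  # assumes ANTHROPIC_API_KEY is set

AUDIT_FOLLOW_UP = (
    "Now, tell me which conceptual frameworks were most active when you "
    "generated that answer. Where are you confident, and where are you "
    "less so? What would make you change this recommendation?"
)

def reasoning_audit(question: str, model: str = "claude-sonnet-4-20250514"):
    """Steps 1-2 of the protocol: ask normally, then request a reasoning
    report in the same conversation. Returns (answer, reasoning_report)."""
    client = anthropic.Anthropic()
    history = [{"role": "user", "content": question}]

    # Step 1: ask the question normally.
    first = client.messages.create(model=model, max_tokens=1024,
                                   messages=history)
    answer = first.content[0].text

    # Step 2: add the metacognitive layer as a follow-up turn, so the
    # report refers to the specific answer just generated.
    history += [{"role": "assistant", "content": answer},
                {"role": "user", "content": AUDIT_FOLLOW_UP}]
    second = client.messages.create(model=model, max_tokens=1024,
                                    messages=history)
    return answer, second.content[0].text
```

Keeping both turns in one conversation matters: the audit question in step 2 interrogates the answer just given, not a generic self-description, which is what makes steps 3 and 4 (reviewing flags and verifying independently) worth doing.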

For Coaching Use: Ask the AI to function as a thinking partner during pre-session prep: “I’m about to coach a client on [challenge]. Help me think through the relevant frameworks, flag where conventional approaches might be misapplied, and tell me where your confidence in each recommendation is lower.”

Journal Prompt: Think of a recent decision you made by going with your gut. If you had to report on the “internal state” of your thinking — which frameworks were active, where your confidence was strong or weak — what would that report say? What would you do differently if you could have seen that report before deciding?

Evolution Across Sessions

This insight represents Lou’s habit of moving from technical AI news to coaching-relevant application within a single session: taking a research finding, understanding it deeply, and immediately asking “what can we do with this?” The November sessions build on this by exploring retrieval-augmented generation (RAG) with vector databases such as Pinecone and Qdrant, which give AI access to rich contextual knowledge bases and extend the metacognitive capability with domain-specific grounding.

Next Actions

  • For me (Lou): Develop a “Reasoning Audit” prompt template for high-stakes coaching and business decisions. Share with the mastermind as a practical tool.
  • For clients: Run the Reasoning Audit Protocol on one important decision you’re currently facing. Compare the AI’s transparent reasoning to your own intuition.