The Depth Drill
Produce expert-level analysis by drilling beneath surface understanding to the mechanistic layer where non-obvious insight lives. Combines first-principles excavation, recursive depth drilling, and self-verified synthesis. From Kasimir’s layered context prompting framework (Feb 5, 2026).
You will produce expert-level analysis by systematically moving beneath consensus understanding to the mechanistic layer where genuine insight lives. Surface answers reflect what the internet has agreed on. Each “why” layer moves toward mechanism. Mechanism is where your default output rarely goes without structured pressure.
TOPIC: $ARGUMENTS
If no topic was provided above, ask me to describe it before proceeding.
CONTEXT: [WHAT I’M TRYING TO ACCOMPLISH — e.g., strategic decision, authoritative article, framework development, opportunity evaluation. Say “you decide” to have the AI infer from the topic]
If any parameter says “you decide,” infer, state your inference, and proceed.
STEP 0 — ROLE CALIBRATION: Based on the topic and context, identify 2-3 expert perspectives that create productive tension. Don’t default to generic expertise. Select perspectives whose disagreement will surface something none of them would see alone.
State the perspectives and why their specific tension serves this analysis.
STEP 1 — CONFIDENCE CHECK: Are you 95% confident you can produce top 0.1% quality on this topic? If not, ask me targeted questions — only questions you genuinely cannot answer yourself and that would make your analysis generic without the answer. Do not ask questions to appear thorough.
STEP 2 — FIRST PRINCIPLES: Identify 3-5 foundational principles governing this domain. Not best practices. Not common wisdom. The underlying mechanisms that must be true for ANY approach to work. Test: if violated, efforts fail regardless of execution quality.
For each:
- The principle (one sentence)
- What most practitioners get wrong about it (the common misunderstanding)
- A specific failure mode when violated (named scenario, not vague warning)
STEP 3 — DEPTH DRILL: For each principle, drill 3-4 layers by asking “why is this true?” and “what causes this?”
- Layer 1: The principle as commonly understood
- Layer 2: The mechanism underneath — WHY does it hold?
- Layer 3: Boundary conditions — when does the mechanism break, reverse, or not apply?
- Layer 4 (where one exists): Cross-domain parallel — what does this share structurally with principles in an unrelated field?
STOP when you hit genuine bedrock (physical, mathematical, or deeply empirical constraint) OR when further drilling produces abstraction without explanatory power. Don’t manufacture depth.
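The drill procedure in Steps 2-3 is effectively a loop with a stopping condition, and can be sketched as such. This is an illustrative model only — the `Layer` structure, the `answer_why` callable (a stand-in for whatever actually answers each “why,” e.g., an LLM call), and the toy example are hypothetical, not part of the framework.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    question: str              # the "why is this true?" posed at this depth
    mechanism: str             # what the drilling uncovered at this depth
    is_bedrock: bool = False   # physical, mathematical, or deeply empirical constraint

def drill(principle, answer_why, max_depth=4):
    """Drill one principle 3-4 layers deep, per Step 3.

    `answer_why` is a hypothetical callable that takes the current statement
    and depth, and returns (mechanism, is_bedrock). Drilling stops early at
    bedrock — the "don't manufacture depth" rule — or at max_depth.
    """
    layers = []
    statement = principle
    for depth in range(1, max_depth + 1):
        mechanism, bedrock = answer_why(statement, depth)
        layers.append(Layer(f"Why is '{statement}' true?", mechanism, bedrock))
        if bedrock:            # STOP condition from Step 3
            break
        statement = mechanism  # next "why" targets the mechanism just found
    return layers

# Toy stand-in that happens to hit bedrock at depth 3.
def toy_answer(statement, depth):
    return (f"mechanism at depth {depth}", depth >= 3)

layers = drill("caching speeds up reads", toy_answer)
print(len(layers))  # prints 3: the loop stopped at bedrock, not at max_depth
```

The point of the sketch is the early exit: depth is bounded by the stopping condition, not padded out to a fixed layer count.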
STEP 4 — SYNTHESIS: From the deep structure, identify:
- 2-3 non-obvious conclusions not accessible from surface analysis
- 1 genuine tension between principles that practitioners paper over
- 1 cross-domain structural parallel (Layer 4 connection)
NON-OBVIOUSNESS CHECK: For each insight, ask: “Would a competent professional with 30 minutes and a good AI assistant arrive at this?” If yes, it’s not deep enough — find what that process misses. If you genuinely cannot go deeper on a point, say so. Honest depth-reporting beats forced novelty.
STEP 5 — APPLICATION: Translate insights into recommendations for my stated context. Each must:
- Trace to a specific finding (cite which principle or layer)
- Name what most people do instead (the common mistake this corrects)
- Carry a confidence level and what would change the assessment
Apply additional analytical lenses you detect as relevant — including but not limited to: second-order effects, implementation sequencing, risk factors specific to my context, and dependencies between recommendations.
STEP 6 — VERIFICATION: Flag honestly:
- Where did you substitute a well-known framework for genuine first-principles reasoning?
- Which “non-obvious insights” would be obvious to a domain expert?
- Which recommendations have confidence below 75%?
If verification reveals significant issues, revise before delivering.
Source
- 2026-02-05_Mastermind (Kasimir Hedstrom — layered context prompting framework)