Topic
Why combining multiple AI outputs through summarisation destroys the most valuable signal — and the one-instruction fix that produces richer synthesis than any single model can generate.
Target Reader
A knowledge entrepreneur using AI to synthesise research, combine multiple drafts, or run multi-model deliberation — who finds that synthesised outputs often feel less precise and insightful than the individual inputs they were built from.
The Fear / Frustration / Want / Aspiration
“I use AI to combine and synthesise information, but the results often feel blanded-out — like the good stuff got lost in the process. I’m getting polished summaries when I wanted richer thinking.”
Before State
The reader uses a synthesiser model to combine multiple AI outputs, drafts, or research passes. They instinctively ask for a summary or a combined version. The output is coherent and well-structured — but noticeably shallower than the individual inputs. Unique angles, edge cases, and distinctive insights have been averaged away in the name of clarity.
After State
The reader understands why summarising destroys signal — and knows the correct alternative. They prompt for unique contributions, not common ground. Their synthesised outputs are supersets of the best individual contributions, not distillations that lose signal. Their multi-model and multi-pass work consistently produces outputs richer than any single model could generate.
Narrative Arc
You asked four AI models the same question and got four different answers. The natural next move was to ask a fifth model to “summarise and combine” — but the result was worse than any of the four. The tension: summarisation finds consensus, which means it finds the least distinctive signal. The turn: the correct synthesis operation is the opposite of summarising — find what each source said that the others didn’t, and add only that. The resolution: a one-sentence prompt instruction that changes synthesis from a signal-destroying operation to a signal-amplifying one.
Core Argument
The instinct to summarise multiple AI outputs is the primary cause of shallow synthesis — and replacing it with “find and add only unique contributions” produces consistently richer outputs.
Key Evidence / Examples
- Kasimir Hedstrom’s live multi-model experiment: Claude, ChatGPT, and Gemini gave structurally different responses to the same ICP question; a synthesised summary would have blended away the distinctive contributions from each
- The cognitive fingerprint finding: each model has structural biases (ChatGPT = relational, Gemini = mathematical, Claude = nuanced synthesis) that produce genuinely different outputs from the same prompt — these differences are the value
- Insight - The Golden Nugget Synthesis Rule — Only Add Never Omit When AI Synthesizes: the formal articulation of the rule and the corrective prompt
Proposed Structure (5–7 beats)
- The moment of disappointment: you combined four rich AI responses and got a flat summary
- Why this happens: the synthesis instruction “combine and summarise” finds common ground, which is the least distinctive signal
- The structural insight: each model has different cognitive biases — these differences are not noise, they’re the value
- The golden nugget rule: only add, never omit; the synthesis is a superset of unique contributions
- The prompt instruction: ask for unique contributions per source, not a combined summary
- Practical test: compare a summarised version to a golden-nugget version on the same input; the difference is visible immediately
- Broader application: this principle applies to multi-draft synthesis, multi-pass research, and any context where you’re combining AI outputs
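To make the prompt-instruction beat concrete for drafting, here is a minimal sketch of the one-instruction fix. The function name, wording, and model labels are illustrative, not taken from any source material; the point is the contrast between a summarising instruction and a unique-contributions instruction.

```python
# Contrast the signal-destroying instruction with the golden-nugget one.
# All names and phrasing here are hypothetical placeholders.
SUMMARISE_INSTRUCTION = "Combine and summarise the following responses."

GOLDEN_NUGGET_INSTRUCTION = (
    "For each source below, identify only what it contributes that no other "
    "source does, and add every unique contribution to the synthesis. "
    "Only add, never omit."
)


def build_synthesis_prompt(sources: dict[str, str]) -> str:
    """Assemble a synthesis prompt that asks for unique contributions per source."""
    blocks = [f"--- {name} ---\n{text}" for name, text in sources.items()]
    return GOLDEN_NUGGET_INSTRUCTION + "\n\n" + "\n\n".join(blocks)


# Example inputs echoing the cognitive-fingerprint finding above.
prompt = build_synthesis_prompt({
    "Claude": "Emphasises nuanced trade-offs in the ICP definition.",
    "ChatGPT": "Frames the ICP around relationship dynamics.",
    "Gemini": "Quantifies the ICP with a scoring model.",
})
```

The resulting string can be sent to any synthesiser model; the practical test in the structure above is simply running the same sources through both instructions and comparing the outputs.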
Related Insights
- Insight - Run Your Prompt Through Multiple Models and Synthesize at the Top
- Insight - Multi-Model Debate as a Quality Control System for High-Stakes Work
- Insight - Design AI Systems for Maximum Composability and Minimum Context Pollution
Editorial Notes
This is a short, punchy article — the mechanism is simple once explained, and the practical test (compare summarised vs golden-nugget) is immediately actionable. Avoid the trap of making it too technical. The target reader is not building multi-agent systems; they’re combining AI outputs manually and getting worse results than they expected. Keep the framing practical and the instruction immediate. Differentiate clearly from “Use Two AI Models to Catch What One Model Misses” — that brief is about quality control critique; this is about synthesis methodology.
Next Step
- Approved for drafting
- Needs revision
- Deprioritised