“Here, the agents are responding to each other. Here, they never see each other’s work. Here, they’re synthesizing towards a consensus. Here, it’s preserving the divergence. Outliers are valued.” — Lou
Session context: 2026-04-16_Mastermind — Lou demonstrated his implementation of the AAR (Automated Autonomous Researcher) pattern, adapted from Anthropic’s research paper, and contrasted it with the LLM council approach the group has used previously.
Core Idea
There are two fundamentally different multi-agent architectures, and choosing the wrong one for the task at hand produces mediocre results. An LLM council puts agents into dialogue — they debate, rebut, and converge toward a refined consensus. An AAR swarm isolates agents completely — each explores a different slice of the problem space without ever seeing what the others produce, and only a controller synthesizes the findings at the end.
The critical distinction is what each architecture optimizes for. Councils optimize for refinement — stress-testing an idea you already have until it’s as strong as possible. AAR swarms optimize for discovery — maximizing the surface area of what gets explored, especially unexpected findings. When agents debate, social dynamics creep in even though nothing conscious is happening: they converge, they compromise, they find the center. When agents are isolated, they can’t converge, and outliers survive because there is no pressure to conform.
Lou’s test run sent 7 workers exploring “asymmetric AI use cases for knowledge entrepreneurs.” They produced 28 ideas, scored them on novelty, leverage, actionability, audience fit, and explainability, then surfaced a short list. The emergent themes — judgment clones, client outcome databases, pattern intelligence engines — were qualitatively different from what a single brainstorming session or debate would produce. Several ideas came from “alien edge” explorations designed to push beyond feasibility constraints.
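The scoring-and-shortlist step can be sketched as a simple rubric. The five criteria come from the session, but the `Idea` structure, the equal weighting, and the sample scores below are illustrative assumptions, not Lou's actual implementation:

```python
from dataclasses import dataclass

# Criteria from Lou's run; equal weighting is an illustrative assumption.
CRITERIA = ("novelty", "leverage", "actionability", "audience_fit", "explainability")

@dataclass
class Idea:
    title: str
    scores: dict  # criterion -> rating, e.g. 1..5

    def total(self) -> float:
        """Average score across the rubric criteria."""
        return sum(self.scores[c] for c in CRITERIA) / len(CRITERIA)

def shortlist(ideas: list, top_n: int = 5) -> list:
    """Rank ideas from isolated workers and surface the top few."""
    return sorted(ideas, key=lambda i: i.total(), reverse=True)[:top_n]

# Hypothetical example scores, for illustration only.
ideas = [
    Idea("judgment clone", {"novelty": 5, "leverage": 4, "actionability": 3,
                            "audience_fit": 4, "explainability": 3}),
    Idea("client outcome database", {"novelty": 3, "leverage": 4, "actionability": 5,
                                     "audience_fit": 5, "explainability": 5}),
]
print([i.title for i in shortlist(ideas, top_n=1)])  # → ['client outcome database']
```

The point of a fixed rubric is that scoring happens per idea, independently, so ranking never requires the workers to compare notes.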
Practical Application
Before running any multi-agent workflow, ask one question: Am I trying to sharpen something I already have, or find something I haven’t thought of yet?
- Sharpen → use a council (debate, roles, convergence)
- Discover → use isolated agents (parallel exploration, no cross-talk, synthesis only at the end)
If you don’t have agent infrastructure, you can approximate AAR manually: give the same question to 3-5 separate Claude conversations with different starting constraints, then synthesize the findings yourself in a final conversation. The key is preventing cross-contamination between exploration threads.
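The same fan-out/synthesize loop can be expressed programmatically. This is a minimal sketch, not Lou's implementation: `ask` stands in for whatever model API or chat interface you use, and the sample constraints are hypothetical.

```python
QUESTION = "asymmetric AI use cases for knowledge entrepreneurs"

# Different starting constraints per worker keep the threads from
# converging on the same modal answers. Examples are illustrative.
CONSTRAINTS = [
    "Assume a solo consultant with no engineering team.",
    "Assume unlimited budget but scarce attention.",
    "Ignore current feasibility entirely (alien-edge exploration).",
]

def aar_run(question: str, constraints: list, ask) -> str:
    """Isolated fan-out, then a single synthesis pass.

    `ask` is any callable prompt -> response. Each worker call is a
    fresh, separate conversation, so no worker ever sees another's
    output -- the cross-contamination the AAR pattern forbids.
    """
    findings = [ask(f"{question}\nStarting constraint: {c}") for c in constraints]
    report = "\n\n".join(f"Worker {i + 1}:\n{f}" for i, f in enumerate(findings))
    # Only the controller sees all findings together, and it is told
    # to preserve divergence rather than average it away.
    return ask("Synthesize these independent findings, preserving outliers:\n" + report)
```

Because `ask` is just a callable, the same loop works whether the workers are API calls, separate chat tabs, or different models entirely.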
Related Insights
- Insight - Multi-Model Debate as a Decision-Making Accelerator — the complementary architecture: when you DO want convergence
- Insight - Multi-Model Debate as a Quality Control System for High-Stakes Work — councils as QC, not discovery
- Insight - Latent Terrain Cartography — Navigating Off-Modal AI Responses to Find Non-Obvious Ideas — another technique for escaping the modal center
- Insight - Paradigm Collision Is the Engine of Non-Obvious Insight — AAR produces paradigm collisions by design through isolation
Evolution Across Sessions
This builds on Insight - Multi-Model Debate as a Decision-Making Accelerator (2025-09-25), which established multi-model debate as a powerful pattern, and Insight - Multi-Model Debate as a Quality Control System for High-Stakes Work (2026-02-26), which refined debate into a QC mechanism. The new development is the recognition that debate and isolation are complementary tools for different cognitive tasks — convergence vs. divergence — not competing approaches where one is simply better.