The Iteration Compressor
Map any creative or service delivery process as a cycle, identify the lather-rinse-repeat loops where time disappears, and determine exactly which loops AI can compress — so you go from blank page to shipped in days instead of weeks. From the Dec 5, 2025 AIMM session on Lou’s iteration compression thesis.
You will audit a professional process for compressible iteration loops. Not “how to use AI” in the abstract — a specific, surgical analysis of where time actually goes in a workflow and which of those time-sinks AI can collapse. The output is a compression map: which loops to target, which tools to use, and what the realistic time savings look like.
The mechanism: every knowledge work process follows the same hidden structure — research, ideate, draft, revise, test, refine, publish. Each stage contains loops: you research, hit a dead end, research again. You draft, hate it, redraft. You revise, get feedback, revise again. Most people try to “use AI” by replacing an entire stage. The real leverage is compressing the loops WITHIN each stage — eliminating the dead time between iterations so a 2-week cycle becomes a 2-day cycle without skipping any of the stages that make the output good.
MY PROCESS: $ARGUMENTS
If no process was provided above, ask me to describe my core professional workflow — the thing I do repeatedly that generates revenue or deliverables.
PROCESS TYPE: [CONTENT CREATION / SERVICE DELIVERY / PRODUCT DEVELOPMENT / CONSULTING / EDUCATION. Say “you decide” to have me infer from the description]
COMPRESSION GOAL: [WHAT I WANT — faster turnaround, higher volume, more iteration cycles per project, freed-up time for high-judgment work. Say “you decide” to have me recommend based on the process]
If “you decide,” state the inference and proceed.
STEP 1 — CYCLE MAPPING: Decompose the process into its full cycle. For each stage:
- What actually happens (not the idealized version — the real version with the false starts and backtracking)
- How long it typically takes
- How many internal loops it contains (draft-revise-redraft counts as 3 loops)
- What percentage of time is judgment work (decisions only you can make) vs. grunt work (research, formatting, first drafts, data gathering, synthesis of known information)
Present the full cycle as a map with time estimates per stage and per loop.
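The cycle map above can be sketched as a simple data structure. This is one possible representation, not part of the method itself; the stage names, hours, loop counts, and judgment percentages below are hypothetical examples:

```python
# Minimal sketch of a cycle map. All values are illustrative,
# not prescriptions: they come from the Step 1 interview.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    hours: float         # typical wall-clock time for the stage
    loops: int           # internal iterations (draft-revise-redraft = 3)
    judgment_pct: float  # share of time that is judgment work (0..1)

    @property
    def grunt_hours(self) -> float:
        # Grunt work is everything that is not judgment:
        # research, formatting, first drafts, data gathering.
        return self.hours * (1 - self.judgment_pct)

cycle = [
    Stage("research", hours=8, loops=3, judgment_pct=0.2),
    Stage("draft",    hours=6, loops=3, judgment_pct=0.3),
    Stage("revise",   hours=5, loops=4, judgment_pct=0.6),
]

for s in cycle:
    print(f"{s.name}: {s.hours}h total, {s.loops} loops, "
          f"{s.grunt_hours:.1f}h grunt work")
```

Keeping judgment and grunt time separate per stage matters later: Step 2 only compresses the grunt share.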
STEP 2 — COMPRESSION AUDIT: For each loop in the cycle, classify it:
High compression (AI can handle 80%+ of the grunt work):
- Research loops (gathering, synthesizing, comparing known information)
- First-draft loops (generating initial versions from clear requirements)
- Formatting/structure loops (organizing existing content into templates)
- Feedback simulation loops (predicting objections, testing arguments)
Medium compression (AI can handle 40-60%, you still steer):
- Revision loops (improving drafts with your judgment on what “better” means)
- Ideation loops (generating options you then curate)
- Testing loops (AI can simulate but you validate)
Low compression (AI assists but doesn’t compress meaningfully):
- Judgment loops (strategic decisions, client-specific calibration)
- Relationship loops (conversations, negotiations, trust-building)
- Experience loops (pattern recognition from years of domain expertise)
For each: current time per loop, estimated compressed time, and the specific AI tool or approach that would do the compressing.
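The compressed-time estimate follows directly from the classification: AI compresses only the grunt share of a loop, and the judgment share stays fixed. A sketch of that arithmetic, assuming illustrative compression factors per class (the real factors come from the audit, not this table):

```python
# Hypothetical compression factors per loop class.
COMPRESSION = {"high": 0.80, "medium": 0.50, "low": 0.10}

def compressed_time(current_hours: float, cls: str, grunt_share: float) -> float:
    """Estimate loop time after compression: judgment time is untouched,
    grunt time shrinks by the class's compression factor."""
    grunt = current_hours * grunt_share
    judgment = current_hours - grunt
    return judgment + grunt * (1 - COMPRESSION[cls])

# Example: a 10-hour research loop that is 80% grunt work, high compression.
# 2h judgment stays + 1.6h residual grunt = 3.6h.
print(round(compressed_time(10, "high", 0.8), 2))  # → 3.6
```

Note what the model makes explicit: a "low compression" loop barely moves even with perfect tooling, because most of its hours are judgment.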
STEP 3 — COMPRESSION SEQUENCE: Don’t compress everything at once. Recommend a sequence:
- Start here: The single highest-ROI loop to compress first (biggest time savings, lowest learning curve, clearest quality baseline to measure against)
- Then this: The second target, building on what was learned from the first
- Then this: The third target
- Leave alone: Loops that aren’t worth compressing (the judgment is the value, or the time invested is where the quality comes from)
For each recommendation: the specific tool, the specific workflow change, and a realistic 1-week test to prove it works.
STEP 4 — COMPRESSED CYCLE PROJECTION: Redraw the full cycle map with compressed timelines:
- Original total cycle time vs. projected compressed cycle time
- Which stages get faster, which stay the same
- Where the freed time goes (more iterations per project? more projects? higher-judgment work?)
- The compound effect: if you compress by X%, and you do this Y times per month, what’s the annualized time savings?
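The compound-effect line is plain arithmetic; a sketch with hypothetical numbers (a 40-hour cycle, 60% compression, 4 cycles a month):

```python
def annualized_savings_hours(cycle_hours: float, compression_pct: float,
                             cycles_per_month: int) -> float:
    """Hours freed per year if each cycle shrinks by compression_pct."""
    saved_per_cycle = cycle_hours * compression_pct
    return saved_per_cycle * cycles_per_month * 12

# Hypothetical: 40h * 0.60 = 24h saved per cycle, 4 cycles/month, 12 months.
print(annualized_savings_hours(40, 0.60, 4))  # → 1152.0
```

Roughly 1,150 hours a year in this example, which is why the sequencing in Step 3 matters more than any single tool choice.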
STEP 5 — RISK CHECK: Identify where compression could degrade quality:
- Which loops currently contain hidden quality gates that would be lost if compressed?
- Where does “fast” create a false sense of “done” when more iteration would have caught something important?
- What’s the minimum viable loop count for each stage — below which quality drops noticeably?
STEP 6 — VERIFICATION:
- Am I mapping the ACTUAL process (with its mess and backtracking) or the idealized version? Compression only works on reality.
- Am I being honest about which loops contain genuine judgment vs. loops that feel like judgment but are actually pattern-matching AI can replicate?
- Are the time estimates realistic or optimistic? (Cut optimistic estimates by 40%.)
- Am I preserving the loops that actually produce quality, or am I compressing things that shouldn’t be compressed just because they can be?
Revise the compression map based on what doesn’t survive scrutiny.
Source
- 2025-12-05_Mastermind (Lou Dallo — iteration compression thesis)