“From a productivity point of view, the worst thing we can do is move back and forth.” — Lou

Session context: 2026-04-23_Mastermind — Lou opened the session with a frank read on Anthropic's current compute crunch (capped API access, a new tokenizer quietly raising effective costs, capacity leased rather than owned) as a case study in how to handle AI platform disruptions without disrupting yourself.

Core Idea

The frontier models are leapfrogging each other constantly. OpenAI, Google, Anthropic, xAI — they trade leadership on benchmarks and capabilities month by month. For users, the temptation is to follow the current leader: switch when the other horse pulls ahead, switch back when yours catches up. This is exactly the wrong response, and it compounds in cost over time.

Switching carries a productivity tax that most people undercount:

  • Context loss — chats don’t transfer; knowledge built in one platform’s memory stays there
  • Prompt recalibration — habits, shortcuts, and system prompts built around one model’s quirks don’t port cleanly
  • Workflow rebuild — skills, connectors, integrations, and configurations need to be rebuilt in the new environment
  • Reorientation time — every platform has a learning curve that resets with every switch

When Anthropic shipped Claude Opus 4.7 with a new tokenizer that generates ~50% more tokens for equivalent output, with no price change and no public announcement, the instinctive reaction was frustration and flight. Lou's counter: Opus 4.6 and Sonnet 4.6 still do everything this group needs. The features worth adopting are the stable, economical ones, widely used enough that the economics have been worked out. For tinkering and learning, try everything. For the workflow you depend on, wait for stability.
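The cost arithmetic here is worth making explicit: if per-token pricing is unchanged but the tokenizer emits ~50% more tokens for the same output, the effective price of that output rises 1.5×. A minimal sketch, with a hypothetical price per million tokens (the $15 figure is an assumption for illustration, not Anthropic's actual rate):

```python
def effective_cost(tokens_old: float, token_inflation: float,
                   price_per_mtok: float) -> float:
    """Cost of the same output after a tokenizer change.

    token_inflation: fractional increase in token count for
    equivalent output (0.5 = ~50% more tokens).
    price_per_mtok: sticker price per million tokens (hypothetical).
    """
    tokens_new = tokens_old * (1 + token_inflation)
    return tokens_new / 1_000_000 * price_per_mtok

# Baseline: 1M tokens of output at a hypothetical $15/Mtok.
old = effective_cost(1_000_000, 0.0, 15.0)   # 15.0
# Same output under the new tokenizer: ~50% more tokens emitted.
new = effective_cost(1_000_000, 0.5, 15.0)   # 22.5
print(new / old)  # 1.5 — a 50% effective price increase at an unchanged sticker price
```

The point of the sketch: a tokenizer change is a price change, just one that never appears on the pricing page.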

The distinction between tinkering and workflow is the key: the resolver pattern (Insight - Platform as Interface, Not Custodian — The Resolver Pattern for Portable AI Intelligence) de-risks platform dependency for your knowledge and files — which is where you should invest your architecture attention. For the AI interface itself, inertia is a feature, not a bug. The horse you’re on will catch up.

Practical Application

Maintain two tracks for new AI tools. Tinkering track: try everything, note the capabilities, spend time exploring; no commitment, no workflow change. Production track: a high bar for adoption. A tool must offer a clear capability you can't get elsewhere, be stable enough that breaking changes are rare, and have economics that work over the long run. When a new model or platform impresses you in the tinkering track, let it sit for 4–6 weeks before considering a migration to the production track. By then, either you've forgotten about it (the delta wasn't compelling enough) or the initial bugs have been ironed out.
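The production-track gate above can be encoded as a toy checklist, just to make the decision mechanical rather than mood-driven. Everything here is an illustrative encoding of the session's rule, not an established tool; the six-week cutoff takes the upper end of the 4–6 week window:

```python
from datetime import date, timedelta

COOLOFF = timedelta(weeks=6)  # upper end of the 4-6 week waiting window

def ready_for_production(first_impressed: date, today: date,
                         unique_capability: bool,
                         stable: bool,
                         economics_work: bool) -> bool:
    """Gate a tool from the tinkering track into the production track.

    All four conditions from the note must hold: the cool-off period
    has elapsed, the capability is unavailable elsewhere, breaking
    changes are rare, and the economics work over the long run.
    """
    cooled_off = today - first_impressed >= COOLOFF
    return cooled_off and unique_capability and stable and economics_work

# Impressed in early January, re-evaluating in March: eligible.
print(ready_for_production(date(2026, 1, 1), date(2026, 3, 1),
                           unique_capability=True, stable=True,
                           economics_work=True))   # True
# Two weeks in: still in the cool-off window, regardless of merit.
print(ready_for_production(date(2026, 1, 1), date(2026, 1, 15),
                           unique_capability=True, stable=True,
                           economics_work=True))   # False
```

The design choice the code makes visible: time is a conjunct, not a tiebreaker. A tool that clears every capability bar on day one still waits out the window.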

Evolution Across Sessions

This establishes the baseline for platform-switching economics as a first-class concept in the vault. Prior sessions addressed model selection (Insight - The Model Underneath Is the Multiplier, Not the Interface) and tooling gaps (Insight - Tools Define AI Capability More Than Model Intelligence), but the switching cost of workflows was never articulated directly. The compute crunch context (Anthropic’s infrastructure constraints, the tokenizer change, the capacity leasing model) provides concrete grounding for why this principle matters now, not just in theory.