Original Insight

“I had this thing come up with ChatGPT. I’d locked it down on the Canon and the five controlling laws. I locked that in and I’m continuing to work on it. And I noticed more than a little bit of drift. It created two new laws that it was citing. And I went, whoa, what’s going on? I had to take it back and redo a body of work in between when the drift occurred and when I noticed it. So, anything that we can do to kind of lock in these hard constraints I think is gonna help us.” — Don Back

Expanded Synthesis

The moment Don described was recognizable to anyone who has had a long working relationship with an AI system: you’ve established something carefully, the AI agreed, you built on top of it, and then somewhere downstream you discover that the foundation shifted without warning. The AI had, in effect, invented two new laws that didn’t exist in the original framework and started citing them as if they’d always been there.

This is not a bug in the malicious sense. It is a predictable consequence of how these systems operate. As a conversation fills more and more of the context window, earlier instructions lose salience: their weight is diluted by the accumulated mass of everything said since. The model is not lying to you; it is doing its best with a probability distribution that has been nudged, gradually, by everything that followed the original instruction. The longer the chain, the greater the drift risk.
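
To make that dilution concrete, here is a deliberately simplified sketch in Python. The token counts are invented numbers for illustration only, and the arithmetic is not a claim about how any particular model weighs its context; it only shows how quickly a fixed instruction block becomes a small fraction of everything the model is reading.

```python
# Toy illustration: how a fixed instruction block shrinks as a share of
# the accumulated conversation. Numbers are invented for illustration only.

CONSTRAINT_TOKENS = 400        # size of the original "laws" / system prompt
TOKENS_PER_EXCHANGE = 900      # rough size of one user turn plus one reply

for exchanges in (1, 5, 20, 50, 100):
    total = CONSTRAINT_TOKENS + exchanges * TOKENS_PER_EXCHANGE
    share = CONSTRAINT_TOKENS / total
    print(f"after {exchanges:>3} exchanges, the original constraints are "
          f"{share:.1%} of the context")
```

The exact numbers matter less than the direction: without deliberate refreshing, the founding instructions become a progressively smaller share of the context the model is working from.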

Kasimir’s suggested counter-moves are worth unpacking in full. The first is framing the constraint as a “system imperative” — language that signals categorical priority rather than preference. “Obey these laws” carries different weight in the model’s processing than “please keep in mind.” The second is periodic re-reading of the system prompt: explicitly telling the model to re-read its foundational constraints at intervals, as a way of refreshing their weight in the current context. The third, sometimes the cleanest solution, is simply taking a new chat when drift is detected rather than trying to rehabilitate a corrupted session.
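
For anyone driving a model through an API rather than a chat window, a minimal sketch of the second and third counter-moves might look like the following. Everything here is an assumption made for illustration: call_model is a hypothetical placeholder for whatever client library you actually use, and the refresh interval is arbitrary.

```python
# Sketch: re-inject the canonical constraints at a fixed interval, and start
# a fresh session (rather than patching a drifted one) when drift is noticed.
# `call_model` is a hypothetical placeholder, not a real library call.

CANON = """System imperative: these laws govern all work in this project.
They may not be reinterpreted, extended, or replaced.
1. <law one>
2. <law two>"""

REFRESH_EVERY = 5  # arbitrary; tune to how quickly drift tends to appear


def call_model(messages: list[dict]) -> str:
    """Stand-in for a real chat-completion call; replace with your client."""
    return "<model reply>"


def new_session() -> list[dict]:
    """Counter-move 3: a clean conversation with the constraints front-loaded."""
    return [{"role": "system", "content": CANON}]


def ask(messages: list[dict], user_text: str, turn: int) -> str:
    if turn > 0 and turn % REFRESH_EVERY == 0:
        # Counter-move 2: tell the model to re-read its foundational laws.
        messages.append({"role": "user",
                         "content": "Re-read the system imperative above and "
                                    "restate the laws before continuing."})
        messages.append({"role": "assistant", "content": call_model(messages)})
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply
```

The third counter-move is then simply discarding the drifted history and calling new_session() the moment invented laws start being cited, rather than arguing a corrupted session back into shape.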

For coaches, this dynamic has a direct parallel in client work that makes it worth naming explicitly. Long-term coaching relationships develop their own drift patterns. A client establishes a clear goal in session one, and the early coaching work is sharply aligned with it. Six months later, the sessions have accumulated a weight of context — new issues, relationship dynamics, progress markers, setbacks — and both coach and client can find themselves working on something adjacent to the original goal, or even in tension with it, without either party noticing the shift. The goal has been diluted by accumulated conversation.

The same discipline that protects AI system prompts protects long-term coaching engagements: periodic re-reading of the foundational constraint. What is the client actually trying to accomplish? What were the non-negotiable principles they articulated at the start? What did they say they most wanted to stop doing? When did the work last explicitly return to that level?

Kasimir also introduced a related but distinct insight that deserves its own weight here: different AI models have baked-in system-level imperatives that users often cannot override. He described Gemini’s nearly unbreakable drive to compress and summarize, regardless of instructions to the contrary. The practical implication is that selecting a model is not just a capability decision — it is a constraint selection. You are choosing which embedded imperatives to work with. This is an underappreciated dimension of AI literacy, and one that becomes increasingly important as high-performers rely on AI for more consequential work.

The coaching application of this layer is subtle but powerful: match your AI tool to the task’s demands, just as you’d match a coaching modality to a client’s readiness stage. Using a model that compresses when you need depth is like using a directive coaching style with a client who needs to arrive at their own insights — the tool is fighting the goal.

For PowerUp clients who are building long-term AI-assisted content and knowledge systems, this insight points toward a practice of constraint documentation: writing down the foundational principles, the canonical frameworks, and the non-negotiable constraints at the start of any significant project, and building in explicit checkpoints where those constraints are re-validated. The work of building intellectual property on AI-assisted foundations is only as reliable as the constraints protecting that foundation.
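
One way to make constraint documentation concrete is to keep the canon in a single file that both the humans and every AI session read from, so re-validation always happens against one source of truth. The sketch below is illustrative only; the file name, the prompt wording, and the idea of asking the model to flag laws that are not on the list are assumptions, not a prescribed PowerUp format.

```python
# Sketch: one file holds the canon; every checkpoint prompt is built from it,
# so re-validation always tests against the same source of truth.
# The file name and prompt wording are illustrative assumptions.

from pathlib import Path

CANON_FILE = Path("canonical_constraints.md")  # hypothetical file name


def load_canon() -> str:
    return CANON_FILE.read_text(encoding="utf-8")


def revalidation_prompt() -> str:
    return (
        "Checkpoint: restate, numbered and verbatim, the laws below. "
        "Flag anything you have been treating as a law that is NOT on this "
        "list.\n\n" + load_canon()
    )
```

Asking the model to flag laws that are not on the list is aimed directly at the failure Don described: constraints the system invented and then cited as if they had always been there.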

Practical Application for PowerUp Clients

The Constraint Audit Protocol

For any long-running AI project (or coaching engagement), apply this structure:

  1. Document the Canonical Constraints at the Start. Write them in a separate file, explicitly framed as system-level imperatives: these are the things that cannot change, the laws that govern this work. Name them. Number them. Be specific enough that drift would be noticeable.

  2. Set Drift Check Intervals. For AI projects: every 3-5 substantial work sessions, explicitly restate the constraints and ask the model to re-read them (see the sketch after this list). For coaching: once per quarter, return explicitly to the client’s original stated goals and principles.

  3. Name Drift When You See It. When the work has moved away from the original constraints, name it directly before trying to fix it. “I notice we’ve moved away from X. Let me re-establish that now.” In AI: start a new session with the constraints front-loaded. In coaching: acknowledge the drift without blame and use it as a rich diagnostic — what does the drift reveal about what changed?

  4. Match the Model to the Task. Before starting significant AI-assisted work, audit the model’s embedded imperatives. Does it compress when you need depth? Does it drift from constraints? Is it better suited for ideation or precision? Use that information in your workflow design.
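
For the AI side of the protocol, steps 2 and 3 reduce to one small decision that can sit at the end of every working session. A minimal sketch follows; the interval is a made-up default inside the 3-5 session range suggested above, not a fixed standard.

```python
# Sketch: the drift-check cadence from steps 2 and 3, reduced to one decision.
# DRIFT_CHECK_EVERY is an illustrative default, not a PowerUp standard.

DRIFT_CHECK_EVERY = 4  # somewhere in the "every 3-5 sessions" range


def next_action(sessions_since_check: int, drift_noticed: bool) -> str:
    """Decide what to do before the next working session."""
    if drift_noticed:
        # Step 3: name the drift, then restart rather than rehabilitate;
        # open the new session with the canonical constraints front-loaded.
        return "start a new session with the canon front-loaded"
    if sessions_since_check >= DRIFT_CHECK_EVERY:
        # Step 2: scheduled re-validation against the documented canon.
        return "run the re-validation prompt before any new work"
    return "continue working"
```

The coaching half of step 2 is the human equivalent: the quarterly return to the client’s originally stated goals is a scheduled re-validation, just run in conversation rather than in a prompt.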

Journal Prompts for Clients:

  • “What were the original constraints or goals of this project/engagement? Are we still honoring them?”
  • “Where has my thinking drifted from my stated principles in the last 90 days?”
  • “What would I have to stop doing for my foundational commitments to stay intact?”

Additional Resources

  • Thinking in Systems by Donella Meadows — understanding feedback loops and drift
  • Principles by Ray Dalio — the discipline of codifying foundational principles as living operating constraints
  • Insight - Trust Before Automation in High-Value Relationships — the related principle that high-value work requires human oversight of AI systems, not just delegation

Evolution Across Sessions

This insight connects directly to the GEO (Generative Engine Optimization) work Lou presented on February 12th — specifically the principle that the foundational data you give to the system determines the authority signal it produces. If that foundation drifts or is misrepresented, the resulting authority work compounds the error. Clean inputs with protected constraints produce clean, authoritative outputs.

Next Actions

  • For Lou: Build a constraint re-validation step into the Gears onboarding checklist, so alpha users explicitly confirm their canonical data hasn’t drifted before each major system update.
  • For clients: Introduce the concept of “canonical constraints” in session one of any new engagement — document the client’s foundational commitments in a way that can be re-read at any point in the relationship.