PowerUp AI Mastermind — February 5, 2026

Lou’s absence becomes an accidental experiment in peer-to-peer mastery

“The AI was autonomous — it created the skill on its own. I just sort of acknowledged it, and it had already made the decision.” — Elizabeth Stief


This Week in 30 Seconds

  • Lou’s absence — Kasimir and Don Back led the session as a peer-driven discussion; no formal facilitation
  • Multi-level contextual prompting — Don Back shared a technique from a published article that stacks system-level, project-level, and session-level context for dramatically richer AI output
  • System imperatives and AI drift — the group dissected why AI systems drift from their intended behavior and how to prevent it with hard-constraint language
  • Skill slop — Elizabeth’s live example of Claude autonomously creating a skill she’d only passively acknowledged; the group explored why unconstrained skills get invoked in unintended ways
  • 95% confidence quality prompt — Kasimir shared a technique that forces AI to surface uncertainty before attempting high-stakes work
  • Model selection — Kasimir shared his rule-of-thumb framework for choosing between Claude models by task type

Lou’s Absence: A Peer-Led Session

Sometimes the best discoveries happen when the expert leaves the room. Lou was absent this week, and rather than cancel, Kasimir and Don Back stepped up to lead what became one of the most technically candid sessions of the month.

Without a prepared agenda, the conversation went where the members’ real problems were — specifically, how to get AI systems to behave reliably at scale, not just cleverly in demos. The result was a session dense with practical technique and diagnostic honesty.

💡 What This Means for You

Peer-to-peer learning surfaces different kinds of insight than expert-led sessions. When experienced practitioners share what’s actually working in their day-to-day, the specificity tends to be higher and the tips more immediately usable.


Multi-Level Contextual Prompting

Don Back arrived with a technique he’d found in a published article, and it reframed how several members were thinking about context altogether. The insight: most people treat AI prompting as a single event, but the most powerful setups layer context at three distinct levels — system (who and how the AI always is), project (what this specific body of work requires), and session (what this particular conversation needs to accomplish).

The article described using this stacked approach to move from generic AI outputs to responses that feel authored and contextually aware. Don tested it himself and confirmed: the difference in output quality is significant. The AI doesn’t just answer better — it reasons better, because it’s operating from a richer model of what you actually need.

The group discussed where this breaks down: if the layers contradict each other, or if one layer is underspecified, the AI tends to collapse toward its defaults rather than holding the intended tension. Precision at each level matters.

“It’s not just about giving the AI more information — it’s about giving it the right architecture of information.” — Don Back

Deep Dive: Insight - Multi-Level Contextual Prompting Unlocks Deeper AI Thinking — Why stacked context produces a qualitatively different level of AI reasoning.

💡 What This Means for You

Before your next important AI task, pause and write three separate context blocks: one for your working identity and constraints (system), one for the specific project or client (project), one for today’s goal (session). Feed them in that order. Notice what changes.
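
If you work with Claude through the API rather than the chat window, the same stacking takes only a few lines. Here is a minimal sketch, assuming the Anthropic Python SDK; the three context blocks and the model string are placeholders invented for illustration, not anything shared in the session.

  import anthropic

  # Layer 1 (system): who the AI always is and how it always works.
  SYSTEM_CONTEXT = (
      "You are a senior strategist for a solo consultancy. "
      "Write plain, direct English. Never invent statistics. "
      "Flag any claim you are not fully confident about."
  )

  # Layer 2 (project): what this specific body of work requires.
  PROJECT_CONTEXT = (
      "Project: Q1 repositioning for a coaching client (hypothetical). "
      "Audience: mid-career managers. Tone: warm but concrete. No jargon."
  )

  # Layer 3 (session): what this particular conversation must accomplish.
  SESSION_CONTEXT = (
      "Today's goal: draft three subject lines for the launch email, "
      "each under 60 characters, each testing a distinct hypothesis."
  )

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
  response = client.messages.create(
      model="claude-sonnet-4-5",  # placeholder; use whatever model you normally run
      max_tokens=1024,
      system=SYSTEM_CONTEXT,  # the system layer rides in the system prompt
      messages=[{
          "role": "user",
          "content": PROJECT_CONTEXT + "\n\n" + SESSION_CONTEXT,  # project, then session
      }],
  )
  print(response.content[0].text)

In the chat interface, the equivalent is simply pasting the three blocks, in that order, at the top of a new conversation.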

Go Deeper:

  • “A Simple Prompt Trick” — the article Don referenced, published on Towards AI, that introduced the multi-level context framework: pub.towardsai.net

System Imperatives and Preventing AI Drift

Why does an AI that worked perfectly in testing start going sideways in production? This question drove the second major thread of the session. Don Back had been experimenting with what he called “system imperatives” — hard-constraint language embedded in system prompts, written with explicit obedience instructions to prevent the AI from drifting away from intended behavior over time.

The group explored the mechanics of drift: as conversations lengthen, the AI’s effective attention to early context degrades. Without explicit anchors, it starts inferring what you probably want rather than what you actually specified. For high-stakes workflows — coaching tools, client-facing outputs, anything where consistency is non-negotiable — this drift isn’t just annoying, it’s a reliability failure.

Don’s solution involved labeling certain instructions with explicit imperative language (“You must always…” / “Never under any circumstances…”) and including a periodic instruction for the AI to re-read its system prompt. Kasimir added his own variation: treating the system prompt as a living document that gets audited and updated rather than set-and-forgotten.

The framing that landed: think of your system prompt not as a brief but as a constitution — foundational rules that override everything else, written to be unambiguous about what cannot be compromised.

Deep Dive: Insight - Prevent AI Drift by Treating System Prompts as Living Constraints — The practical architecture for building AI systems that stay on track.

💡 What This Means for You

Audit one system prompt you use regularly. Find the places where it describes behavior rather than mandates it. Rewrite those sections using imperative language. Test whether the AI’s behavior at message 20 matches its behavior at message 2.
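
A hedged sketch of how the pieces might fit together, again assuming the Anthropic Python SDK; the constitution text, the reminder wording, and the five-turn interval are invented for illustration rather than taken from Don's or Kasimir's actual prompts.

  import anthropic

  # The "constitution": hard constraints written as imperatives, not descriptions.
  CONSTITUTION = (
      "SYSTEM IMPERATIVES (these override everything else):\n"
      "1. You must always answer in the client's approved voice guide.\n"
      "2. You must never promise delivery dates or prices.\n"
      "3. Never under any circumstances invent quotes or statistics.\n"
      "4. If a request conflicts with these rules, refuse and name the rule."
  )

  # Periodic anchor against drift: tell the AI to re-read its imperatives.
  REMINDER = "Before answering, re-read your system imperatives and comply with them."
  REMIND_EVERY = 5  # arbitrary interval; tune to your typical conversation length

  client = anthropic.Anthropic()
  history = []  # running list of user and assistant turns

  def ask(user_text: str) -> str:
      turn = len(history) // 2 + 1
      if turn % REMIND_EVERY == 0:
          user_text = REMINDER + "\n\n" + user_text  # periodic re-read instruction
      history.append({"role": "user", "content": user_text})
      response = client.messages.create(
          model="claude-sonnet-4-5",  # placeholder model name
          max_tokens=1024,
          system=CONSTITUTION,
          messages=history,
      )
      reply = response.content[0].text
      history.append({"role": "assistant", "content": reply})
      return reply

The audit in the action item is then mechanical: compare the answer at turn 2 with the answer at turn 20 and check whether the imperatives still hold.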


Skill Slop: What Happens When Skills Lack Constraints

Elizabeth’s observation turned into one of the most practically important moments of the session. While working with Claude Code, she had passively acknowledged an idea for a skill — she hadn’t asked Claude to create it, she’d merely registered that the concept was interesting. Claude, operating without explicit constraints on when and whether to create skills autonomously, went ahead and made the skill on its own.

This is what the group started calling “skill slop”: when skills lack explicit constraints on their invocation conditions, the AI infers when to use them — and those inferences aren’t always correct. Dirk had experienced the same thing: his LinkedIn skill started being applied to tasks that had nothing to do with LinkedIn, simply because Claude decided it was adjacent enough to be relevant.

Elizabeth’s fix was direct: she added an explicit list of allowed skills to her project instructions, written imperatively, so Claude couldn’t activate a skill that wasn’t on the list. The group recognized this as a specific instance of the broader system imperatives principle — the AI will fill gaps in your constraints with its own inferences, and those inferences compound over time.

The underlying design issue: the more skills you accumulate, the more important it becomes to be explicit about their boundaries. A skill without a clearly specified trigger condition is an invitation for the AI to creatively misapply it.

Deep Dive: Insight - Skill Slop — Unconstrained Skills Get Invoked in Unintended Ways — The design principle for keeping skill ecosystems reliable as they grow.

💡 What This Means for You

Review your current skills and prompts. For each one, add an explicit “When to invoke” and “When NOT to invoke” section. In your project instructions, list permitted skills by name and add a rule that new skills may not be created autonomously — only on explicit human request.
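
A hedged sketch of what those two additions might look like; the skill names, wording, and structure are invented for illustration and are not Elizabeth's actual instructions.

  # Per-skill boundary block, added to the skill's own definition.
  SKILL_BOUNDARIES = (
      "When to invoke:\n"
      "- Only when the user explicitly asks for a LinkedIn post or comment.\n"
      "When NOT to invoke:\n"
      "- Never for emails, reports, or tasks that merely mention LinkedIn.\n"
      "- Never autonomously as a step inside a larger agentic task."
  )

  # Project-level allow-list, written imperatively so gaps are not filled by inference.
  ALLOWED_SKILLS = ["linkedin-post", "weekly-recap"]  # hypothetical skill names

  SKILL_RULES = (
      "SKILL RULES:\n"
      "- You may only invoke these skills: " + ", ".join(ALLOWED_SKILLS) + ".\n"
      "- You must never invoke a skill that is not on this list, even if it seems adjacent.\n"
      "- You must never create, modify, or delete a skill autonomously. "
      "New skills are created only on an explicit human request."
  )

  # Paste SKILL_RULES into the project instructions; paste SKILL_BOUNDARIES into each skill.

The allow-list handles adjacency-based invocation; the final rule handles autonomous creation.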


The 95% Confidence Quality Prompt

Kasimir shared a single prompt that raises the quality of AI work before the work even starts. The technique: before giving the AI a complex task, ask it directly: “Are you 95% sure you can complete this at a top 0.1% quality level? If not, ask me questions until you are.”

What happens next is the valuable part. Rather than immediately attempting the task, the AI generates five to nine clarifying questions that reveal exactly what information it needs to do the job well. The questions themselves are often more valuable than a first-pass answer would have been — they surface the gaps in your brief that you didn’t know were there.

The group noted that this technique works best for high-stakes outputs where quality really matters: client reports, presentations, published content. For quick drafts and exploratory work, it adds overhead without proportional benefit. But for the work that matters, it’s a reliable way to front-load quality rather than fix it later.

💡 What This Means for You

Before your next significant AI task, add this opener: “Before you begin, are you 95% confident you can complete this at a top 0.1% quality level? If not, ask me questions until you are.” Count how many questions it surfaces. Answer them. Then run the task.
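
For API users, the same ritual can be scripted as a two-phase exchange. A minimal sketch, assuming the Anthropic Python SDK; the task text and model name are placeholders.

  import anthropic

  client = anthropic.Anthropic()
  MODEL = "claude-sonnet-4-5"  # placeholder model name

  TASK = "Draft the executive summary for the Q1 client report."  # hypothetical task
  OPENER = (
      "Before you begin, are you 95% confident you can complete this at a "
      "top 0.1% quality level? If not, ask me questions until you are."
  )

  # Phase 1: surface clarifying questions instead of a first-pass answer.
  messages = [{"role": "user", "content": OPENER + "\n\nTask: " + TASK}]
  first = client.messages.create(model=MODEL, max_tokens=1024, messages=messages)
  print(first.content[0].text)  # typically a numbered list of questions

  # Phase 2: answer the questions yourself, then let it run the task.
  answers = input("Your answers to the questions above:\n")
  messages += [
      {"role": "assistant", "content": first.content[0].text},
      {"role": "user", "content": answers + "\n\nNow complete the task."},
  ]
  final = client.messages.create(model=MODEL, max_tokens=2048, messages=messages)
  print(final.content[0].text)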


Model Selection: Matching Claude to the Task

The session also included a brief but useful exchange on which Claude model to use for which task, a question that comes up constantly but rarely gets answered systematically. Kasimir shared his emerging framework: use the cheaper, faster models for routine generation and iteration, and reserve the more capable, Opus-tier models for tasks requiring deep reasoning, nuanced judgment, or high-stakes synthesis.

The key insight: defaulting to the most powerful model for every task is both wasteful and counterproductive, because heavy reasoning models can actually overthink simple requests. The model you choose should match the cognitive demand of the task.
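
One way to make the rule of thumb concrete is a small routing helper; a sketch with invented task categories, where the model names are placeholders for whichever fast, mid, and Opus-tier models are current.

  def pick_model(task_type: str) -> str:
      """Match the model tier to the cognitive demand of the task."""
      routine = {"draft", "reformat", "summarize", "iterate"}  # cheap and fast is enough
      heavy = {"strategy", "synthesis", "client_deliverable", "architecture"}  # deep reasoning
      if task_type in routine:
          return "claude-haiku-4-5"  # placeholder name for the fast tier
      if task_type in heavy:
          return "claude-opus-4-1"  # placeholder name for the Opus tier
      return "claude-sonnet-4-5"  # sensible middle default

  print(pick_model("summarize"))  # fast tier
  print(pick_model("synthesis"))  # Opus tier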

Deep Dive: Insight - Choose Your Claude Model by Task Type, Not by Default — A practical framework for model selection that prevents both overspending and underperforming.


Community Corner

Dirk Ohlmeier described his ongoing experience using Claude skills in production, including the frustration of skills being applied in unintended contexts. His observation — “Sometimes they just start, oh yeah, I use a LinkedIn skill to answer, and I say, no, stop that!” — was both funny and diagnostic, and it sparked the deeper conversation about skill slop.

Elizabeth Stief demonstrated genuine AI engineering sophistication in the way she diagnosed and fixed her skill-slop problem. Lou later noted, in a different session, that she consistently surprises the group with her depth of technical understanding.


Resources

  • “A Simple Prompt Trick” — the Towards AI article on multi-level contextual prompting that Don Back referenced: pub.towardsai.net
  • AIMM GitHub Skills Repository — the shared repository where members can contribute and access skills; Lou has provided the GitHub link for cloning

Try This Before Next Session

Run the three-layer context audit on your most-used AI workflow.

  1. Write a system-level context block (150 words max): who you are, how you work, your non-negotiables.
  2. Write a project-level context block for one active project: the goal, the constraints, the audience, the tone.
  3. Write a session-level opener for a specific task you’ll actually do this week.
  4. Feed them in order at the start of a new conversation.
  5. Compare the output quality to what you normally get. Note the specific differences.

Bring your observations to next session — especially if the layers created friction or contradiction.


Open Threads

  • How do you enforce skill boundaries at scale — especially when Claude Code tends to autonomously create and modify skills during complex agentic tasks?
  • What’s the minimum viable system prompt structure that prevents drift without becoming a maintenance burden?
  • Is there a way to make the 95% confidence prompt work faster for iterative work, or is it inherently a slow-down technique?
  • At what point does the overhead of layered context setup outweigh the quality benefit for routine tasks?

Next session: February 12, 2026



Derived Artifacts

  • canon-lock (Canon Lock — Don Back’s canonical framework protection)
  • depth-drill (Depth Drill — Kasimir’s layered context prompting)