“The AI was autonomous — it created the skill on its own. I just sort of acknowledged it, and it had already made the decision.” — Elizabeth Stief, on Claude autonomously creating a skill she had only passively acknowledged

Session context: 2026-02-05_Mastermind — during a peer-led session (Lou absent), Elizabeth’s live observation of Claude creating an unsolicited skill sparked a group diagnosis of a systemic design problem with skill-based AI workflows.

Core Idea

Skills without explicit invocation constraints are a liability, not just a convenience. When a skill has no clearly specified trigger condition, that is, when it is defined as a capability but never bounded to the situations in which it should activate, the AI fills the gap with its own inference. And those inferences are often wrong.

Elizabeth demonstrated this directly: she had passively acknowledged an idea for a skill during a Claude Code session. She hadn’t asked for the skill to be created. But Claude, operating without a rule against autonomous skill creation, went ahead and built it. The AI had made a judgment call she hadn’t authorized.

Dirk encountered the same problem from the other direction: his LinkedIn skill began firing on tasks that had nothing to do with LinkedIn, because the AI judged them adjacent enough to justify invoking it. The result was a skill meant for one context contaminating outputs in a completely different one.

This is skill slop: the accumulation of unconstrained skills that the AI can invoke at will, producing unpredictable, hard-to-debug behavior in complex workflows. The more skills you add without constraints, the worse the problem gets, because each new skill enlarges the inference space the AI has to navigate when deciding which of its capabilities a given task calls for.

The fix is architectural, not prompt-level. Elizabeth’s solution: add an explicit list of permitted skills to your project instructions, written imperatively, paired with a hard rule that new skills cannot be created without explicit human authorization. This makes the skill set a closed list rather than an open invitation for AI improvisation.

The design principle is that every skill should specify at least three things: what it does, when it should be invoked, and when it should not be invoked. The “when not” is as important as the “when,” because that is what prevents creative misapplication.
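
As a minimal sketch of the principle, assuming the SKILL.md format Claude Code uses for Agent Skills, where YAML frontmatter carries a name and the description the model reads when deciding whether to invoke; the skill and its conditions here are hypothetical:

```markdown
---
name: linkedin-post-drafter
description: >
  Drafts LinkedIn posts from meeting notes or transcripts. Invoke ONLY
  when the user explicitly asks for a LinkedIn post. Do NOT invoke for
  other platforms, blog posts, or general summaries, even if the content
  looks shareable. Never invoke this skill autonomously.
---

# LinkedIn Post Drafter

What it does: turns meeting notes into a LinkedIn-ready draft.
When to invoke: the user explicitly requests a LinkedIn post.
When NOT to invoke: any other writing task, including adjacent platforms.
"Close enough to LinkedIn" is not a trigger.
```

The exclusion clause is exactly what Dirk's skill was missing: with it, adjacency alone never qualifies as a trigger.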

Practical Application

The Skill Constraint Audit: Review every skill in your current AI setup.

For each skill, add or verify three constraints (a scripted version of this audit is sketched just after the list):

  1. Trigger condition: Under what specific circumstances should this skill activate? What’s the user input pattern that calls for it?
  2. Exclusion condition: What requests look similar to the trigger condition but should NOT invoke this skill?
  3. Authorization rule: Can the AI invoke this skill autonomously, or does it require explicit user request?
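
A rough sketch of how this audit could be scripted, assuming the Claude Code convention of storing each skill at .claude/skills/<name>/SKILL.md; the regex patterns are illustrative stand-ins for whatever constraint wording your skill files actually use:

```python
#!/usr/bin/env python3
"""Skill constraint audit: flag SKILL.md files that lack explicit
invocation constraints. A heuristic sketch, not a real validator."""

import re
import sys
from pathlib import Path

# Each constraint maps to a pattern that suggests it is present.
# Tune these to the wording your own skill files use.
CHECKS = {
    "trigger condition":   re.compile(r"\b(invoke|use|activate)\b.{0,80}\b(only )?(when|if)\b", re.I | re.S),
    "exclusion condition": re.compile(r"\b(do not|don't|never)\b.{0,80}\b(invoke|use|apply)\b", re.I | re.S),
    "authorization rule":  re.compile(r"\b(explicit|authoriz|user request|approval)", re.I),
}

def audit(skills_dir: Path) -> int:
    """Scan every <skill>/SKILL.md under skills_dir; return failure count."""
    failures = 0
    for skill_file in sorted(skills_dir.glob("*/SKILL.md")):
        text = skill_file.read_text(encoding="utf-8")
        missing = [name for name, pattern in CHECKS.items()
                   if not pattern.search(text)]
        if missing:
            failures += 1
            print(f"{skill_file.parent.name}: MISSING {', '.join(missing)}")
        else:
            print(f"{skill_file.parent.name}: ok")
    return failures

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".claude/skills")
    sys.exit(1 if audit(root) else 0)
```

Run it from the project root, or pass a skills directory as the first argument; a nonzero exit code means at least one skill failed the audit, which makes it easy to wire into CI or a pre-commit hook.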

Then add a master rule to your project instructions: “The permitted skills for this project are: [list]. No new skills may be created or invoked without explicit user request. If you believe a new skill would be useful, propose it — do not create it.”
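
A minimal sketch of that master rule in place, assuming a CLAUDE.md-style project instructions file; the skill names are hypothetical placeholders for your own closed list:

```markdown
## Permitted skills (closed list)

The permitted skills for this project are:

- meeting-summarizer: summarize meeting transcripts on explicit request
- linkedin-post-drafter: draft LinkedIn posts on explicit request

No new skills may be created or invoked without an explicit user request.
If you believe a new skill would be useful, propose it in plain text and
wait for approval. Do not create it.
```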

Evolution Across Sessions

This establishes the baseline for the skill constraint problem. Future sessions should test whether the explicit permitted-skills list holds up under complex agentic workflows, and whether there are edge cases where autonomous skill creation is genuinely appropriate. The concept of “skill slop” is new to the vault: prior insights addressed drift in system prompts and composability in architecture, but not this specific failure mode of unconstrained skill invocation.

Derived Artifacts