Original Insight

“The process is more important than the prompts. Prompts are ephemeral — they age, get filtered, or lose effectiveness as models evolve. Process, however, is adaptive.” — Lou (synthesizing Sam Woods’ framing)

“If you want consistent results, build reproducibility, not one-off wins… You don’t start by creating the custom GPT, you start at the thought level, at the strategy level, at the systems level. And then have those two layers of prompts that actually create the ultimate prompt for you.” — Lou

“Our initial way to design prompts was to carry the bucket. We’re not there anymore. So we’re using the systems, the pipelines. But what we’re doing is we’re prompting at a level where we think about what’s a solution to the bucket problem and then come up with the pipeline.” — Lou

Expanded Synthesis

The July 31 session was the most architecturally ambitious of the July series. Lou introduced a framework that shifts the entire way we think about working with AI — from prompt writing to process design.

The problem with the prompt-first approach:

Most people think about AI productivity in terms of prompts: find a good one, use it, refine it, maybe share it. This works for simple, one-off tasks. But it breaks down when you need consistent, high-quality, branded output at scale. Why?

Because prompts are fragile. They are tied to specific model versions, specific contexts, specific moment-in-time configurations. A prompt that produced brilliant newsletter content last month might produce mediocre content today — because the model updated, or because your audience changed, or because you’re in a slightly different context. Every time you need output, you’re starting from a semi-blank slate. You’re carrying the bucket.

The three-level architecture:

Lou described a hierarchical system with three levels of abstraction:

Level 1 — The Task Agent

The prompt that actually does the work: writes the article, answers the client question, generates the proposal. This is where most people currently operate. It is the most fragile level.

Level 2 — The Domain Orchestrator (Meta Prompt)

A prompt that doesn’t do the task, but knows how to generate the task prompt dynamically. It has domain expertise (e.g., writing for knowledge entrepreneurs) and knows when to call which task agents. Crucially, it is context-aware — it pulls relevant information into the task prompt before executing.

Level 3 — The System Architect (Meta-Meta Prompt)

A prompt that is domain-agnostic. It doesn’t know about article writing or coaching or legal research. It knows the structure of a perfect prompt system. It interviews the user, understands what they want to accomplish, and generates the Level 2 orchestrator for that domain.
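The three levels can be read as nested prompt generators: Level 3 produces Level 2, Level 2 produces Level 1, and only Level 1 touches the actual work. A minimal sketch in Python, with `run_model` standing in for any LLM call (all function and field names here are illustrative, not part of Lou's actual system):

```python
def run_model(prompt: str) -> str:
    """Placeholder for a real LLM call; stubbed so the structure is the point."""
    return f"<model response to: {prompt[:40]}...>"

def system_architect(interview_answers: dict) -> str:
    """Level 3: domain-agnostic. Generates a Level 2 orchestrator prompt."""
    return (
        f"You are a domain orchestrator for {interview_answers['domain']}. "
        f"Goal: {interview_answers['goal']}. "
        "Before any task, assemble audience pain points, voice profile, "
        "and goals into context, then generate the task prompt."
    )

def domain_orchestrator(orchestrator_prompt: str, task: str, context: dict) -> str:
    """Level 2: domain-aware. Builds the Level 1 task prompt dynamically."""
    context_block = "\n".join(f"{k}: {v}" for k, v in context.items())
    return f"{orchestrator_prompt}\n\nContext:\n{context_block}\n\nTask: {task}"

# Level 3 generates Level 2; Level 2 generates Level 1; Level 1 does the work.
orchestrator = system_architect(
    {"domain": "newsletters for knowledge entrepreneurs",
     "goal": "consistent branded articles"})
task_prompt = domain_orchestrator(
    orchestrator, "draft this week's article",
    {"audience_pain": "pricing confusion", "voice": "direct, warm"})
article = run_model(task_prompt)
```

The fragile Level 1 prompt is never hand-written here; it is regenerated from current context on every run, which is what makes the system survive model and audience changes.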

Why this changes everything:

When you operate at Level 1, you must maintain each prompt manually. When the audience changes, when the model updates, when your product evolves — you rewrite the prompt. This is the “carry the bucket” approach.

When you operate at Level 3, you have a system that generates and regenerates itself based on current reality. The Level 3 prompt doesn’t know what the article is about; it knows how to create a system that will figure out what the article should be about, based on your audience’s current pain points (retrieved dynamically via search), your voice profile (in memory), and your goals.

Context engineering — the underlying mechanism:

What makes the Level 2 and Level 3 prompts powerful is context engineering: the deliberate construction of the information environment in which a prompt operates. This is not just “add context to your prompt.” It is designing a system that dynamically assembles the right context before the task executes.

In Lou’s live demonstration, he showed this by first running a simulated expert debate on the topic, then having a judge evaluate the debate, then summarizing the conclusions — all before asking for the final article. By the time the article request arrived, the prompt’s context contained a structured, multi-perspective analysis of the topic. The article that emerged addressed all the relevant angles, pre-empted objections, and took a defensible position. Ten minutes of context engineering produced content that would have taken hours of research and drafting to achieve manually.

The bucket-to-pipeline metaphor:

Lou used the water carrier story as an anchor: carrying a bucket serves the immediate need but scales terribly. The pipeline changes the question from “how do I carry this water?” to “what system ensures the water is always available?” The meta-prompting architecture is the pipeline. You invest once in building the system; thereafter, the system does the work.

The coaching application:

Lou demonstrated a specific coaching scenario: a client asks “I’m stuck on pricing — should I charge hourly or package-based?” In a task-level approach, you prompt the AI: “Help me think through pricing.” You get a generic answer. In the process-level approach, you have a co-pilot system that automatically (1) classifies the client’s business model, (2) identifies likely cognitive and emotional pricing blocks based on that model, (3) retrieves relevant case studies from your client archive, and (4) frames a tailored coaching question back to the client. The AI is not just answering the question — it is following your coaching methodology.
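The four-step co-pilot flow above can be sketched as a small pipeline. Everything here is a hypothetical stub (the classifier, the blocks lookup, the archive format are all assumptions for illustration, not Lou's implementation):

```python
def classify_business_model(question: str) -> str:
    # Stub: a real system would have the model classify from client data.
    return "service-based coaching"

def likely_pricing_blocks(model_type: str) -> list:
    # Stub lookup of cognitive/emotional blocks keyed on business model.
    blocks = {"service-based coaching": ["fear of anchoring too high",
                                         "equating time with value"]}
    return blocks.get(model_type, [])

def retrieve_case_studies(model_type: str, archive: list) -> list:
    # Stub retrieval from the coach's client archive.
    return [c for c in archive if c["model"] == model_type]

def frame_coaching_question(question, blocks, cases) -> str:
    case_refs = ", ".join(c["client"] for c in cases) or "none on file"
    return (f"Client asked: {question}\n"
            f"Likely blocks: {'; '.join(blocks)}\n"
            f"Relevant cases: {case_refs}\n"
            "Frame one tailored question that surfaces the block.")

archive = [{"client": "A", "model": "service-based coaching"}]
q = "Should I charge hourly or package-based?"
mt = classify_business_model(q)
prompt = frame_coaching_question(q, likely_pricing_blocks(mt),
                                 retrieve_case_studies(mt, archive))
```

The point of the sketch: the client's question is never answered directly. It is routed through the methodology first, so the final prompt already encodes the coach's process.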

This is the difference between AI that assists you and AI that embodies your process.

Practical Application for PowerUp Clients

Building Your First Meta Prompt:

Start with a task you do repeatedly with AI (write a newsletter, create a client summary, draft a proposal, generate social posts).

Step 1: Identify your task-level prompt

What prompt do you currently use for this task? Write it down.

Step 2: Generalize it

Ask the AI: “How could I make this prompt dynamic? Instead of being specific to this one instance, how could it adapt to any version of this type of task?”

Step 3: Templatize

Ask the AI: “Write a template version of this prompt that I could use for any [article / proposal / coaching response] without changing the core instructions.”

Step 4: Build the context layer

Ask the AI: “What information would this prompt need to have in its context to produce the best possible output? List the data sources, the questions it should answer first, and the knowledge it should activate before executing.”

Step 5: Combine

Merge the template and the context layer into a single meta prompt. Test it. Iterate.
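One way the five steps might land, sketched in Python: Step 3's output becomes a template with slots, Step 4's output becomes a context layer that fills those slots before each run, and Step 5 merges them. All slot names and values are illustrative assumptions, not a prescribed schema:

```python
from string import Template

# Step 3 output: a template version of the task prompt.
META_TEMPLATE = Template(
    "Write a $content_type for $audience.\n"
    "Voice: $voice\n"
    "Current audience pain points: $pain_points\n"
    "Core instructions: $core_instructions"
)

# Step 4 output: the context layer, answered fresh before each run.
def build_context(content_type: str) -> dict:
    return {
        "content_type": content_type,
        "audience": "knowledge entrepreneurs",            # from your profile
        "voice": "direct, warm, example-heavy",           # from memory/voice work
        "pain_points": "pricing confusion, AI overwhelm", # retrieved dynamically
        "core_instructions": "one thesis, one CTA, no jargon",
    }

# Step 5: combine. The meta prompt regenerates itself from current context.
meta_prompt = META_TEMPLATE.substitute(build_context("newsletter"))
```

Because the context layer is a function rather than hard-coded text, updating the audience or voice updates every future prompt automatically; that is the pipeline replacing the bucket.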

The 10-Minute Article Process (Lou’s demonstrated workflow):

  1. State the topic and the thesis you want to argue
  2. Ask the AI to simulate a 10-minute expert debate from two opposing positions on this topic
  3. Ask the AI to analyze the debate and declare a winner with reasoning
  4. Ask the AI to write a 1,500-word introductory article for [your specific audience] based on the debate analysis, taking the winning position

This four-step sequence takes 10 minutes and produces an article grounded in the strongest arguments on both sides of the question. It resists the generic, hedge-everything output AI defaults to, because you’ve forced the model to engage with real intellectual tension before writing.
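The four steps above can be sketched as a chained conversation, where each step's output becomes context for the next. Here `ask` is a stub standing in for any chat-model call; topic and thesis are made-up examples:

```python
def ask(history: list, prompt: str) -> str:
    history.append(prompt)                 # context accumulates across steps
    return f"<response {len(history)}>"    # stub; a real model call goes here

history = []
topic = "hourly vs. package pricing for coaches"
thesis = "package pricing wins for most coaches"

# Step 2: simulated expert debate on the stated topic.
debate = ask(history, f"Simulate a 10-minute expert debate on: {topic}. "
                      "Two opposing positions, strongest arguments each.")
# Step 3: judge the debate.
verdict = ask(history, "Analyze this debate and declare a winner with "
                       f"reasoning:\n{debate}")
# Step 4: write the article from the winning position.
article = ask(history, "Write a 1,500-word introductory article for "
                       f"coaches, arguing '{thesis}', grounded in:\n{verdict}")
```

What matters is the chaining: by the time the article request arrives, the conversation already contains the debate and the verdict, so the final prompt is operating on engineered context rather than a blank slate.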

Journal prompts:

  • What tasks do I perform repeatedly with AI where I’m “carrying the bucket” — doing the same work every time because I haven’t built the pipeline?
  • What is my coaching process, stripped to its essential steps? Could an AI follow that process if it was encoded as a meta prompt?
  • What would it mean for my business if the AI consistently embodied my methodology rather than just responding to individual questions?


Evolution Across Sessions

This is the capstone insight of the July series. It synthesizes and elevates everything that came before: the MVP/scope discipline (July 3), the outsourcing framework (July 10), the latent space and prompt calibration work (July 17), the memory and voice work (July 24) — all of these are building toward the ability to encode your expertise, your process, and your judgment into systems that operate consistently at scale. July 31 names that destination explicitly.

Next Actions

  • For me (Lou): Develop the meta-prompting framework into a structured workshop for the mastermind group; create the “bucket-to-pipeline” worksheet as a teaching resource
  • For clients: Identify your single most-used AI workflow; apply the five-step “Building Your First Meta Prompt” process to it; share the result with the group