Original Insight
“When you said I want you to act in this particular way — well, you’re removing a whole bunch of options for it to return. So the good news is hopefully you get answers that are more relevant, right? Because you’re narrowing the amount of space in which it’s looking. But you’re not getting the benefit of the generalized knowledge that it might be helpful in some situations.” — Lou
“I just mentioned it in a kind of angry way, that I say, you know what, just behave like an investor. And then it was switching really into that role. It was significantly better. Yeah, it was really like, okay, playing the roles — at the end of the day it’s a kind of theater where you put the roles like they should behave, and then they do the research.” — dirkohlmeier
“There are instructions, there’s goals, there’s context, there’s constraints, there’s examples — there’s all those different sections, and each one filters down the behavior a little bit more, a little bit more.” — Lou
Expanded Synthesis
Dirk’s discovery — that switching from “act as a Big Four analyst” to “act as an investor who hired Bain to research companies for their portfolio” produced dramatically better output — led to one of the most technically precise and practically useful discussions in the July series.
The core principle: prompt length and specificity are not a matter of "more is better." They are about intentionally filtering the model's search space.
Understanding the latent space metaphor:
When you send a prompt to a large language model, you are not sending it to a blank slate. You are sending it into a vast pre-trained space containing patterns from essentially all human knowledge available at training time. The model’s job is to find, within that space, the patterns most relevant to your query and use them to generate a response.
A short, vague prompt — “tell me about this company” — gives the model almost no filtering constraints. It could pull from financial analysis, news coverage, employee reviews, industry commentary, product descriptions. Some of it will be relevant; much won’t be. The output may be broad and interesting but unfocused.
A role assignment — “you are an investor who has hired Bain Capital to research this company for a potential acquisition” — does something structurally different. It activates a specific cluster within the latent space: the patterns associated with investment-grade due diligence, financial risk assessment, CEO blind spot identification, competitive moat analysis. It narrows the search space dramatically, and what comes back is far more concentrated and relevant.
The prompt-length decision framework:
| Situation | Prompt Approach | Why |
|---|---|---|
| Exploring a new idea or domain | Short, open prompt | You want the model to bring perspectives you haven’t considered |
| Researching for a specific deliverable | Role assignment + context | Narrows latent space to the relevant expertise cluster |
| Building a customer-facing system | Full specification | Prevents the model from going rogue across all possible topics |
| Generating consistent reports or proposals | Long prompt with format examples | Ensures reproducibility; the model defaults to your format |
The sections of a complete prompt:
Lou named these explicitly, and they deserve to be treated as a standard framework:
- Instructions — what to do
- Goals — what success looks like
- Context — relevant background information
- Constraints — what not to do, what to avoid
- Examples — show what the output should look and feel like
- Role — whose perspective to adopt
Each section adds a layer of filtering. A prompt with all six sections is maximally specific — appropriate for production systems. A prompt with only instructions is maximally open — appropriate for exploration. Choosing which sections to include is itself a strategic decision.
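To make the layering concrete, here is a minimal sketch of how the six sections could be assembled programmatically. The function and section names are hypothetical illustrations, not anything demonstrated in the session; the point is that each section you supply adds a filter, and each one you omit leaves the prompt more open.

```python
# Hypothetical sketch: assemble a prompt from the six sections Lou named.
# Any section left out is simply omitted -- fewer sections, more open prompt.
SECTION_ORDER = ["role", "instructions", "goals", "context", "constraints", "examples"]

def build_prompt(**sections):
    """Join the provided sections in a fixed order, skipping missing ones."""
    parts = []
    for name in SECTION_ORDER:
        value = sections.get(name)
        if value:
            parts.append(f"{name.upper()}:\n{value}")
    return "\n\n".join(parts)

# Maximally open: instructions only -- appropriate for exploration.
open_prompt = build_prompt(instructions="Tell me about this company.")

# More closed: role + constraints narrow the search space.
closed_prompt = build_prompt(
    role="You are an investor evaluating a potential acquisition.",
    instructions="Assess this company's competitive position.",
    constraints="Do not speculate beyond the provided financials.",
)
```

Choosing which keyword arguments to pass is exactly the strategic decision described above: the open prompt leaves the model free; the closed prompt concentrates it.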
The “write a haiku” example that Lou used is memorable and precise: the moment you say “write a haiku,” you’ve already specified the output format — the model knows exactly what structural constraints apply. You didn’t need to explain haiku structure; the word itself activates that knowledge cluster. Compare to “write a poem” — now the model has full latitude across every possible poetic form. Same content request; radically different search space.
Practical implication for coaches:
When you’re building a coaching prompt — whether for your own productivity or for a client-facing tool — ask these two questions:
- Do I want the model to surprise me with perspectives I haven’t considered? (short, open)
- Do I want the model to reliably produce a specific kind of output? (long, specific)
These are fundamentally different objectives, and they require fundamentally different prompts.
Practical Application for PowerUp Clients
The Prompt Calibration Exercise:
Take any prompt you currently use regularly. Run it through these questions:
- Which sections are missing? Does it have a role? Context? Constraints? Examples?
- Is it open or closed? If open, is that intentional? If closed, is it closed enough?
- What is the model’s search space? Imagine the model searching for relevant patterns — are you helping it find the right cluster?
The Role Activation Protocol:
Before writing any complex prompt, choose the role carefully. Not “expert” or “analyst” — those are too broad. Instead: “You are a [specific type of expert, e.g., investment partner at a growth-stage fund] who [specific context, e.g., has seen 300 B2B SaaS companies fail and survive] and is now [specific task, e.g., reviewing this company’s go-to-market strategy for fatal flaws before a Series B investment].”
Every element of that role description activates a different cluster in the latent space and steers the output.
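The three slots in that protocol can be written down as a fill-in template. This is a hypothetical sketch (the template string and variable names are illustrative, not from the session) that makes the expert/context/task structure explicit:

```python
# Hypothetical sketch of the Role Activation Protocol's three slots:
# a specific expert, specific lived context, and a specific task.
ROLE_TEMPLATE = "You are a {expert} who {context} and is now {task}."

role = ROLE_TEMPLATE.format(
    expert="investment partner at a growth-stage fund",
    context="has seen 300 B2B SaaS companies fail and survive",
    task=(
        "reviewing this company's go-to-market strategy "
        "for fatal flaws before a Series B investment"
    ),
)
```

Swapping any one slot, say, "growth-stage fund" for "turnaround fund", steers the output toward a different expertise cluster while the template structure stays fixed.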
For building coaching tools:
If you want to build an AI coaching prompt that reliably delivers high-quality responses to client questions:
- Write role + goals + constraints as a fixed system prompt
- Leave instructions dynamic (based on the client's specific question)
- Provide 2-3 examples of ideal responses

This gives you reproducibility without rigidity.
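That fixed/dynamic split maps cleanly onto the system/user message structure most chat-style model APIs use. The sketch below is a generic illustration, not a specific vendor SDK; the coaching content of the system prompt is a made-up example, not a recommended script:

```python
# Hypothetical sketch: a fixed system prompt (role + goals + constraints +
# examples) paired with a dynamic user instruction per client question.
SYSTEM_PROMPT = """\
ROLE: You are an executive coach specializing in leadership transitions.
GOALS: Help the client reach their own insight; do not prescribe solutions.
CONSTRAINTS: Ask at most two questions per reply. Never give legal advice.
EXAMPLES:
Client: I can't get my team to take ownership.
Coach: What would 'ownership' look like in one specific meeting this week?
"""

def build_messages(client_question: str) -> list[dict]:
    """Pair the fixed system prompt with the client's dynamic question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": client_question},
    ]

messages = build_messages("How do I delegate without micromanaging?")
```

Only the user message changes between client questions, which is where the reproducibility comes from: the role, goals, constraints, and examples are identical on every call.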
Journal prompts:
- When I’m not getting the AI output I want, am I being too open (insufficient role/context) or too closed (too many constraints that prevent good thinking)?
- What roles should I be assigning to AI when I’m doing my most important work?
- What output format do I want consistently? Have I encoded that into my prompts?
Additional Resources
- Insight - Codify Your Judgment Into Skills, Not Just Prompts
- Anthropic’s system prompt documentation — the 24-page system prompt Lou referenced as an example of production-level prompt engineering
- Insight - Build Tiny Tools That Remove Real Friction
Evolution Across Sessions
This insight builds on earlier conversations about prompting basics and anticipates the July 31 deep dive into meta-prompting and process-over-prompts. The latent space framing here is the conceptual foundation for understanding why meta-prompting works — you’re not just writing better prompts, you’re learning to navigate the model’s knowledge architecture deliberately.
Next Actions
- For me (Lou): Create a one-page Prompt Calibration Guide using the six sections framework and the open/closed decision matrix — use as a teaching resource
- For clients: Take any AI task you’ve been frustrated with and apply the six-section framework to diagnose what’s missing; specifically try adding a precise role assignment