Topic

Why feeding AI your own marketing copy produces generic outputs — and how feeding it raw voice-of-customer (VOC) material (testimonials, call transcripts, intake responses) produces content and schema that are richer, more citable, and more client-resonant than anything you could write about yourself.

Target Reader

A coach or consultant who has been using AI to create content — LinkedIn posts, website copy, email sequences — and has noticed the outputs feel polished but off. They are producing more content with AI, but it doesn’t land the way their most compelling case studies or testimonials do. They are not sure why, and they haven’t connected the problem to the quality of their inputs.

The Fear / Frustration / Want / Aspiration

“I’m using AI for content creation but everything comes out sounding the same — and a bit too professional. It sounds like me talking about my work, not like me actually solving my clients’ problems. I want my AI content to feel as real and specific as my best testimonials.”

Before State

The reader is loading AI with their bio, their services page, their LinkedIn summary — the self-authored, inside-out version of their expertise. The AI dutifully produces content in the register of that input: polished, framework-forward, professional. It has no access to the raw, specific, emotionally resonant language their clients actually use, because the reader has never given it any.

After State

The reader has a concrete understanding of why VOC material outperforms marketing copy as AI input, and a specific three-step sprint to collect and deploy it. They know what to underline in a testimonial, what to pull from a call transcript, and how to load it before any AI content session. Their next AI content session will produce noticeably more specific, more human-sounding output — because the inputs are more human.

Narrative Arc

The assumption: better prompts produce better outputs. The complication: better inputs produce better outputs than better prompts, and most coaches are giving AI the worst possible inputs — their own marketing language. The mechanism: marketing copy is optimized for conversion; it maps to professional categories. VOC material maps to the felt-experience layer — the language of distress, confusion, and hope that clients actually use when searching for help. That is also the language AI engines use to retrieve and cite content. The resolution: a simple collection sprint flips the input ratio and immediately improves AI output quality for both content creation and GEO schema building.

Core Argument

The single highest-leverage fix for generic AI output is not a better prompt — it is better source material. Raw client language, extracted from testimonials and call transcripts, carries more specificity, more emotional resonance, and more citability per sentence than anything a coach would write about themselves.

Key Evidence / Examples

  • Lou’s direct statement (2026-01-22): “The highest-value input is not the content you’ve written about yourself. It’s the language your clients used to describe their experience.”
  • The structural mismatch: “I didn’t know how to rebuild trust after the reorg and I was terrified I’d lose half my team” — not a marketing sentence, but precisely the query language an executive coach’s client would type to an AI
  • GEO citability mechanism: schema built from VOC maps to the felt-experience layer; AI engines retrieve at the experience level, not the professional category level
  • The ICH-First Protocol: load testimonials or your Ideal Client Handbook before any AI content or schema request — the output reflects richer client psychology because the model was grounded in their language first
  • The Schema Input Audit: if 80% of your schema material is self-authored professional copy, flip the ratio — VOC carries more citability per sentence
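To make the citability mechanism concrete, here is an illustrative sketch of what "schema built from VOC" can look like. The JSON-LD shape is standard schema.org FAQPage markup; the client quote is the transcript excerpt cited above, and the answer text and the helper function name are hypothetical, not prescribed by this brief. The point is that the question field carries raw felt-experience language rather than a professional-category label:

```python
import json

# VOC excerpt from a call transcript (quoted in the evidence list above).
voc_question = ("I didn't know how to rebuild trust after the reorg "
                "and I was terrified I'd lose half my team")

def faq_schema_from_voc(question: str, answer: str) -> dict:
    """Build a schema.org FAQPage entry whose question is raw client language."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": question,  # felt-experience language, not a service label
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }],
    }

# Hypothetical answer copy; a real page would use the coach's own material.
schema = faq_schema_from_voc(
    voc_question,
    "Rebuilding trust after a reorg starts with naming the fear openly "
    "with your team, then making one visible commitment and keeping it.",
)
print(json.dumps(schema, indent=2))
```

Contrast this with self-authored schema, where the question field would read something like "What executive coaching services do you offer?" — a professional-category phrase no distressed client would ever type.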

Proposed Structure (5–7 beats)

  1. The generic output problem — name the symptom: AI content that sounds professional but not human; polished but not specific
  2. The input diagnosis — the problem is not the prompt; it is the source material. Most coaches give AI only what they wrote about themselves
  3. What marketing copy sounds like vs. what clients actually say — a direct comparison of the two registers; the felt-experience language is what AI engines retrieve
  4. The VOC advantage at two levels — (1) content creation: richer inputs produce richer outputs; (2) GEO schema: VOC maps to the citability layer where AI engines actually search
  5. The collection sprint — three categories to gather before your next AI session: testimonial excerpts, call transcript moments, intake phrases you didn’t give them. What to underline in each
  6. The ICH-first protocol — load before you create; a two-minute habit that changes the register of every AI content session

Editorial Notes

Score: 4.4. Actionable and Useful are both 5 — the collection sprint is immediately deployable and the problem it solves (generic AI output) is live for most readers. Timely and Insightful are both 4 — the principle is not entirely counterintuitive (most marketers know testimonials are gold), but the specific AI-input application and the GEO citability angle are genuinely non-obvious.

Adjacent briefs: Brief - Write to the Symptom Not the Solution.md (4.6) and Brief - Your Clients Are Not Googling Your Solution They Are Googling Their Confusion.md (4.6) own the why of symptom-layer positioning. This brief owns the how of building AI inputs from VOC. Frame it as the practical complement: once you understand why symptom language matters (those briefs), this one tells you how to collect and deploy it.

Avoid: making this about “prompt engineering.” The point is source material quality, not prompt technique.

Next Step

  • Approved for drafting
  • Needs revision
  • Deprioritised