Original Insight

“Anytime the patients called, there was somebody to talk to, basically. Yeah, they knew that it would be a DORA, they knew it was going to be an AI… 92% satisfaction… I think it not only frees up the person to do more important work… they don’t have to go through all that repetitive conversation — the frequently asked questions, or simple appointment juggling… And suddenly, you know, everybody wins.” — Lou, on the NHS DORA voice AI case study

Expanded Synthesis

Bally Binning brought a remarkable real-world case study to the November 27 session: a startup working with Oxford University and the NHS deployed a voice AI agent called DORA for post-cataract surgery follow-up care. The results were striking. Patient wait times fell from 35 weeks to 10 weeks, and patient satisfaction came in at 92%, even among an older demographic who knew full well they were talking to an AI.

This case study surfaced one of the clearest principles for where AI automation genuinely works — and the mechanism behind it deserves careful attention for coaches and consultants thinking about where AI can augment service delivery without eroding client trust.

The human fatigue problem. Don Back articulated it cleanly: there’s a strong human reluctance to make repetitive outbound calls, especially when the caller has to deliver the same information, answer the same questions, and manage the same objections dozens of times a day. Lou added the patient-facing version: staff who’ve answered the same question 100 times that day don’t show up with a warm, helpful personality. The interaction degrades on both sides. The AI doesn’t get tired; it brings the same quality of attention to call 500 as it did to call 1.

Why the high-stakes demographic accepted it. The fact that older patients — not typically early adopters of AI technology — showed 92% satisfaction is particularly significant. The key conditions that made it work: transparency (they knew it was AI), competence (the linguists had refined the conversation quality over two years), and accessibility (there was always someone available to answer). The comparison isn’t “AI vs. a warm, attentive human” — it’s “AI vs. never getting through, or being told to call back later.” Against that baseline, AI wins easily.

The structural diagnosis. Don Back (drawing on his background as VP of Health Innovation) identified the pattern precisely: “The money is already committed. The money is already spent. But the throughput is not there.” This is true far beyond healthcare — it’s true in coaching practices, consulting firms, and service businesses of all kinds. Most service providers have invested in client acquisition and quality delivery, but the between-session administrative layer — scheduling, follow-up, FAQ resolution, check-ins — is poorly served. AI can take that layer without diminishing the high-value moments.

The win-win psychology. Don’s observation about implementation success is critical: “If you cause additional work for the humans, they will resist and they will kill it. But if you can make their job easier, and this is the psychology of creating a win-win.” This applies directly to coaching and consulting contexts. Any AI adoption that threatens the practitioner’s sense of professional identity or increases their administrative burden will be resisted or quietly abandoned. The ones that succeed do exactly what DORA did: take the repetitive, low-meaning interactions and handle them, freeing up the human for the interactions that actually require human presence.

The coaching practice application. For coaches and consultants in PowerUp’s client base, the equivalent of DORA is straightforward to imagine: a well-configured AI agent that handles scheduling, answers FAQs about the coaching process, provides session prep reminders, delivers between-session journaling prompts, and routes urgent questions to the appropriate channel. None of this touches the high-trust, high-stakes coaching conversation — it handles everything around that conversation.
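
To make that concrete, here is a minimal sketch of the routing idea: low-stakes administrative messages go to the AI layer, anything sensitive goes straight to the coach. All names here (Route, route_message, the keyword lists) are illustrative assumptions, not any specific platform’s API; a real deployment would use proper intent classification rather than keyword matching.

```python
# Minimal sketch of "AI handles the layer around the conversation, not the conversation itself".
# Everything here is illustrative; no real voice/chat platform is referenced.
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    AI_HANDLES = auto()   # low-stakes, high-repetition: scheduling, FAQs, reminders
    AI_ASSISTS = auto()   # medium-stakes: AI drafts, the coach reviews before sending
    HUMAN_ONLY = auto()   # high-stakes: goes straight to the coach, the AI stays out


@dataclass
class ClientMessage:
    text: str
    urgent: bool = False


# Rough keyword buckets standing in for real intent classification.
FAQ_OR_ADMIN = ("reschedule", "invoice", "what time", "zoom link", "how does", "prep")
SENSITIVE = ("struggling", "crisis", "quit", "upset", "confidential")


def route_message(msg: ClientMessage) -> Route:
    """Decide which layer of the practice responds to a client message."""
    lowered = msg.text.lower()
    if msg.urgent or any(word in lowered for word in SENSITIVE):
        return Route.HUMAN_ONLY
    if any(word in lowered for word in FAQ_OR_ADMIN):
        return Route.AI_HANDLES
    return Route.AI_ASSISTS  # default: AI drafts a reply, the coach approves it


# "Can we reschedule Thursday's session?"      -> Route.AI_HANDLES
# "I'm really struggling since our last call." -> Route.HUMAN_ONLY
print(route_message(ClientMessage("Can we reschedule Thursday's session?")))
```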

The blind spot. Two risks are worth naming. First, the quality of the conversation matters enormously — the linguists working on DORA spent two years refining the interaction for latency, interruption handling, filler words, and tonality. A poorly configured AI that sounds robotic or unhelpful will damage trust more than no AI at all. Second, transparency is non-negotiable at this level of stakes. The 92% satisfaction was with people who knew they were talking to AI. Attempting to pass off AI as human in emotionally sensitive contexts is a trust-destroying strategy.
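
To give a sense of what that two years of refinement touches in practice, here is a purely hypothetical set of voice-agent tuning settings covering the dimensions named above (latency, interruption handling, filler words, tonality, transparency). None of these keys correspond to DORA or to any specific platform’s API.

```python
# Hypothetical voice-agent tuning knobs for the quality dimensions named above.
# Keys and values are illustrative only; real platforms expose different settings.
voice_agent_config = {
    "latency": {
        "target_response_ms": 800,       # pause before the agent starts speaking
        "backchannel_enabled": True,     # brief acknowledgements while it "thinks"
    },
    "interruption_handling": {
        "barge_in_enabled": True,        # the patient can talk over the agent
        "resume_strategy": "summarise",  # after an interruption, summarise rather than repeat
    },
    "speech_style": {
        "filler_words": "sparse",        # occasional "okay" / "right" so it doesn't sound scripted
        "tonality": "warm_measured",     # calm pacing suited to an older demographic
    },
    "transparency": {
        "disclose_ai_identity": True,    # non-negotiable: the patient knows it's an AI
    },
}
```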

Practical Application for PowerUp Clients

The Repetition Audit (Framework)

Ask coaching or consulting clients to map their current client interactions across three categories:

  1. High-stakes, high-presence: These require full human attention. Discovery sessions, breakthrough conversations, challenging feedback, emotional processing. Never automate.
  2. Medium-stakes, medium-presence: Check-ins, progress reviews, accountability loops. AI can assist (reminders, prompts, summaries) but should support, not replace, the human.
  3. Low-stakes, high-repetition: Scheduling, FAQ responses, session prep reminders, resource delivery, administrative follow-up. This is the DORA zone. AI handles; human monitors.

The goal is to identify at least 3 specific interactions that currently consume practitioner time and could be handled by a well-configured AI agent without diminishing the client experience.
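
As an illustration, the audit can be captured in a simple worksheet structure like the sketch below. The names (Interaction, dora_zone, the example rows and hour figures) are illustrative assumptions, not a formal PowerUp tool.

```python
from dataclasses import dataclass

CATEGORIES = ("high-stakes", "medium-stakes", "low-stakes")  # the three buckets above


@dataclass
class Interaction:
    name: str               # e.g. "session prep reminder"
    category: str           # one of CATEGORIES
    hours_per_week: float   # practitioner time it currently consumes


def dora_zone(interactions):
    """Return the low-stakes, high-repetition interactions: candidates for the AI layer."""
    return [i for i in interactions if i.category == "low-stakes"]


# Example audit for a solo coaching practice (figures are made up for illustration).
audit = [
    Interaction("discovery session", "high-stakes", 4.0),
    Interaction("progress check-in", "medium-stakes", 2.0),
    Interaction("scheduling back-and-forth", "low-stakes", 1.5),
    Interaction("FAQ replies about the coaching process", "low-stakes", 1.0),
    Interaction("session prep reminders", "low-stakes", 0.5),
]

candidates = dora_zone(audit)
recovered_hours = sum(i.hours_per_week for i in candidates)
# Three candidates here, recovering roughly 3 hours per week: the "DORA zone" target.
```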

The “win-win” framing: Any AI adoption proposal to a client or team member should be framed around what it gives them (time, energy, and attention freed from repetitive tasks), not what it takes from them. The NHS case is the ideal story to tell: older patients who knew they were talking to AI were satisfied because the AI was always available and competent, and because it freed up the human staff for genuinely human work.

Coaching questions:

  • “What are the interactions in your practice that you find most draining — not because they’re emotionally difficult, but because they’re repetitive and mechanical?”
  • “If you had an always-available assistant who could handle your scheduling, FAQs, and session prep reminders, what would you do with the recovered time?”
  • “What would your clients need to know or feel to be comfortable interacting with an AI layer in your practice? What would make them trust it?”
  • “Where in your service delivery does the human version consistently underperform — not because the person isn’t good, but because they’re tired, or overloaded, or distracted?”

Evolution Across Sessions

This insight from November 27 is the real-world case study that grounds the more abstract GEO and automation discussions. Where the GEO arc is about authority and visibility, the DORA case study is about service delivery — the other side of the knowledge entrepreneur’s business. Together they point to the same strategic conclusion: AI should take the mechanical, repetitive, high-friction work so that human expertise can concentrate at the moments where it genuinely matters.

Next Actions

  • For me (Lou): Research voice agent platforms suitable for coaching practice use cases. Consider building a demo for PowerUp clients showing a simple session-prep / FAQ voice interaction.
  • For clients: Add the Repetition Audit to the service delivery review portion of PowerUp coaching. Help clients identify their “DORA zone” and start designing the AI layer for it.