Original Insight

“I’m thinking I could build something to help clients prepare for behaviour based interviews. Pose the situational question, have the client speak the response into the AI, then evaluate the answer AND give guidance on improvement.” — Don Back

Expanded Synthesis

Don’s idea emerged during a session demo showing an AI system that could dynamically adapt questioning based on the quality and direction of the user’s responses. The application he immediately saw was preparation for behavior-based interviews — but the insight generalizes to any high-stakes conversation a coaching client needs to rehearse before it happens.

The core architecture is simple and powerful: an AI poses a situational question (e.g., “Tell me about a time you handled a conflict with a direct report”), the client speaks or types their response, and the AI evaluates two things simultaneously — the quality of the answer and how to improve it. The evaluation layer is what separates this from a simple prompt template. It means the system is not just asking questions; it is actively teaching response quality in real time.
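The two-part evaluation described above can be captured in a single system prompt. A minimal sketch follows; the wording is illustrative, not a tested prompt, and the model it would be sent to is left unspecified:

```python
# Illustrative system prompt for the question -> response -> evaluation loop.
# The exact wording is an assumption of this sketch, not a vetted prompt.
SYSTEM_PROMPT = """\
You are a behavioral-interview practice partner.
For each turn:
1. Pose ONE situational question (e.g. "Tell me about a time you
   handled a conflict with a direct report").
2. Wait for the candidate's answer.
3. Evaluate the answer on two axes at once: quality (does it follow
   the STAR format -- Situation, Task, Action, Result?) and improvement
   (give exactly ONE concrete suggestion to try on the next attempt).
"""
```

The single-suggestion constraint in step 3 is deliberate: one improvement per attempt keeps the feedback actionable rather than overwhelming.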

What makes this particularly compelling is that behavioral interview coaching has historically been expensive and logistically difficult to deliver. Good interview coaches are rare, and the repetition required to develop fluency — running through 20 or 30 scenarios until a STAR-format answer feels natural — is tedious work for a human coach to sit through session after session. AI handles repetition without fatigue, maintains consistent evaluation criteria, and can scale to any number of practice sessions at negligible cost.

Don noted that the key prerequisite is a solid job role description — the system only evaluates answers accurately if it has a clear standard to evaluate against. This is actually a useful coaching observation independent of the AI application: most behavioral interview failures happen not because candidates lack the experience, but because the evaluation criteria were never made explicit. The AI forces that clarity into the design of the system.

The pattern is not limited to interviews. Any scenario where a client needs to rehearse a high-stakes conversation — a difficult client negotiation, a salary ask, a pitch to a board, a coaching conversation with a direct report — follows the same structure. Situational prompt → client response → AI evaluation → guided improvement. The coaching value is that clients can run through ten iterations before the real conversation, arriving with earned confidence rather than theoretical preparation.

This also represents a scalable revenue model for coaches. A system like this can be offered as an asynchronous self-study tool between sessions, increasing touch-point frequency without increasing the coach’s time. The coach designs the evaluation framework once — encoding the criteria for what a strong answer looks like in their domain — and the AI delivers that methodology repeatedly at scale.

Practical Application for PowerUp Clients

Build a behavioral practice system in three layers:

  1. Question bank: Compile 20-30 situational questions specific to your clients’ target roles or conversations.
  2. Evaluation rubric: Define what a strong, medium, and weak answer looks like for each question type (e.g., STAR format compliance, specificity, relevance of the example, confidence of delivery).
  3. System prompt: Instruct the AI to pose a question, evaluate the response against the rubric, and give one specific improvement to try before the next attempt.
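The three layers can be sketched as plain data plus a prompt builder. The question texts and rubric wording below are placeholders a coach would replace with their own domain criteria:

```python
# Layer 1: question bank (illustrative entries, not a vetted bank).
QUESTION_BANK = [
    "Tell me about a time you handled a conflict with a direct report.",
    "Describe a situation where you missed a deadline. What did you do?",
]

# Layer 2: evaluation rubric, one description per answer tier.
RUBRIC = {
    "strong": "Clear STAR structure, specific example, measurable result.",
    "medium": "Relevant example but vague on actions or outcome.",
    "weak":   "Generic or hypothetical answer with no concrete situation.",
}

# Layer 3: combine one question with the rubric into a system prompt.
def build_system_prompt(question: str) -> str:
    levels = "\n".join(f"- {k}: {v}" for k, v in RUBRIC.items())
    return (
        f"Ask the candidate: {question}\n"
        f"Grade their answer against this rubric:\n{levels}\n"
        "Then give ONE specific improvement to try before the next attempt."
    )
```

Because the rubric lives in data rather than prose, the coach can revise evaluation criteria without touching the rest of the system.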

Start with a simple loop: AI asks → client responds → AI evaluates. Once the text version is working, add voice input (Whisper transcription) to reduce client friction.
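The text-only loop can be prototyped before any model is wired in. In this sketch, `evaluate` is a stub standing in for a real LLM call; the dict shape it returns and its keyword-based scoring are assumptions of the sketch, not a real evaluation method:

```python
def evaluate(question: str, answer: str) -> dict:
    """Placeholder: a real implementation would call an LLM here."""
    # Crude stand-in check: did the answer state a concrete result?
    has_result = "result" in answer.lower()
    return {
        "score": 4 if has_result else 2,
        "improvement": ("Good -- keep quantifying the outcome."
                        if has_result
                        else "End with the concrete result you achieved."),
    }

def practice_turn(question: str, answer: str) -> str:
    """One loop iteration: evaluate the answer, return client-facing feedback."""
    fb = evaluate(question, answer)
    return f"Score: {fb['score']}/5. Try this next: {fb['improvement']}"
```

Swapping the stub for a real model call changes one function; the loop around it stays the same, which is what makes the text-first build order practical.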

Coaching question: “Which of my clients is facing a high-stakes conversation in the next 30 days that they could rehearse with an AI practice partner right now?”

Evolution Across Sessions

This builds on the theme of AI as a practice environment rather than just an information tool. Where earlier sessions focused on AI for content production, this applies AI to the rehearsal and skill-building side of coaching delivery.

Next Actions

  • For coaches: Identify one high-stakes conversation type your clients face repeatedly and prototype a 5-question practice system using a basic system prompt.
  • Test the evaluation rubric approach: ask AI to grade a sample answer on a scale of 1-5 and provide one specific improvement suggestion.
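If the AI is instructed to reply in a fixed shape (e.g. "Score: 3/5" on one line, "Improvement: …" on the next), the grade can be pulled out programmatically. That reply shape is an assumption of this sketch; models do not always comply, so the parser falls back gracefully:

```python
import re

def parse_grade(reply: str):
    """Extract (score, improvement) from a reply in the assumed fixed shape.

    Returns None when the reply does not match, so the caller can show
    the raw text to the client instead of failing.
    """
    score_m = re.search(r"Score:\s*([1-5])\s*/\s*5", reply)
    tip_m = re.search(r"Improvement:\s*(.+)", reply)
    if not score_m or not tip_m:
        return None
    return int(score_m.group(1)), tip_m.group(1).strip()
```

Structured replies also make it easy to log scores across practice sessions, so a coach can see a client's trend line rather than a single attempt.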