Original Insight
“A successful implementation, it was like, you need to kind of combine top-down with bottom-up. Top-down, you get the enterprise-level opportunities and you can harmonize databases, et cetera. But then the bottom of it is that there comes the new ideas that life in the trenches shows… You need to let the workforce play around with the AI also, if you want them to adapt it. It’s not enough to say, this is the AI, and here are the prompts, and this is what you’re going to use.” — Kasimir Hedstrom
Expanded Synthesis
One of the sharpest observations in the December 5 session came from Kasimir, building on a discussion about why leaders are privately using AI while their organizations aren’t. The structural problem he identified — and the solution he pointed toward — has broad implications for anyone coaching leaders or running teams.
The leadership paradox. Bally Binning opened the thread by noting something counterintuitive: leaders are using AI individually and feeling confident about it, but that confidence isn’t translating into organizational adoption. The observation she found most surprising was that leaders who use tools personally don’t necessarily understand their own usage at a deeper level — they know the tool is useful, but they can’t teach it. This gap between personal competence and instructional competence is critical, and it’s one coaches can directly address.
The top-down failure mode. When AI is deployed purely top-down — “here is the approved tool, here are the approved prompts, this is how you will use it” — you get what Kasimir described: low engagement, reluctant compliance, and no ownership. People haven’t been allowed to discover the value for themselves. They’ve been handed a mandate, which tends to generate resistance proportional to how disruptive the change feels to their existing identity and workflow. Lou added an important structural layer: large organizations also face the data silo problem — different departments have different databases, formats, and systems. Top-down AI adoption without bottom-up buy-in means trying to integrate all of that data before anyone is even asking for it.
The bottom-up failure mode. Pure bottom-up adoption — everyone plays with whatever tools they find — generates the opposite problem: inconsistency, security risks, wildly uneven capability development, and no strategic alignment. Individual contributors may discover genuinely useful applications in their domain (“life in the trenches”), but without a top-down framework to capture and propagate those discoveries, the value stays siloed.
The combinatorial solution. The insight Kasimir articulated is that the sequencing and combination matter enormously. Top-down provides the vision, the infrastructure integration, the governance framework, and the strategic use cases. Bottom-up provides the real discovery of where AI actually creates value in daily work — the applications that enterprise architects would never dream up from a conference room. The successful organizations will be the ones that do both: give people permission and space to experiment, while also investing in the infrastructure that lets those experiments scale.
The coaching application. For leaders in PowerUp’s orbit — entrepreneurs, executives, senior professionals — this plays out at a different scale. A coach running a team of 2–5, or a consultant working inside a client organization, is essentially a mini-enterprise. The same principle applies: you can’t mandate AI adoption in your clients’ workflows by handing them a prompt library. You have to create conditions for genuine discovery, then provide the infrastructure (frameworks, templates, workflows) that allows what they discover to compound.
The fear architecture. Lou identified the deepest structural resistance: workers who might be excellent candidates for AI adoption are also the ones most afraid of being replaced by it. This creates a perverse dynamic — the better they are at their current job, the more they have to lose, and the more resistant they become to tools that might make their current skills obsolete. Bally’s observation about leaders using AI privately (but not broadcasting it) points to the same fear operating at the leadership level: “I don’t know if I want to AI myself out of a job.” Coaching leaders through this fear architecture — separating the efficiency story from the replacement story — is one of the highest-value interventions in this space.
Practical Application for PowerUp Clients
The Adoption Architecture Assessment (Framework)
For clients working inside organizations or running teams, use this diagnostic:
- Vision clarity (Top-down): Does leadership have a clear, compelling narrative for why AI matters to this organization’s future? Not “efficiency” generically — specific use cases that resonate with the team’s actual work.
- Permission structure (Bottom-up): Is there protected time and psychological safety for experimentation? Do people know it’s okay to try things, fail, and report back?
- Discovery capture (Integration): Is there a mechanism to identify when someone discovers a genuinely useful application and scale it beyond their individual workflow?
- Fear inventory (Cultural): What are people afraid of losing? Name it explicitly. Address it directly.
The “Play Day” model. Recommend clients create a structured quarterly “AI play day” — a few hours explicitly designated for team experimentation with AI tools, with no deliverable required other than a 5-minute share-back. This creates both permission and discovery conditions simultaneously.
Coaching questions:
- “How are you currently using AI? Would your team know about it? Why or why not?”
- “What’s the fear story your team has about AI? Have you named it out loud with them?”
- “If someone on your team found a brilliant AI application tomorrow, what would happen to that discovery?”
- “What’s one place in your team’s work where AI could make their jobs better rather than smaller? Have you told them that story?”
Additional Resources
- MIT Sloan Management Review: AI adoption in organizations research
- The Innovator’s Dilemma — Clayton Christensen (the organizational resistance pattern)
- Harvard Business Review: “Why Is AI Adoption So Hard?” (2024)
- Insight - Trust Before Automation in High-Value Relationships
Evolution Across Sessions
This organizational insight from December 5 builds on the November 27 discussion about abstraction layers — which was primarily about individual AI adoption. The December session zoomed out to the organizational level, and the pattern is structurally identical: the key is matching people to the right layer and giving them genuine agency within it. The health AI case study (DORA/NHS) discussed later in the same session was a live example of this principle working at scale: older patients who knew they were talking to an AI agent had 92% satisfaction — because they understood the layer they were operating in and were given genuine agency within it.
Next Actions
- For me (Lou): Develop a “Leadership AI Adoption Framework” talk track for PowerUp clients who are also managing teams. Position it as the “how to bring your team with you” complement to personal AI fluency.
- For clients: Use the Adoption Architecture Assessment as a diagnostic in any leadership coaching engagement where AI is a current or emerging concern.