Original Insight

“I just ask: did you learn anything of high value, of high impact? And then it analyzed it. Yes, like, this is the kind of high impact things. And okay, store them there. And now I kind of automated that so that I don’t have to do that every time — and at least Claude’s answer to that was, ‘let’s see where we are in 6 months, I’m going to be so much smarter.’” — Kasimir Hedstrom

“It has like this voice authenticated — it checks that what comes out sounds like me. If it doesn’t, it flags it, and then it starts to tweak it so that it has, I think the threshold is 70% or 80%, it needs to sound like me.” — Kasimir Hedstrom

“If you put stories in your memory, then what you can do is say, look back at my stories and see if any one of them is relevant as a learning — when you tell people, oh, this happened to me, and therefore I learned that marketing hooks should be XYZ.” — Lou

Expanded Synthesis

Every AI conversation currently suffers from the same structural limitation: the moment you close the chat window, the relationship resets. The model has no memory of the 47 sessions you’ve had with it, the insights you’ve developed together, the writing style you’ve painstakingly taught it, the stories that are foundational to your voice. You start over. Every time.

Kasimir’s MCP memory experiment is significant not because it’s technically novel (memory systems have existed in various forms), but because it demonstrates a practical, low-overhead way to build something that behaves more like a genuine working partnership over time.

What Kasimir built:

A local MCP (Model Context Protocol) database that captures high-impact learnings from each Claude session automatically. The system:

  • Monitors conversation quality and captures insights after approximately 20 messages
  • Analyzes multi-session patterns every 3 sessions to identify what’s worth retaining
  • Stores writing style guidelines including a blacklist of AI-typical patterns (the “threes and sixes” problem — AI’s tendency to default to lists of 3 or 6 items)
  • Implements a voice authentication threshold: output must meet a 70-80% similarity score to Kasimir’s writing style before publication
  • Flags deviations automatically and triggers revision

Why the “voice gatekeeper” idea is particularly powerful:

One of the most persistent challenges in AI-assisted writing is avoiding the generic. AI knows what good writing looks like in aggregate — which is exactly the problem. “Good in aggregate” is not “distinctively yours.” Kasimir’s system addresses this by making the AI an active enforcer of his personal style rather than a passive generator of serviceable prose. The model learns his patterns not just to imitate them but to flag when it’s drifting toward its own defaults.

The 30% platform-specific flexibility allowance is also smart design: it recognizes that LinkedIn, YouTube scripts, and personal essays should have the same voice but not identical formatting and register. The gatekeeper is flexible enough to accommodate context without losing identity.

The cross-platform memory vision:

Don Back identified where this leads: build your personal knowledge and voice profile into an MCP-compatible database, and you can plug that memory into any AI that supports the protocol — Claude, ChatGPT (which announced MCP support in mid-2025), Gemini. You carry your expertise, your stories, your style preferences, your brand assets, and your institutional knowledge into every conversation on every platform. Write once, remember everywhere.

Lou extended this further: if you store your personal stories in the memory database, the AI can retrieve relevant stories when you’re writing content — “look back at my stories and find one that illustrates today’s lesson about marketing hooks.” This is the hook-story-offer structure automated and personalized. Instead of manually remembering which story fits, you ask the AI to find the best fit from your story archive.

The compounding nature of this investment:

This is not a setup-once tool. It is an investment that compounds. The more sessions you run, the more patterns the system learns. The more stories you store, the richer the retrieval. The more style deviations it catches, the more accurately it represents you. In six months, as Claude predicted to Kasimir, the system is substantially smarter about its owner. This is a different relationship with AI than most people currently have.

The Mem0 reference:

Lou pointed to Mem0 (mem0.ai) as an established open-source implementation of exactly this concept, with an MCP-compatible version available. Worth investigating as either a starting point or a comparison benchmark for anyone building a similar system.

Practical Application for PowerUp Clients

The Personal AI Memory Architecture:

Layer 1 — Story Library
Document your most instructive personal and professional stories: the defining moments, the lessons learned, the client transformations. 5-10 stories is a starting point. Store these in a retrievable format (structured text, tagged by topic).
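One possible shape for a single Layer 1 entry, assuming a simple tagged-record format; the field names and the sample story are purely illustrative:

```python
# An illustrative story record — adapt fields to your own library.
story = {
    "title": "The client who almost quit",
    "tags": ["resilience", "client-transformation", "hooks"],
    "lesson": "Momentum beats motivation: small weekly wins keep clients in.",
    "summary": (
        "Six weeks in, a client wanted to stop. We shrank the goal to "
        "one action per week; by week twelve the habit had stuck."
    ),
}
```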

Layer 2 — Voice Profile
Create a document that captures:

  • Your typical sentence length and rhythm
  • Phrases and expressions that are distinctively yours
  • Words or structures you consistently avoid
  • 3-5 examples of writing you consider closest to your ideal voice
  • Platform-specific adjustments (how your LinkedIn voice differs from your email voice)

Layer 3 — Institutional Knowledge
Your frameworks, methodologies, client outcome patterns, signature exercises. The things that make your coaching yours. These don’t need to be in final form — rough notes work if they’re specific.

Layer 4 — Learning Capture
After any significant AI session or project, prompt the AI: “What did you learn about me or my work in this session that would be valuable to remember?” Store the best outputs.

Getting started (without MCP setup): If you’re not ready to build a local MCP server, you can simulate this with a running Google Doc that you paste into each chat session. Not as elegant, but the same principle: carry your context forward.

The voice authentication DIY: After any piece of AI-generated content you intend to publish, ask: “Does this sound like me? Score it 1-10. What specifically deviates from my typical voice?” Build a list of those deviations over time — you are manually building your style gatekeeper.

Journal prompts:

  • What are the 5 stories that most define my coaching philosophy and would be most useful to have on demand?
  • What patterns do I notice in how AI writes versus how I write? What are my style’s distinctive signatures?
  • What would I want an AI working partner to remember about me after 100 sessions together?

Evolution Across Sessions

This insight is the most forward-looking of the July series. Kasimir’s experiment is in its early days — he has been running it for just a couple of days as of the July 24 session. The concept of a self-improving, voice-authenticated, cross-platform memory layer is genuinely novel in this group’s practice. It anticipates the direction AI tools are heading and positions early adopters for significant leverage.

Next Actions

  • For me (Lou): Explore Mem0 as a potential starting framework; document the “story library + voice profile + institutional knowledge” architecture as a coaching resource
  • For clients: Create your Layer 1 Story Library as the minimum viable personal memory investment — 5 stories, structured and tagged, stored somewhere you can paste into AI sessions