2025-09-25 AI Mastermind

Session Overview

Lou opened this session having just lost his aunt the previous day, and the session carried a reflective, generous quality as a result. He apologized for not having completed the planned Prompt2Agent research paper demo, and offered two improvised but high-quality contributions instead: a live 7-minute build of a prompt management library deployed publicly on Cloudflare Workers, and a detailed description of the multi-agent debate architecture he had been developing.

The Cloudflare Workers prompt library demo was genuinely remarkable — in under 7 minutes from a single paragraph prompt, Lou had a working web application with a database, CRUD UI, and the beginning of an MCP server backend, deployed publicly on Cloudflare’s edge network at zero hosting cost. The MCP connection to Claude Desktop had a URL configuration issue that wasn’t resolved in the session, but the core application worked. This established Cloudflare Workers as potentially the fastest path to a publicly deployed front-end application for non-traditional developers.
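The correct configuration was not resolved in the session, but a common pattern for pointing Claude Desktop at a remotely hosted MCP server (a hedged sketch — the server name and URL below are placeholders, not Lou's actual deployment) is to bridge it through the `mcp-remote` package in `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "prompt-library": {
      "command": "npx",
      "args": ["mcp-remote", "https://prompt-library.example.workers.dev/mcp"]
    }
  }
}
```

Claude Desktop launches MCP servers as local subprocesses, so a remote Workers endpoint typically needs this kind of stdio-to-HTTP bridge; the exact path suffix (`/mcp` vs. `/sse`) depends on which transport the Worker exposes.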

The multi-agent debate discussion grew organically from Dirk and Bally’s sharing about context engineering challenges. Lou described a manual facilitation approach (human-as-context-manager, routing the same context to Grok/Claude/GPT simultaneously, integrating responses, re-circulating) and then sketched the N8N automation path: parallel routing to model nodes, response integration, and a loop with an evaluator. Bally connected this to persona-based facilitation in organizational coaching, opening a rich discussion about how profile-based expert agents could make the debate framework directly applicable to coaching client work.
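The loop Lou described can be made concrete with a short Python sketch. Everything here is a stand-in — stub functions instead of real Grok/Claude/GPT API clients, string concatenation as the integrator, and a crude length threshold as the evaluator's rubric — meant only to illustrate the control flow, not the session's actual implementation:

```python
from typing import Callable

# Stub stand-ins for real model API clients (e.g. Grok, Claude, GPT).
def make_model(name: str) -> Callable[[str], str]:
    def respond(context: str) -> str:
        return f"[{name}] response to: {context[-60:]}"
    return respond

def integrate(responses: list[str]) -> str:
    """Merge the parallel responses into one integrated draft."""
    return "\n".join(responses)

def evaluate(draft: str, min_len: int = 200) -> bool:
    """Toy rubric: accept once the draft is substantial enough.
    A real evaluator would score the draft against an explicit rubric."""
    return len(draft) >= min_len

def debate(task: str, models: list[Callable[[str], str]], max_rounds: int = 5) -> str:
    context, draft = task, ""
    for _ in range(max_rounds):                   # hard cap: no infinite loops
        responses = [m(context) for m in models]  # same context to every model
        draft = integrate(responses)              # integration step
        if evaluate(draft):                       # evaluator stops by rubric
            break
        context = task + "\n--- previous round ---\n" + draft  # re-circulate
    return draft

result = debate("Design a prompt library schema.",
                [make_model(n) for n in ("grok", "claude", "gpt")])
```

The `max_rounds` cap is one answer to the open question of keeping the evaluator loop from running indefinitely: even a weak rubric then terminates.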

High-Signal Moments

  • Cloudflare Workers live build: one paragraph prompt → full working prompt library app deployed publicly in 7 minutes — “I didn’t have to set up a tech stack, it’s all somehow magically in the cloud on the Cloudflare infrastructure”
  • Cloudflare Workers edge hosting is currently free — “you don’t have to pay for hosting or worry about all that kind of stuff”
  • Prompt2Agent research paper previewed: reads a research paper and deploys agents to implement it — “I want to implement the paper’s recommendations, set me up” — flagged as highly interesting, demo deferred to next session
  • Multi-agent debate framework described: same context → 3 parallel LLMs → integrate responses → re-circulate → evaluator stops by rubric — “a multi-expert version of ReAct — reflect and act”
  • Context engineering principle reinforced by Dirk and Donald’s experiences: AI tools regularly give outdated API answers because they rely on pre-training; giving current documentation links as context dramatically improves accuracy
  • Donald’s story: ChatGPT gave wrong Google Apps Script code; Claude corrected it with “the code changed on this day by Google” and supporting proof — a concrete example of why documentation grounding matters
  • Lou: “before you answer me, check this in the most recent documentation, get me the most recent API specs” — the habit of providing documentation links as context before any integration question
  • Bally’s insight: the multi-agent debate framework maps to Talent Dynamics / organizational coaching profiles — “a Creator would bring spring energy and fresh ideas, then you pass to the next profile in the cycle”
  • Lou’s CBT coaching story: his coach said “we’re already not hiring as many” — direct real-world evidence of AI-driven job reduction in coaching-adjacent fields, offered as motivation for the group to build its AI capabilities now
  • Lou’s framing on compounding: “If you get in early, while it’s compounding, you’re gonna get the benefits of that compounding. If you get in after the compounding — too late.”
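The documentation-grounding habit in the bullets above reduces to a small prompt-assembly helper. This is an illustrative sketch — the template wording and the pasted-docs placeholder are assumptions, not a tool from the session (the Airtable docs URL is real, but shown only as an example):

```python
# Sketch of the habit: put current documentation into context
# *before* asking the integration question.
DOC_GROUNDING_TEMPLATE = """Before you answer, check this against the most recent documentation below.

=== Current documentation ({source_url}) ===
{doc_text}
=== End documentation ===

Question: {question}"""

def grounded_prompt(question: str, doc_text: str, source_url: str) -> str:
    """Prepend pasted or retrieved docs to the actual question."""
    return DOC_GROUNDING_TEMPLATE.format(
        source_url=source_url, doc_text=doc_text, question=question
    )

prompt = grounded_prompt(
    question="How do I list records with the Airtable API?",
    doc_text="(pasted contents of the current Airtable API reference page)",
    source_url="https://airtable.com/developers/web/api",
)
```

The point is the ordering: the model sees the current spec before the question, so its answer is grounded in today's API rather than its pre-training snapshot.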

Open Questions

  • The MCP URL configuration issue with the Cloudflare Workers prompt library — what is the correct format for Claude Desktop to recognize a remotely hosted MCP server?
  • Cloudflare Workers supports front-end and edge code — how does it handle backend persistence for larger applications with real database requirements? (Lou suggested Supabase or Qdrant as the persistent layer)
  • For the multi-agent debate N8N flow, what is the right rubric for the evaluator to determine “sufficient quality”? How do you avoid the loop running indefinitely?
  • How do you prevent context growth from becoming unmanageable over many debate cycles, given that each cycle adds to the integrated context?
  • What is the ideal number of agents in a debate loop — does 5+ improve quality or just slow down convergence?
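For the context-growth question above, one candidate answer is to carry forward only the original task plus a bounded summary of earlier rounds rather than the full transcript. This sketch uses naive tail-truncation as a stand-in for a real LLM-based summarizer — an assumption, not a technique endorsed in the session:

```python
# Bound per-round context by compacting everything except the task itself.
MAX_SUMMARY_CHARS = 1000

def summarize(text: str, limit: int = MAX_SUMMARY_CHARS) -> str:
    """Stub compaction: keep only the tail within a fixed character budget.
    A real version would call an LLM to summarize prior rounds."""
    return text if len(text) <= limit else "...[truncated]...\n" + text[-limit:]

def next_round_context(task: str, prior_summary: str, latest_draft: str) -> str:
    """Build the next round's context: task + compacted history."""
    combined = summarize(prior_summary + "\n" + latest_draft)
    return f"{task}\n--- summary of earlier rounds ---\n{combined}"

ctx = next_round_context("Design a prompt schema.", "round 1 notes", "x" * 5000)
```

With a fixed summary budget, per-round context stays roughly constant no matter how many debate cycles run, at the cost of whatever detail the summarizer drops.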

Suggested Follow-Through

  • Lou: Complete the Cloudflare Workers MCP URL debugging and confirm Claude Desktop connection works
  • Lou: Demo the Prompt2Agent paper next session as planned
  • Lou: Build the N8N multi-agent debate workflow to share with the group (promised for Oct 9)
  • Dirk: Continue with the Airtable + N8N CRM architecture — provide current documentation links for the Airtable API and the MCP server as context before every session
  • Bally: Prototype the profile-based expert agent configuration using Talent Dynamics archetypes — even a manual 3-agent test would be valuable
  • Members: Run the context engineering habit experiment — for your next technical AI task, find and provide the official documentation link before asking the question

Additional Resources

Books & Articles Mentioned

  • None

Ideas from Chat

  • Ri Ca proposed a “spy agent secretly working in the background” — the concept of an ambient AI that monitors and assists passively; Elizabeth described it as “like a consultant live”; this is essentially what Cluely implements
  • Donald mentioned Vocable.ai as a tool worth exploring (no URL shared; context unclear)
  • Donald Kihenja: “What if agents read the screen in real time as their input?” — the screen-reading agent concept, now commercially available via Cluely