2025-06-19 AI Mastermind

Session Overview

The June 19 session opened with some administrative housekeeping — a Telegram group migration had accidentally removed all members, creating a brief logistical scramble. Once the group reconvened, the session moved quickly into substantive member sharing.

Dirk returned with two significant updates. First, he shared a discovery that AI research prompts are most valuable not for producing final answers but for generating sharp questions — reframing the AI as a questioning machine rather than an answer machine. Second, he described his challenge of tagging and sorting 24,000 LinkedIn contacts from a CSV export, which led to a group troubleshooting session on context window limits, chunking strategies, and automation-based approaches for processing large datasets.

Kasimir gave a live screen-sharing demo of how he had adapted the “infinite prompting” framework in ChatGPT, generating content through four different lenses (emotional narrative, research-heavy, contrarian, archetypal/spiritual) and then using a self-generated scoring rubric to synthesize the best elements of each into a final “superscript.” Lou noted the elegance of the approach — using the scoring to let the AI select across its own versions.
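
The structure of Kasimir's approach can be sketched as a small orchestration loop. Everything here is a hedged reconstruction from the summary above: `generate` and `score` are hypothetical stand-ins for the actual ChatGPT calls and the self-generated rubric, and the prompts are placeholders, not his wording.

```python
# The four lenses described in the demo.
LENSES = ["emotional narrative", "research-heavy",
          "contrarian", "archetypal/spiritual"]

def multi_lens(topic, generate, score):
    """Draft the topic through each lens, rank the drafts with a scoring
    function (standing in for the self-generated rubric), then ask for a
    final synthesis pass — the 'superscript' — with the strongest draft
    listed first."""
    drafts = {lens: generate("Write about %s through a %s lens." % (topic, lens))
              for lens in LENSES}
    ranked = sorted(drafts.items(), key=lambda kv: score(kv[1]), reverse=True)
    merged_prompt = ("Synthesize the best elements of these drafts:\n"
                     + "\n---\n".join(text for _, text in ranked))
    return generate(merged_prompt)
```

The design choice Lou highlighted shows up in the last two lines: the model's own scoring decides which versions dominate the synthesis, rather than the author picking by hand.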

The session also surfaced a practical comparison of AI platforms for different use cases: ChatGPT for large CSV processing (context window advantage), Claude for writing quality and self-verification of citations, Gemini 2.5 Pro Flash for large-context tasks at low or no cost, and TypingMind as a cross-platform memory and organization layer.

High-Signal Moments

  • Dirk’s reframe: AI as questioning machine — using research prompts to generate the questions he should be asking his clients, not just analysis
  • Lou identifies the “stochastic average” problem: AI without specific instructions returns the most common/represented views in training data; instruction to find unique or missing perspectives pushes output toward the edges of the distribution
  • Kasimir demos a 4-lens content generation system (emotional/research/contrarian/archetypal) with self-generated scoring rubric and final synthesis — a practical implementation of multi-perspective prompting
  • Large CSV data challenge: 24,000 LinkedIn contacts exceed ChatGPT’s capacity for in-prompt processing; solution path = Make.com automation looping row-by-row with AI inference per row
  • Platform comparison emerges organically: Claude better at self-skepticism about citations; ChatGPT better at large context/CSV; Gemini 2.5 Pro Flash for ultra-long context at low cost
  • Kasimir mentions TypingMind for cross-platform memory and project organization — a tool others in the group weren’t fully aware of
  • Lou sketches the cost of processing all 24,000 contacts via Make — roughly a 29/month plan — concrete ROI framing for the automation investment
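
Lou's ROI framing amounts to back-of-envelope operations math. The helper below makes that arithmetic explicit; the operation counts and the monthly quota are illustrative assumptions for the sketch, not Make.com's actual plan limits or Lou's figures.

```python
def months_needed(contacts, ops_per_contact, ops_per_month):
    """How many plan-months a row-by-row automation run consumes.
    All parameter values passed in are illustrative, not real quotas."""
    total_ops = contacts * ops_per_contact
    return -(-total_ops // ops_per_month)  # ceiling division

# e.g. 24,000 contacts at 2 operations per row (read + AI inference),
# against a hypothetical 10,000-operation monthly quota:
# months_needed(24_000, 2, 10_000) -> 5
```

Even with invented numbers, the shape of the calculation is the useful part: a fixed monthly plan price divided into a known, finite row count turns "is automation worth it?" into a one-line estimate.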

Open Questions

  1. How does the “AI as questioning machine” use case change coaching intake and pre-session preparation practices?
  2. At what scale does processing structured data (CSVs, databases) require moving from AI tools to proper automation pipelines?
  3. Is there a model that does both quality writing and reliable citation verification, or is the two-model pipeline (ChatGPT research → Claude writing) the current best practice?
  4. How do you teach clients to recognize the “stochastic average” problem in their own AI use?
  5. What are the practical limits of the multi-lens content generation approach — when does it take more time than it saves?

Suggested Follow-Through

  1. For Lou: Design a “Pre-Session Question Generator” as a standalone process prompt; share with mastermind members
  2. For Dirk: Test the Make.com row-by-row automation approach for LinkedIn contact tagging; report results on cost and accuracy
  3. For Kasimir: Share the 4-lens content generation process prompt via Telegram so others can replicate
  4. For all members: Try the “stochastic average” reframe — after receiving an AI analysis, explicitly ask for “what is underrepresented or missing from this?” and compare the outputs
  5. For Lou: Explore Gemini 2.5 Pro Flash for large-context tasks and report findings to the group

Additional Resources

Derived Artifacts

  • meta-prompt (Meta-Prompt — Kasimir Hedstrom’s infinite prompt engine)