2025-11-13 AI Mastermind for Leaders
Session Overview
The November 13 session was a live hands-on workshop — the group building a Pinecone-to-ChatGPT integration together in real time, with Lou guiding and participants working alongside him on their own screens. The session had a rough start: the recording almost didn’t happen (Kasimir’s note-taker triggered the “recording” notification, which caused confusion about whether the main Zoom was capturing), and several participants encountered technical obstacles throughout.
The working session format was both the session’s strength and its challenge. Participants got to experience the actual friction of building AI integrations — incomplete schema outputs, API endpoint confusion, the moment of “which assistant name do I use?” — and Lou modeled the correct response to each obstacle: read the error, diagnose before retrying, ask the AI to show the full context rather than a partial fix. The live troubleshooting was more instructive than any clean demo would have been.
Lou’s narrative throughout connected the technical work to the strategic purpose: a Pinecone assistant loaded with your documents and connected to a custom GPT gives you a conversational interface to any body of knowledge — your past client work, your course materials, your research — that retrieves by semantic relevance rather than keyword. And the brief you give when configuring that system is what determines how good the retrieval is.
High-Signal Moments
- The schema generation workflow — give the AI the Pinecone API documentation and your endpoint details, ask it to generate a schema suitable for a ChatGPT custom GPT action — was demonstrated in multiple iterations, showing the debugging process as a normal part of the build
- Lou’s framing of “the description field as a directive” was a useful configuration insight: treating the GPT’s description as system instructions (not metadata) changes how the model interprets queries
- The distinction between Pinecone Index API and Pinecone Assistant API was flagged as a common confusion point — one is the database, the other is the conversational layer built on top of it
- The “proxy” framing for the custom GPT (its job is to relay queries, not infer) clarified the architecture: the intelligence and inference happen at the Pinecone assistant level, not the GPT level
- Lou explicitly named the “rich brief removes the need for human review” principle — and connected it to the October sessions’ content pipeline: the more thinking you invest upfront, the less supervision the automation needs downstream
- The Manus vs. Perplexity research comparison (introduced by Dirk) surfaced a useful evaluation heuristic: a depth-vs.-speed tradeoff — Manus runs longer and produces more comprehensive results at higher cost, while Perplexity is faster and cheaper but stops earlier
- Elizabeth’s question about too many GPT clarifying questions led to a practical tip: direct the AI to “structure a regular query” rather than offering structured options, which removes friction in the configuration phase
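The schema-generation workflow in the first bullet produces an OpenAPI schema that the custom GPT uses as its action definition. A minimal sketch of what such a schema might look like, built as a Python dict: note that the server URL, path, and assistant name here are illustrative placeholders, not the actual Pinecone Assistant API values, which you should take from Pinecone's own documentation as Lou demonstrated.

```python
import json

# ILLUSTRATIVE ONLY: the host URL and path below are placeholders, not
# real Pinecone endpoints. Verify the actual Assistant API endpoint in
# Pinecone's documentation before pasting this into a custom GPT action.
ASSISTANT_NAME = "my-assistant"  # hypothetical assistant name

schema = {
    "openapi": "3.1.0",
    "info": {"title": "Pinecone Assistant Proxy", "version": "1.0.0"},
    "servers": [{"url": "https://example-pinecone-host.io/assistant"}],
    "paths": {
        f"/chat/{ASSISTANT_NAME}": {
            "post": {
                # The GPT acts as a proxy: it relays the user's query as a
                # chat message and returns the assistant's reply verbatim.
                "operationId": "queryAssistant",
                "summary": "Relay the user's query to the Pinecone assistant",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    "messages": {
                                        "type": "array",
                                        "items": {
                                            "type": "object",
                                            "properties": {
                                                "role": {"type": "string"},
                                                "content": {"type": "string"},
                                            },
                                        },
                                    }
                                },
                                "required": ["messages"],
                            }
                        }
                    },
                },
                "responses": {"200": {"description": "Assistant reply"}},
            }
        }
    },
}

# Print the JSON to paste into the custom GPT action editor.
print(json.dumps(schema, indent=2))
```

This also makes the “proxy” framing concrete: the action schema describes only how to relay a message, and all retrieval intelligence lives on the Pinecone side.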
Open Questions
- What is the right level of GPT system prompt specificity for a Pinecone proxy — how directive should it be about retrieval behavior?
- When does the Pinecone Assistant approach become preferable to a local Qdrant instance — what are the tradeoff criteria?
- How do we build a Pinecone knowledge base that is genuinely more useful than a well-organized document library — what does the retrieval quality actually add?
- What is the right “brief depth” threshold that allows an automation to run without human in the loop — how do we know when we’ve said enough?
- How does the brief-first model apply to coaching program design — can it replace the traditional curriculum development process?
Suggested Follow-Through
- Complete the Pinecone-to-ChatGPT integration using the schema generated in this session — test it with 5–10 real queries and assess retrieval quality
- For participants who got stuck: revisit the specific error you hit, use Lou’s diagnostic approach (read the error, change one thing, test again), and report back in the next session
- Apply the “minimum viable brief” concept to one upcoming project this week: invest 15–20 minutes in a structured brainstorm before doing any creation or execution
- Compare Manus and Perplexity on one specific research task to calibrate your personal tradeoff between depth and cost
- Review the session recording specifically for the schema debugging sequence — this is the best model available for how to approach AI system errors in real time
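For the first follow-through item, a lightweight way to "assess retrieval quality" is to check each answer for terms that good retrieval should surface. A minimal sketch, with the actual assistant call injected as a function so the checking logic works regardless of how you reach the assistant; the `ask` wrapper, the query, and the expected terms below are hypothetical examples, not from the session:

```python
from typing import Callable

def assess_retrieval(queries: dict[str, list[str]],
                     ask: Callable[[str], str]) -> dict[str, bool]:
    """For each query, check whether the assistant's answer mentions the
    terms good retrieval should surface. `ask` wraps whatever call you
    use to reach the assistant (HTTP request, SDK, or manual testing)."""
    results = {}
    for query, expected_terms in queries.items():
        answer = ask(query).lower()
        results[query] = all(term.lower() in answer for term in expected_terms)
    return results

# Hypothetical usage: a stubbed `ask` standing in for the live assistant.
stub_answers = {
    "What did we deliver for the Q3 client project?":
        "The Q3 deliverables included a content audit and a launch plan.",
}
report = assess_retrieval(
    {"What did we deliver for the Q3 client project?": ["content audit"]},
    ask=lambda q: stub_answers.get(q, ""),
)
print(report)  # one pass/fail flag per query
```

Running 5–10 such queries and eyeballing which ones fail gives a quick, repeatable read on whether the knowledge base beats a plain document search.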
Additional Resources
Links & Tools Shared in Chat
- How to simply load files on Qdrant (YouTube) — shared by Elizabeth Stief; context: she watched this after the previous call, unsure of direct relevance to the Pinecone work but flagged for the group
- n8n Workflow Templates — shared by Elizabeth Stief; the n8n workflow library for discovering pre-built automation templates
- SSD Nodes — shared by Lou; context: VPS/server hosting provider, likely mentioned in connection with self-hosting options discussed in session
Books & Articles Mentioned
- None.
Ideas from Chat
- Pinecone PDF file size limit: Ri Ca encountered the Pinecone 10MB file size limit error (size: 10003866, limit: 10000000). Practical note for anyone building Pinecone-based knowledge bases: files must be under 10MB.
- Kasimir’s 50+ GPT problem: Kasimir noted he has “50+ GPTs” and has started to forget which GPT he built for which purpose — pointing to a real knowledge management challenge for power AI users. Worth noting as a GPT inventory/consolidation use case.
- Charging for services reflects hidden struggle: Donald Kihenja — “When you have to charge for your services, you should remember all these struggles.” An authenticity anchor for coaches pricing technical AI work.
- Error recovery as the core learning: Don Back — “I’m delighted that I caught the core learning of today’s session.” (After the session’s technical difficulties.) A useful reframe: when a live demo fails, the error-handling is the curriculum.