2025-09-04 AI Mastermind
Session Overview
The September 4th session was a hands-on technical demonstration focused on building a self-hosted AI infrastructure stack for knowledge entrepreneurs. Lou walked the group through the full architecture he had been setting up: a Hostinger VPS with Coolify as a deployment interface, Docker containers running Open Web UI and Qdrant as a vector database, all connected to frontier models via API rather than expensive GPU rental. The session included a live deployment of a new Open Web UI instance with domain routing through Cloudflare DNS.
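The container layer described above can be sketched as a minimal compose file. This is an illustration of the pattern, not Lou's actual Coolify configuration: the image names are the official ones, but the ports, volume names, and the Groq endpoint wiring are assumptions.

```yaml
# Illustrative docker-compose.yml for the Open Web UI + Qdrant stack.
# Ports, volume names, and the API base URL are assumptions, not Lou's config.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # Open Web UI serves on 8080 inside the container
    environment:
      # Point inference at an OpenAI-compatible API (e.g. Groq)
      # instead of a local GPU model.
      - OPENAI_API_BASE_URL=https://api.groq.com/openai/v1
    volumes:
      - open-webui-data:/app/backend/data
    depends_on:
      - qdrant
  qdrant:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant-data:/qdrant/storage

volumes:
  open-webui-data:
  qdrant-data:
```

In Coolify this would typically be deployed as a managed Docker Compose resource, with Cloudflare DNS pointing the chosen domain at the VPS.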
The session positioned itself as the “deploy it yourself” continuation of prior sessions on local LLM installation. The key philosophical shift introduced was moving from consumer of AI tools to owner of AI infrastructure — with all the cost, brand, and data sovereignty advantages that entails. Lou demonstrated that a fully functional, branded AI knowledge base could be deployed for approximately $7/month in hosting and near-zero in inference costs.
A secondary thread covered the research landscape for prompting techniques, with Lou sharing a summary of a Stanford/OpenAI/Anthropic/Google research paper on effective prompting. The finding that only 58 out of thousands of known prompting techniques are meaningfully effective — and that 4 of the top 6 were already in the group’s practice — validated the group’s existing approach while providing useful refinements.
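Of the techniques the group was already using, chain of thought is the simplest to illustrate: the prompt explicitly asks the model to reason step by step before committing to an answer. A minimal sketch (the helper name and exact wording are illustrative, not taken from the paper):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought instruction.

    The wording is illustrative; the technique is just asking the model
    to write out its reasoning before giving a final answer.
    """
    return (
        f"Question: {question}\n\n"
        "Think through this step by step, writing out each step of your "
        "reasoning. Then give your final answer on a line starting with "
        "'Answer:'."
    )

prompt = build_cot_prompt(
    "A VPS costs $7/month and inference runs about $2/week. "
    "What is the approximate monthly total?"
)
```

The same question sent with and without the reasoning instruction is an easy way to see the effect for yourself.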
High-Signal Moments
- Lou demonstrated a live deployment of Open Web UI on a Hostinger VPS using Coolify, completing it during the session, including troubleshooting a DNS resolution hiccup in real time
- Key stack recommendation: Hostinger (CPU VPS, roughly $10/month)
- Strong warning against GPU rental VPS: “$1,500/month for always-on inference” is not viable for small operators
- Recommendation to use the Groq API for open-source models: Lou reported spending $1.89 in a week of heavy use running OpenAI’s open-weight GPT-OSS model on Groq
- Cole Medin’s GitHub RAG templates were recommended as a starting architecture (github.com/coleam00)
- Prompting research finding: “the things that we’ve been focused on tend to be the ones that are still working the best — iteration, chain of thought, context engineering”
- Live demo of context engineering in action: Lou described filling a legal AI’s context with research-retrieved evidence, then prompting as “a judge who’s an expert in construction law” — producing dramatically superior output
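The pattern in that last demo, retrieved evidence loaded into the context window first, then a role instruction and task layered on top, can be sketched as a small prompt assembler. Function and field names here are illustrative, not from the session:

```python
def build_context_engineered_prompt(role: str, evidence: list[str], task: str) -> str:
    """Assemble a prompt in the order Lou described:
    retrieved evidence first, then the expert role, then the task."""
    evidence_block = "\n\n".join(
        f"[Evidence {i + 1}]\n{chunk}" for i, chunk in enumerate(evidence)
    )
    return (
        f"{evidence_block}\n\n"
        f"You are {role}.\n"
        f"Using only the evidence above, {task}"
    )

prompt = build_context_engineered_prompt(
    role="a judge who's an expert in construction law",
    evidence=[
        "Contract clause 4.2 requires written change orders.",
        "The 3 May email shows verbal approval only.",
    ],
    task="assess whether the change order was validly authorized.",
)
```

The ordering matters: the evidence fills the context before the role framing, so the “judge” persona reasons over the retrieved material rather than its general training.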
Open Questions
- How do non-technical members get from “I want this” to “it’s running” without getting stuck on the Docker/Linux learning curve?
- What is the right fair-use policy for a shared community AI instance Lou was planning to provide?
- How should the knowledge base be structured for retrieval — what chunking and tagging strategies work best for coaching content?
- When does it make sense to move from API-based inference to local/hosted models, and what’s the trigger threshold?
- What security practices are non-negotiable vs. optional for a small-operator VPS deployment?
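For the chunking question above, a common baseline to experiment from is fixed-size chunks with overlap, each tagged with source metadata before embedding into Qdrant. A hedged sketch, where the sizes and metadata fields are assumptions rather than a group decision:

```python
def chunk_transcript(text: str, source: str, size: int = 800, overlap: int = 150) -> list[dict]:
    """Split a transcript into overlapping character chunks with source metadata.

    800 characters with 150 overlap is a common starting point, not a tuned
    value; coaching content may do better with speaker-turn or topic splits.
    """
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append({"text": piece, "source": source, "offset": start})
        if start + size >= len(text):
            break
    return chunks

chunks = chunk_transcript("word " * 500, source="2025-09-04-session.txt")
```

Retrieval quality is easy to eyeball: query the vector store with a question you know the transcript answers and check whether the right chunk comes back first.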
Suggested Follow-Through
- Share the Coolify security setup video links in Telegram (from the Hostinger/Coolify-specific channel Lou referenced)
- Create a simplified “3-step setup guide” for members who want to deploy their own instance without full command-line experience
- Deploy the shared AIMM knowledge base at aim.coachlu.com with session transcripts uploaded as the first knowledge source
- Share the Gamma presentations on prompting techniques (both the research-faithful version and the knowledge-entrepreneur tutorial version)
- Each member: identify one knowledge asset (course content, frameworks, transcripts) that could form the seed of a self-hosted AI knowledge product
Additional Resources
Links & Tools Shared in Chat
- [Coolify — open-source self-hosted deployment UI (Docker/app management)] — https://coolify.io (shared by Elizabeth Stief)
- [YouTube — Coolify overview/tutorial] — https://youtu.be/ELjlhNT7-5g (shared by Lou)
- [YouTube — related infrastructure video] — https://youtu.be/K4YOTAI5IeI (shared by Lou)
Books & Articles Mentioned
- None
Ideas from Chat
- Donald Kihenja proposed the group build their own “MindChat” — a shared custom GPT as a class project, combining each member’s expertise into a single conversational knowledge base
- Donald noted his custom GPT had been shared to the group’s Telegram, functioning as a “compassionate coach” and engineered manager — validating the concept of a personal AI designed around your leadership style
- Don Back: “It is all an experiment. Ship and gather data. Analysis paralysis is just paralysis.” — a recurring group principle reinforced in discussion
Derived Artifacts
- brain-builder (Brain Builder — the ‘Brain’ business model)