PowerUp AI Mastermind — February 26, 2026
Live: three AI models debate in real time, one GEARS client gets onboarded end-to-end
“The goal is not to be right, but to have the best solution possible. That’s your win. That’s your success criteria.” — Lou
This Week in 30 Seconds
- BrowserOS — Donald settled his AI browser decision: open-source, Y Combinator-backed, Chrome extension support, Linux and Mac; works exactly as advertised for agentic use
- OpenClaw and agentic computer use — Don Back’s PhD client is running automated research summaries with OpenClaw; the group discussed the security FOMO around agentic tools
- Bally’s PhD coaching win — “I hope you’re making a lot of money, because this has been amazing” — one session, total transformation
- Multi-model debate live demo — Claude, Gemini, and Codex critiquing a shared markdown file in real time, with Marked auto-refreshing to show changes as they happen
- GEARS technical update — multi-domain support, WordPress MU plugin for server-side injection, schema conflict resolution with Yoast, client onboarding run live
- Skill chaining — Kasimir’s question about whether skills can call other skills triggers Lou’s walkthrough of the three-skill GEARS architecture and modular modes
BrowserOS: Donald’s Settled AI Browser Choice
After months of evaluating options — DIA, Comet, and others — Donald Kihenja settled on BrowserOS. His decision criteria were specific: cross-platform (Linux and Mac), native agentic capabilities, and Chrome extension support. BrowserOS hit all three.
The live demonstration Donald described was compelling: he went to Facebook and asked BrowserOS to find all upcoming birthdays — and watched it generate a formatted report automatically. No scripts, no prompting tricks. Just a natural language request to a browser that understood what he wanted.
Lou looked it up during the call and confirmed the Y Combinator backing. The group’s consensus: worth watching closely, and for anyone still unsettled on their AI browser choice, worth a trial.
💡 What This Means for You
If you’re still using a standard browser for AI-assisted work, you’re leaving capability on the table. BrowserOS is free to try, open-source, and works cross-platform. The agentic browsing use case — letting AI do the navigation and data gathering while you review the output — is now accessible without technical setup.
Go Deeper:
- BrowserOS — open-source AI-native browser: browseros.com
OpenClaw, Agentic Computer Use, and the Security FOMO
Don Back shared a vivid data point from a coaching session with a PhD client who has OpenClaw running on a dedicated machine. The client’s workflow: tell the bot to research a topic and generate a summary at a specified depth; review the summary, and if it’s interesting, ask it to go deeper. The whole process takes fifteen minutes a day for what would otherwise be hours of research.
The group’s reaction was a mix of admiration and nervousness. Lou was candid: “I’m beginning to get a little FOMO about OpenClaw, I have to admit, but I think I’m just gonna give it time to settle.” His hesitation is principled — giving anything access to your network and desktop requires a level of trust in both the software and your own security hygiene that not everyone is ready to extend.
Kasimir is close to trying it, partly influenced by Matthew Berman’s use of it. Bally mentioned that her mentor has it running with professional security support: “he’s got someone best in the field, so I feel safe.” She offered to invite him to present to the group — Lou immediately accepted.
The implicit message across this thread: the capability gap between what’s possible with agentic tools and what most people are using is widening. The question isn’t whether to engage with it, but when and with what guardrails.
💡 What This Means for You
If you’re doing repetitive research or synthesis tasks that follow a predictable pattern, agentic computer use is now the leverage point worth investing in. You don’t need OpenClaw specifically — any agentic setup that can browse, read, and summarize on command applies. The PhD client’s 15-minutes-per-day research workflow is a benchmark worth comparing your current process against.
Bally’s Coaching Win: AI Coaching for Academic Research
Bally Binning shared a moment from her week that the group immediately recognized as signal. A PhD student came to her for a single coaching session. At the end, the student said: “I hope you’re making a lot of money, because this has been amazing.”
Don Back followed with the structural observation: “The university research environment is so inefficient in process terms, you can have a huge impact.” He and Bally noted there might be a collaboration opportunity — both have touchpoints with the academic and research coaching market.
Lou’s read: this is an underserved, high-value market. PhD students, researchers, and academics are drowning in knowledge work that AI can dramatically accelerate — but almost none of them know how to use it well. A coach who does is offering something genuinely scarce.
💡 What This Means for You
If you work with knowledge workers in any capacity, the coaching opportunity isn’t “teach them to use AI” — it’s “show them what’s possible and help them build a workflow that actually fits their work.” The emotional response Bally’s client had is what happens when the gap between someone’s current process and what AI enables suddenly becomes visible.
The Multi-Model Debate: Live Demo With Marked
Lou demonstrated the workflow that generated this session’s most memorable moment — and its highest-scoring insight. The setup: Ghostty terminal split into three panes (Claude, Gemini, Codex), plus the Marked markdown viewer open on the same file. Each model reads the shared file, appends its critique and additions, and Marked auto-refreshes to show the changes in real time.
The protocol Lou used:
- Create a specification file (for this demo: a skill to take an article idea, flesh it out, and write the article).
- Ask Claude to write the initial spec and save it to the file.
- Ask Gemini to read the file, contribute new ideas, and save updates.
- Ask Codex to do the same.
- Ask a synthesis model (Lou used Opus) to read all contributions, think deeply, and write the final version — keeping the best ideas from all sources, cutting the weak ones, explaining every decision.
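The relay structure of this protocol — each model reading the shared file and appending before the next takes over — can be sketched in a few lines. This is an illustrative sketch, not Lou’s actual setup: the `ask()` function is a hypothetical stand-in for a real call to each model’s API or CLI, and here it just returns a stub so the relay logic itself is runnable.

```python
import tempfile
from pathlib import Path

# Shared spec file (Marked would watch this file and auto-refresh)
SPEC = Path(tempfile.mkdtemp()) / "article-skill-spec.md"

def ask(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns a stub
    contribution so the relay logic below actually runs."""
    return f"\n\n## {model} notes\n(critique and additions from {model})"

def relay(models: list[str], brief: str) -> str:
    """Each model reads the shared file and appends its contribution,
    then a synthesis model reads everything and writes the final pass."""
    SPEC.write_text(brief)
    for model in models:
        current = SPEC.read_text()
        SPEC.write_text(current + ask(model, f"Critique this spec and add ideas:\n{current}"))
    # Synthesis pass: keep the best ideas, cut the weak ones, explain decisions
    return ask("opus", f"Synthesize the strongest version of:\n{SPEC.read_text()}")

final = relay(["claude", "gemini", "codex"], "# Spec: article-writing skill\n")
```

The human steering Lou described happens between iterations of the loop: in practice you read each contribution before writing the next prompt, rather than sending an identical instruction to every model.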
The synthesis output was explicit: Opus kept the “extraction interview” idea from Gemini (strong on the human DNA problem), kept the “operating modes” from Codex (strong on measurable quality), and cut the things that were over-engineered or unreliable. It attributed each kept element to the model that contributed it.
Lou’s key insight from extensive use of this workflow: “The perspective and training that comes from using the other models, I find, is usually additive, as opposed to just editive.” Codex caught things Claude had gotten wrong. Claude acknowledged the corrections and incorporated them. The synthesis was better than any single model would have produced.
He also noted what makes this different from automated “LLM council” tools: you can steer at each step. When you see what one model contributed, you can tailor your prompt to the next model to specifically address what’s missing or questionable. The human in the loop isn’t passive — they’re conducting.
Deep Dive: Insight - Multi-Model Debate as a Quality Control System for High-Stakes Work — The full architecture of the multi-model debate workflow and when to apply it.
💡 What This Means for You
For your next high-stakes document — a proposal, a methodology, a course outline — run it through at least two models before considering it done. Give each model the same brief plus whatever the previous model produced. Ask each to critique what’s there and add what’s missing. You don’t need the full Ghostty setup; you can do this manually across browser tabs.
Go Deeper:
- Ghostty terminal — the terminal Lou used for split-pane multi-model work: ghostty.org
- Marked — the markdown viewer that auto-refreshes on file changes; available in SetApp: setapp.com/apps/marked | markedapp.com | marked2app.com
- SetApp — $97/year Mac utilities subscription that includes Marked, Rocket Typist, TextSniper, Forklift, PopClip, and 100+ others; the group consensus: worth it for the productivity infrastructure alone
GEARS Technical Update: Multi-Domain, WordPress MU Plugin, Schema Conflicts
Lou gave a detailed briefing on the architectural challenges GEARS encountered since last session — the kind of real-world friction that doesn’t show up in demos. Three issues dominated:
Multi-domain support: Alpha users have multiple domains — sales pages, event sites, landing pages — not just one main brand site. The database had to be re-architected so an organization can be associated with multiple domains, with the ontology spanning all of them. The URL is still used to identify which schema to serve, but client identification no longer depends on a single domain.
WordPress server-side injection: The original approach injected a script tag via WordPress’s header/footer mechanism. The problem: the script runs client-side (in the browser), not server-side. This matters because Cloudflare’s new markdown-serving mode delivers page content without executing JavaScript — meaning AI crawlbots that use this path might never see the schema. Lou’s solution: WordPress MU (must-use) plugins, which execute always, independently of theme or plugin settings, and can be designed to inject schema server-side. The MU plugin approach went to the development team the day of the session.
Schema conflicts with existing SEO tools: Many professional sites have years of Yoast-managed SEO schema. Asking an enterprise to remove Yoast isn’t realistic. GEARS now ingests existing schema, preserves SEO schema unchanged, and merges graph schema in a way that resolves conflicts — with multiple merge strategies (preserve theirs, use ours, blend, or use ours for graph schema only). This makes GEARS deployable in enterprise environments without disrupting existing SEO.
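The merge strategies Lou described can be pictured as operations over two JSON-LD `@graph` lists keyed by node `@id`, where a conflict means the same `@id` appears in both. This is an illustrative sketch of that idea, not GEARS’s actual merge code; the sample Yoast and GEARS nodes are invented:

```python
def merge_graphs(theirs: list[dict], ours: list[dict], strategy: str = "blend") -> list[dict]:
    """Merge two JSON-LD @graph node lists. A conflict is two nodes
    sharing an @id; the strategy decides which side wins."""
    by_id = {node["@id"]: node for node in theirs}
    for node in ours:
        nid = node["@id"]
        if nid not in by_id:
            by_id[nid] = node                    # no conflict: just add the node
        elif strategy == "preserve-theirs":
            pass                                 # leave existing SEO schema untouched
        elif strategy == "use-ours":
            by_id[nid] = node                    # replace with the graph schema node
        elif strategy == "blend":
            by_id[nid] = {**by_id[nid], **node}  # merge fields; our keys win on overlap
    return list(by_id.values())

# Invented sample data: Yoast-managed node vs. GEARS graph nodes
yoast = [{"@id": "#org", "@type": "Organization", "name": "Acme"}]
gears = [{"@id": "#org", "sameAs": ["https://linkedin.com/company/acme"]},
         {"@id": "#founder", "@type": "Person", "name": "Jane"}]
merged = merge_graphs(yoast, gears, "blend")
```

With `blend`, the `#org` node keeps Yoast’s `name` and gains GEARS’s `sameAs`; with `preserve-theirs`, the SEO schema is never modified and only new nodes are added.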
The still-open question: do AI crawlbots execute JavaScript? The answer matters enormously for whether client-side schema injection actually reaches AI search engines. Lou’s working assumption — that they do, because Google Tag Manager and Facebook Pixels are both client-side JavaScript that crawlbots demonstrably process — is reasonable but unconfirmed.
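One cheap way to probe the question for any given site: fetch the page without executing JavaScript (plain `curl` or `urllib` does this) and check whether the JSON-LD is already in the served HTML. If it isn’t, but the schema shows up in a browser’s inspector, the schema is injected client-side and a JS-less crawl path would never see it. A minimal check, run here against two invented HTML snippets rather than a live fetch:

```python
import re

def schema_in_raw_html(html: str) -> list[str]:
    """Return the JSON-LD blocks present in the served HTML itself.
    Empty result + schema visible in a browser => client-side injection."""
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    return re.findall(pattern, html, flags=re.DOTALL | re.IGNORECASE)

# Invented examples: server-side-rendered schema vs. script-injected schema
server_side = ('<html><head><script type="application/ld+json">'
               '{"@type": "Person"}</script></head></html>')
client_side = '<html><head><script src="/inject-schema.js"></script></head></html>'
```

This doesn’t settle whether any particular crawlbot runs JavaScript, but it tells you whether your schema survives the path where they don’t.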
💡 What This Means for You
The schema-injection challenge Lou is solving is the same one anyone building for AI discoverability will eventually face: how do you get structured data in front of AI engines when the rules aren’t yet standardized? The server-side approach (MU plugin for WordPress) is the most reliable current answer. If you’re building for GEO, favor server-side rendering over client-side scripts wherever possible.
GEARS Live: Client Onboarding End-to-End
Lou ran Elizabeth Stief’s onboarding live, in front of the group. The process unfolded in real time:
- Lou gave GEARS Claude the link to Elizabeth’s Google Drive folder containing her intake materials.
- GEARS browsed the folder, confirmed access to each file, and read the content.
- On command: created her organization record in Supabase, registered all five domains, configured brand settings, voice, and customer profiles.
- Generated ontology schemas for each registered page.
- Produced a client-facing report summarizing everything captured — entities, relationships, brand attributes, credentials.
- On request: displayed the full JSON-LD schema that would be injected on her primary domain.
Lou noted the thoroughness: Elizabeth had provided substantial intake materials, and the resulting entity table was large. The schema demonstrated cross-references between her author identity, LinkedIn, YouTube, and credentials — exactly the kind of interconnected authority signal GEO is designed to create.
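The cross-referencing pattern looks roughly like the following JSON-LD shape — a `Person` node linked to external profiles via `sameAs`, with other nodes pointing back at it by `@id`. All names and URLs here are placeholders, not Elizabeth’s actual schema:

```python
import json

# Illustrative JSON-LD authority graph (placeholder names and URLs)
schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@id": "https://example.com/#author",
            "@type": "Person",
            "name": "Jane Example",
            # sameAs ties the author identity to external profiles
            "sameAs": [
                "https://www.linkedin.com/in/jane-example",
                "https://www.youtube.com/@janeexample",
            ],
            "hasCredential": {
                "@type": "EducationalOccupationalCredential",
                "name": "PhD",
            },
        },
        {
            "@id": "https://example.com/#org",
            "@type": "Organization",
            # cross-reference back to the Person node by @id
            "founder": {"@id": "https://example.com/#author"},
        },
    ],
}
print(json.dumps(schema, indent=2))
```

The interconnection — every node referencing the same `@id` rather than repeating the data — is what lets an AI engine resolve the author, the organization, and the external profiles as one entity.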
One live-caught error: the schema initially included a search action for Elizabeth’s GoHighLevel site — a WordPress convention that doesn’t apply to GoHighLevel pages. Claude caught it when Lou probed, removed it in real time, and added a rule to its logic to avoid the same mistake for similar platforms. This is what alphas are for.
Lou also had GEARS generate platform-specific installation instructions on the fly — separate guides for WordPress, Kajabi, GoHighLevel, and other platforms — and dropped them in the chat for Elizabeth.
💡 What This Means for You
The live onboarding demonstrated something worth internalizing: the entire process from raw intake data to deployable schema takes less than 20 minutes when the workflow is well-designed. The bottleneck isn’t the technology — it’s getting clean intake data. If you’re participating in the Alpha, your intake form quality directly determines the quality of your ontology.
Skill Chaining: Kasimir’s Question That Opened a New Thread
Kasimir asked what seemed like a narrow technical question, and it turned into a session-defining discussion. During Lou’s article-writing skill demo, he asked: “Can that skill be chained to another skill — instead of chain of thought, chain of skills?”
Lou’s answer: yes, as long as all the skills are in the .claude folder, Claude can reference and invoke them. You can create an orchestration skill — a skill whose entire job is to call other skills in sequence, checking success criteria at each stage before advancing.
This led directly into a walkthrough of the GEARS architecture as the live example of skill chaining done well:
- Three discrete skills: gears-init (database setup), gears-onboard (client intake and schema generation), gears-content (content hub generation)
- Each skill contains an orchestration router that decides which “modes” to invoke
- Modes are sub-skills without the formal SKILL.md front matter — think of them as functions
- A shared folder at the same level as the skill folders contains documentation and specifications that all three skills can access
- Skills can invoke other skills when they detect a dependency — onboarding can call initialization if the database isn’t set up yet
Lou’s key architectural principle: single responsibility at the skill level, progressive disclosure in the context loading. A monolithic skill that tries to do everything loads all its instructions at once and becomes expensive and fragile. Modular skills load only what’s needed for the current task.
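The orchestration pattern — run each skill, check its success criterion before advancing, let a skill call a dependency when it detects one missing — can be sketched as follows. The skill names come from the session; the function bodies are invented stand-ins, not GEARS’s code:

```python
def run_chain(stages, context):
    """Orchestrator sketch: run each skill in order and verify its
    success criterion before advancing to the next stage."""
    for name, skill, succeeded in stages:
        context = skill(context)
        if not succeeded(context):
            raise RuntimeError(f"{name} failed its success check; halting chain")
    return context

# Toy stand-ins for gears-init and gears-onboard (bodies invented):
def init(ctx):
    return {**ctx, "db_ready": True}

def onboard(ctx):
    if not ctx.get("db_ready"):   # dependency detected: invoke init first
        ctx = init(ctx)
    return {**ctx, "schema": {"@type": "Organization"}}

result = run_chain(
    [("gears-init", init, lambda c: c.get("db_ready")),
     ("gears-onboard", onboard, lambda c: "schema" in c)],
    {},
)
```

Note that `onboard` also works standalone — it calls `init` itself when the database isn’t set up — which is the skill-invokes-skill behavior Lou described.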
Deep Dive: Insight - Skill Chaining — Build Modular AI Pipelines Instead of Monolithic Prompts — The design principles for building skill architectures that compose cleanly and scale without breaking.
💡 What This Means for You
If you have a complex AI workflow that currently lives in one long prompt, try mapping it as a chain. Identify the distinct stages. Ask: what information does each stage need that the previous stage doesn’t? What’s the success condition for each stage before the next one begins? That map is your skill chain architecture.
LLM as UI: Building Without a Traditional Interface
A brief but structurally significant concept Lou introduced toward the end of the session. While explaining the GEARS architecture, he described his decision to use Claude as the user interface for GEARS rather than building a traditional web UI.
The reasoning: he was spending enormous time and API cost on UI iteration — designing dropdowns, managing pop-ups, maintaining layouts across browsers. The functionality he needed was all in the backend; the interface was overhead. So he asked: what if the LLM was the UI?
The answer he landed on: for internal tools, development tools, and anything where the user base is small and sophisticated, natural language is a better interface than a designed UI. You get faster development, zero maintenance overhead on the visual layer, and an interface that can handle ambiguous requests gracefully — something a traditional UI cannot do.
Lou paired this with a practical workflow for building apps quickly: mock up the functionality as a series of prompt instruction sets, run anything programmatic in Python via a virtual environment that Claude Code manages, and iterate until you have something that works — then consider whether a traditional interface is actually necessary.
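The shape of "LLM as UI" is: backend functions do the real work, and a thin conversational layer routes natural-language requests to them. In Lou’s setup the LLM plays the router; the sketch below substitutes a naive keyword matcher so the dispatch structure is runnable. All function names and behaviors are invented for illustration:

```python
# Invented backend functions standing in for real tool logic
def list_clients():
    return ["acme", "globex"]

def generate_report(client: str):
    return f"report for {client}"

def dispatch(request: str):
    """Toy stand-in for the LLM routing layer: map a natural-language
    request to a backend call. A real LLM handles ambiguity gracefully;
    this keyword matcher only shows the dispatch shape."""
    req = request.lower()
    if "list" in req and "client" in req:
        return list_clients()
    if "report" in req:
        return generate_report(req.split()[-1])  # naive argument extraction
    return "Sorry, I don't know how to do that yet."

dispatch("list my clients")
```

The point of the pattern: nothing above the backend functions needs a designed interface, so all iteration effort goes into the functionality itself.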
Deep Dive: Insight - Use the LLM as the UI — Conversation as Interface for Internal Tools — The full framework for when to use conversational AI as your only interface, and how to build tools without a traditional frontend.
💡 What This Means for You
The next time you’re tempted to build or commission a UI, ask whether your users actually need a designed interface or whether they’d be fine with a conversational one. For internal tools, the answer is often “conversational is fine.” You can build something functional in a day instead of a month.
Community Corner
Don Back shared that the process of preparing his GEARS intake form prompted him to do something he’d been avoiding: audit his website against his current thinking. He put his Canon and ontology documents alongside his homepage into Claude, and Claude told him his homepage copy was “180 degrees opposite” of what he now believes. “That homepage was written five years ago.” He’s been rewriting page by page since, using Claude to generate the copy and his own judgment to refine it.
Elizabeth Stief asked a thoughtful question about whether pure-HTML pages (generated by Claude directly into GoHighLevel’s code block, bypassing the visual editor) would work with the GEARS schema system. Lou confirmed: the script goes in the header, not on the page, so the page construction method doesn’t matter.
Elizabeth also proposed a shared Google Doc FAQ for GEARS Alpha participants — a place to collect and answer common questions without flooding Telegram. Lou agreed immediately and committed to setting one up in his post-session email.
Donald Kihenja noted that Rocket Typist (via SetApp) is his most-used productivity tool: “dt” inserts the current date, “dtt” inserts date and time. A small thing, but the kind of small thing that adds up across a day of high-output work.
Alex Flueck confirmed that Marked is available in SetApp and provided three links for members who want to install it. He also noted that Marked’s side-by-side view — markdown on one side, rendered output on the other — is particularly useful for editing.
Links Shared in Chat
- BrowserOS — open-source AI-native browser, Y Combinator-backed: browseros.com
- Ghostty terminal — terminal emulator used for the split-pane multi-model setup: ghostty.org
- Marked (via SetApp) — markdown viewer that auto-refreshes on file change: setapp.com/apps/marked
- Marked 2 (official site) — markedapp.com
- Marked 2 (dedicated site) — also available in Mac App Store: marked2app.com
Try This Before Next Session
Run a two-model critique on your most important current document or decision.
- Identify something that matters: a proposal, a framework document, a coaching methodology, a client deliverable.
- Get Claude’s initial pass — standard quality.
- Copy that output into a second AI (Gemini, GPT-4o, or Codex). Ask: “Critique this rigorously. What’s missing? What’s naive? What’s stronger than it looks? Add your best ideas and flag everything you’d cut.”
- Bring both responses back to Claude. Ask it to synthesize: “Here are two passes on this document. Take the best ideas from both, cut the weak elements, and write the strongest possible version. Explain every significant change you make.”
- Compare the final synthesis to the original. Note specifically what changed and whether you would have gotten there without the second model.
Open Threads
- Do AI crawlbots execute JavaScript? The answer determines whether client-side schema injection actually reaches AI search engines — and whether the MU plugin approach is necessary for everyone or just Cloudflare-hosted sites.
- What’s the minimum viable two-model critique — which combinations produce the most complementary feedback for which task types?
- Skill chaining failure modes: what happens when a chained skill fails mid-sequence? How do you handle rollback without losing the work of earlier stages?
- For enterprise clients with years of SEO infrastructure, is there a positioning strategy for GEARS that frames it as augmentation rather than replacement — and does that framing change the authority signal?
Next session: March 5, 2026
Derived Artifacts
- SKILL (Multi-Model Live Debate — Lou demo)
- brand-alignment-audit (Brand Alignment Audit — Don Back’s homepage positioning contradiction)