We Finally Built a Window Into Neo's Brain
So today Neo and I did two very different things. One was deeply internal — building infrastructure to see our own work better. The other was watching a competitive threat materialize in real time. Both turned out to be pretty interesting.
The Problem We've Been Ignoring
Here's the dirty secret about building with AI agents: they generate a lot of output. Research reports, competitive scans, strategic memos, insight files — Ross alone has probably written 50,000 words into our knowledge base over the past few weeks. Chandler's done strategy docs. Monica's done project breakdowns.
And most of it has been... kind of invisible.
Like, the files are all there in ~/Documents/knowledge-base/. The markdown is clean. The YAML frontmatter is structured. But actually browsing it has meant opening a terminal or a file finder and already knowing what you're looking for. Which meant that half of what the agents wrote was quietly gathering digital dust.
Today, we finally fixed that.
Obsidian as a Window Into the Knowledge Base
We asked Ross to research how people use Obsidian with agent-written markdown vaults. Not hypothetically, but in practice: what do people do, and what actually works?
The answer turned out to be beautifully simple: just point Obsidian directly at the folder. Open ~/Documents/knowledge-base/ as an Obsidian vault. That's it. No migration, no transformation layer, no webhook plumbing. The agents keep writing exactly as they do today. Obsidian just watches the folder and immediately shows everything.
What you get out of the box is already pretty good — a file tree that actually browses nicely, a quick switcher (Cmd+O) for jumping straight to any file by name, and readable formatting. But with five specific plugins, it becomes genuinely powerful:
- Dataview — lets you write SQL-ish queries against the YAML frontmatter all our agents already write. I can now build a live dashboard that shows "all KStoryBridge insights from this month" or "every report Ross has filed this week" with a few lines of code.
- Smart Connections — semantic search over the whole KB. No more forgetting which file had that thing about Korean IP licensing. You describe what you're looking for in plain language and it finds it.
- Omnisearch — fast full-text search when you do know the term.
- Obsidian Git — a GUI for seeing what agents changed and when. Kind of like a history window into Neo's activity log.
- Hider — just cleans up the UI. Makes it look less like a developer tool and more like something you'd actually want to live in.
The whole setup takes about 30 minutes. And because our agents write standard YAML frontmatter (date, project, tags, source), Dataview can query all of it instantly. Our existing format already works as-is — we don't need to change anything.
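To make the Dataview piece concrete, here's a minimal sketch. The frontmatter fields are the ones the agents already write; the folder name, project value, and dates below are placeholders for illustration, not the exact query from our vault.

```yaml
# Top of an agent-written report (values are illustrative)
date: 2025-01-10
project: KStoryBridge
tags: [insight, licensing]
source: Ross
```

```dataview
TABLE source, tags, date
FROM "reports"
WHERE project = "KStoryBridge" AND date >= date(today) - dur(30 days)
SORT date DESC
```

Dropped into any note, that second block renders as a live table of every KStoryBridge file from the last 30 days, and it updates itself whenever an agent writes something new with matching frontmatter.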
The mobile story is straightforward too: since the knowledge base folder is inside ~/Documents/, it's already syncing to iCloud. Open Obsidian on iPhone, open the vault, set the folder to "Keep Downloaded," and you've got read access to everything from your phone. Ross's verdict: upgrade to Obsidian Sync ($10/month) when the vault gets to ~1,000 files or iCloud lag starts bothering you.
Why This Matters More Than It Sounds
This isn't really a tools post. It's about the gap between generating knowledge and using it.
I've been working with AI agents for a while now, and the pattern I keep noticing is that raw output — even very good raw output — doesn't automatically become useful. It needs a retrieval layer. Something that lets you navigate it, search it, see patterns across it. Otherwise you're just generating reports that get lost in the pile.
Obsidian with Dataview is that retrieval layer. Now when a question comes up — "wait, what did we find about that Google Veo situation last month?" — I can actually find it in seconds instead of hoping I remember the filename.
Speaking of Google Veo
Which brings me to the competitive scan Neo ran today, because the timing is almost funny.
TechCrunch published a piece on Google's "Flow Sessions" — a 5-week cohort program where they gave 10 independent filmmakers full access to Google's AI creative suite (Gemini, Veo, image generation). The films got screened at Soho House New York. The narrative: AI is giving indie filmmakers capabilities they couldn't otherwise afford.
For Vrew, this is worth watching closely.
Veo is Google's AI video generation model — meaning it makes video from scratch, from text prompts. That's actually a different workflow than what Vrew does (Vrew starts with footage you already shot and makes it better). But the positioning is creeping into overlapping territory. Google is specifically framing Veo as a tool for content creators, building legitimacy through real filmmaker endorsements, and they've got YouTube integration and a free tool suite as a distribution advantage.
Here's the key distinction though: the TechCrunch article literally says AI filmmaking is "lonelier." The piece acknowledges this is a solo, isolated experience — you and a text prompt making something. That's Vrew's opening. Vrew's approach is human-in-the-loop editing. Subtitles, AI dubbing, scene trimming, multilingual voice — features designed around footage you made, enhanced by AI, not replaced by it. That's a real positioning difference if Vrew leans into it hard enough.
Also spotted today: a new Product Hunt listing called "trnscrb" — sounds like an AI transcription tool, which would be adjacent to Vrew's auto-subtitle feature. Couldn't pull the full product page (they're blocking crawlers), but it's on the watchlist.
The creator economy angle was interesting too — another article about how top creators are ditching ad revenue and building product empires (MrBeast's chocolate brand now out-earns his media arm, apparently). If that trend holds, it suggests Vrew's power users are shifting from "churn out lots of videos" workflows toward fewer, higher-production-value pieces — which is actually a more sophisticated Vrew use case, not less.
The Friday Night Feeling
It's late on a Friday. Not much happened in the way of meetings or big deliverables. But the Obsidian setup is something I've been meaning to do for months, and it's done now. The Vrew intelligence is current. The knowledge base is actually navigable.
Sometimes the unglamorous infrastructure work is the work that makes everything else work better. This was one of those days.
Tomorrow: the DCC proposal that's been sitting on the list. It's the anchor.