
You Don't Need SaaS. The $0.10 System That Replaced My AI Workflow (45 Min No-Code Build)

Your AI doesn't have a brain. By that I mean it lacks a system that lets it reliably read and think through the context you've developed over months or years, the context that lets it be proactive with you. Nate B Jones published a guide on second brain systems last month. It was hugely popular: a lot of people built it, and a lot of people improved on it. You can use Zapier, Notion, n8n, an MCP server, or Obsidian; he covers all those pieces. But what he doesn't cover is the agent piece, and that matters, because in the few weeks since, agents have become mainstream. Anthropic is working on one. OpenAI hired Peter Steinberger, the inventor of OpenClaw. OpenClaw itself passed 190,000 GitHub stars and spawned over one and a half million autonomous agents in just a couple of weeks.

We need a second brain system that is agent readable. What Jones lays out here is the architecture for what he calls an open brain: a database-backed, AI-accessible knowledge system that you own outright, with no SaaS middlemen that can break, reprice, or disappear. One brain that every AI you use (Claude, ChatGPT, Cursor, whatever ships next month) can plug into via MCP. You can type a thought in Slack, and five seconds later it's embedded, classified, and searchable by meaning from any AI tool you touch, or any AI agent that wants to touch it.

The total cost, benchmarked, is roughly 10 to 30 cents a month. He's publishing a companion guide on Substack for the step-by-step. This video is about why the architecture of an agent-readable system matters much more than the individual tools you choose, why the memory problem we're talking about here is secretly the bottleneck in everything you're doing with AI today, and why the people who solve it will have a compounding advantage that widens every single week.

The Memory Problem Hiding Inside Your Prompting

If you've followed Jones's videos for a while, you know he keeps coming back to one idea: the quality of AI output depends entirely on the quality of your ability to specify. That's not a nice-to-have principle anymore; it's the whole game. He laid out his full framework for prompting in 2026 in a video last week, from prompt craft through context engineering to intent engineering to specification engineering, and that hierarchy is real.

And the people who are ten times more effective than their peers have built context infrastructure that does the heavy lifting on all of those pieces, the context engineering and the specification engineering, before they have to type a single prompt. What Jones wants to talk about here is how that abstract skill set turns into a memory problem, and how solving it gives you a leg up on everybody else.

In other words, if you're going to do context engineering, if you're going to do specification engineering, seriously, you need to invest in a memory system that is yours, that is agent readable, that makes calling and retrieving that context easier. The best prompt in the world cannot compensate for an AI that does not know what you've been working on, what you've already tried, what your constraints are, who the key people in your life are, or what you decided last Tuesday.

And by the way, that constraint also applies when working with agents: they need that context too. Right now, that's exactly what most of us are struggling with in AI. Every time we open a new chat, we start from zero. Every time we switch from Claude to ChatGPT to Cursor, we lose things, which is why we gravitate toward one system over the others.

Think about how much of your prompting is just asking the AI to catch up on what you already know. You're burning your best thinking on context transfer instead of real work. A Harvard Business Review study found that digital workers toggle between applications nearly 1,200 times a day. Each switch seems small, but collectively the toggling is devastating to our attention.

Jones has watched this context-switching issue play out over and over, in his own life and in others'. What he keeps coming back to is this insight: our need to specify, to be clear with AI, is only growing, and it's demanding more of our memory systems. And our memory systems and structures are not keeping up.

Memory architecture determines agent capabilities much more than model selection does. That's widely misunderstood. And when you construct memory incorrectly, you're stuck re-explaining yourself forever or you're stuck in a world where you know how to access memory and the agent doesn't.

Critics might note that the built-in memory features in Claude, ChatGPT, Grok, and Google are getting better all the time, solving exactly this problem without requiring any technical setup. But think about what they give you and what they don't. Claude's memory doesn't know what you told ChatGPT. ChatGPT's memory doesn't follow you into Cursor. Your phone app doesn't share context with your coding agent. Every platform has built a walled garden of memory, and none of them talk to each other.

There's a whole new category of products emerging in early 2026 specifically because the platforms refuse to solve this, products like MemConnect and One Context. The problem is real enough to spawn an entire VC-backed industry. So what you've really got is a growing set of AI tools, each upgraded constantly and each worth experimenting with, sitting on top of a thin, siloed layer of context that only works inside each individual tool.

You know what? That's not really memory. That's five separate piles of sticky notes on five separate desks. Now let's add autonomous agents into the picture. The agent category has absolutely detonated in the last few weeks, but the use cases that shine, like the guy who got thousands of dollars off a car purchase, shine because the agent can securely and safely access relevant memories and relevant context from the user. Agents that have to guess or connect the dots themselves, because you can't give them secure access to your systems, are never going to be as useful to you.

And whether we're talking about agents or tools, the part that should bother you even more is that these corporate systems are all designed to create lock-in. Memory is supposed to be a lock-in on ChatGPT, and ditto on other systems. You've spent a long time building up history with one tool, and now if you want to try another model (say you're on ChatGPT and want to try Gemini or Claude), you lose all of that context. Not because the new model is worse, but because your context is trapped in the old one.

And by the way, none of that memory in those individual tools is agent readable. So as autonomous agents become more and more of a thing, the big corporations are betting that if they can trap you with memory, you will only use their agents, and they will get to keep you, your attention, and your dollars forever.

But your knowledge should not be a hostage to any single platform. And for most of us right now, frankly, it is. And that's shaping our entire AI future. We don't necessarily have a free choice between tools right now because the product strategy of these large businesses is to keep you engaged, to keep you entertained.

One of the reasons GPT-4o was so mourned and so grieved was that it was an engagement-optimized model, and people liked the engagement. It works. Ditto with memory. Memory is engaging. Feeling known is engaging. It works. It's smart product strategy. But you're smart too, and you don't have to go along with that product strategy.

And you might be thinking at this point: Jones made a video on second brains, I can just connect that to my OpenAI account and I'm fine. Absolutely, you can try that. But you're going to run into a structural mismatch most people haven't noticed, and it explains why the current generation of note-taking tools needs a different, more structural memory layer underneath.

The internet right now is forking. There's the human web with fonts, with layouts, with what you're reading. And there's the agent web that's emerging with APIs, with structured data that's built for machine-to-machine readability. That fork is happening to your memory architectures and your notes as well. Your Notion workspace, for example, is built for human eyes — it's built for pages, for databases, for views, for toggles, for cover images. It's beautiful for you. It's useless for an AI agent that needs to search by meaning, not by folder structure.

Your Apple Notes are locked into an ecosystem. Your Evernote has a decade of accumulated clutter with no semantic structure. Your bookmarks are a graveyard of things you've meant to read. These tools were built for the human web back in the 2010s — they were designed for you to browse, to organize, to read. They were never designed fundamentally with the expectation that AI agents would query them. That got bolted on later, much more recently.

And the apps adding AI features today are mostly doing it as bolt-ons, like chat with your notes. Great. You have one AI that can kind of search one app. What about the other five tools you use every week? We're still in a world of separate sticky notes on separate desks. You've traded one silo for another.

Every second brain app has been reaching for something that required a different layer entirely — infrastructure built for the agent web, not the human web. And that's what Jones wants to focus on here. Because if you can build infrastructure for the agent web, you are suddenly in a position to make a lot more human-friendly decisions with how you plug into that infrastructure.

The infrastructure is yours. Your agents can plug into it. Your chatbots can plug into it. But you control and manage it. That frees you from memory that lives only with one of these corporations and their cloud AI systems; you don't have to depend on ChatGPT memory anymore. It also frees you from depending on an individual SaaS company not changing a setting in order to keep your own second brain working.

And ultimately, as agents get better, it frees you from doing as much manual work to retrain a second brain. This is the sense in which agents are changing our perspective on memory, on prompting, and on what we need to be digital citizens. Just as we needed a personal computer to be digital citizens through the 1990s, 2000s, and 2010s, we need our own memory architectures to be responsible AI citizens now.

But we haven't really had a way to do that. And until very recently, until the last few weeks, we haven't had AI agents that would make that really practical. Now we do, and now the world has moved, and it's time to talk about it.

What An "Open Brain" Actually Looks Like

So let's get specific. What Jones is proposing here: instead of storing your thoughts in an app designed for humans, store them in infrastructure designed for anything. A real database, vector embeddings that capture meaning rather than just keywords, and a standard protocol that any AI can speak. He's calling it an open brain because the architecture is what matters, and you should not be forced to choose any given model.

This is all possible because of MCP, the protocol shift he touched on briefly above. It started as Anthropic's open-source experiment in November 2024, but it has since become something like the HTTP of the AI age: the USB-C of AI. One protocol, every AI. Your data stays in one place, and every tool that speaks MCP can read it.
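To make "one protocol, every AI" concrete, here is a minimal sketch of the kind of tool descriptor an MCP server advertises to its clients. The `search_brain` tool and its parameters are hypothetical, invented for illustration, but the overall shape (a name, a description, and a JSON Schema `inputSchema`) is what MCP's tool-listing response looks like:

```python
import json

# Hypothetical tool descriptor for a personal "open brain" server.
# The name and fields are illustrative; the name/description/inputSchema
# shape follows MCP's tools/list response format.
search_brain_tool = {
    "name": "search_brain",
    "description": "Semantic search over my personal knowledge base.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "What to find, by meaning"},
            "limit": {"type": "integer", "description": "Max results", "default": 5},
        },
        "required": ["query"],
    },
}

# Any MCP-speaking client reads this descriptor and knows how to call
# the tool, with no per-app glue code.
print(json.dumps(search_brain_tool, indent=2))
```

Because every client consumes the same descriptor, one brain can serve Claude, ChatGPT, and Cursor without being rebuilt for each of them.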

At a high level, Jones doesn't want to send you off clicking somewhere; let him show you what this actually looks like. Your thoughts live in a Postgres database you control, not in somebody else's proprietary format. This is the most boring, battle-tested technology you can imagine. Postgres is not exciting. It isn't deprecating features or chasing a growth metric. Postgres isn't VC-backed and doesn't need to hit a billion-dollar unicorn valuation. It's just a standard way of storing data. And you want that boringness, because everything else needs to plug into it.

The nice thing about the database is that if you construct it properly and vectorize it, every thought you capture gets converted into a vector embedding: a mathematical representation of its meaning that is natively AI readable. So when you ask "what was I thinking about career changes last month," it can find your note about considering a move into consulting or into product, even if you never used the word career in the original thought.
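Here's a toy sketch of how that search works, with hand-made four-dimensional vectors standing in for real embeddings (an embedding model would produce these, typically with a thousand or more dimensions). The note texts and numbers are invented for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means closer in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy vectors standing in for real embeddings. Dimensions here are
# hand-chosen for the example: [career, food, travel, code].
notes = {
    "Considering a move into consulting": [0.9, 0.0, 0.1, 0.1],
    "Great ramen place near the office":  [0.0, 0.9, 0.1, 0.0],
    "Refactor the ETL pipeline":          [0.1, 0.0, 0.0, 0.9],
}

# "What was I thinking about career changes?" as a toy query vector.
query = [0.8, 0.0, 0.0, 0.2]

# Rank notes by meaning, not keyword overlap: the word "career" never
# appears in the winning note's text, but the vectors are close.
best = max(notes, key=lambda text: cosine(query, notes[text]))
print(best)  # Considering a move into consulting
```

A keyword search for "career" would return nothing here; the vector comparison still surfaces the right note.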

That's called semantic search, and it's a whole different universe from keyword search. So what this looks like when you have Postgres hooked up with an MCP server is you can type into a Slack channel: "Hey, I was talking with Sarah. She mentioned she's thinking about leaving her job to start a consulting business. She's been really unhappy since the reorg." Five seconds later, the system has stored the raw text, generated a vector embedding of the meaning, extracted the metadata — the people, the topics, the type, the action items — and filed all of that in a real database.

Now any AI you're working with can see that. If you're in Claude working on a coaching framework: "Hey, search my brain for notes about people considering a career transition." Found it. If you're in ChatGPT drafting an email: same search, same result. If you're in Cursor building a tool and you need to remember a decision you made last week: hit the MCP server, and it's right there.

One brain, every AI, persistent memory that never starts from zero, even if you pick up a new tool tomorrow that you've never touched before. The system has two basic parts. Capture runs through any tool you have open: you type a thought, it hits a Supabase edge function that generates an embedding and extracts the metadata in parallel, stores both in a Postgres database with pgvector, and replies in thread with a confirmation showing what it captured.
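A rough sketch of that capture-and-store step. Jones's build uses Postgres with the pgvector extension behind a Supabase edge function; SQLite with a JSON-encoded embedding column is used here only so the example runs anywhere, and the column names are illustrative rather than the guide's actual schema:

```python
import json
import sqlite3

# Stand-in store: Postgres + pgvector in the real build, SQLite here.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        raw_text TEXT NOT NULL,
        embedding TEXT NOT NULL,   -- pgvector's vector type in Postgres
        metadata TEXT NOT NULL     -- jsonb in Postgres
    )
""")

def capture(text, embedding, metadata):
    """What the edge function does after embedding and extraction:
    file raw text, vector, and metadata together in one row."""
    db.execute(
        "INSERT INTO notes (raw_text, embedding, metadata) VALUES (?, ?, ?)",
        (text, json.dumps(embedding), json.dumps(metadata)),
    )
    db.commit()

capture(
    "Sarah is thinking about leaving her job to start a consulting business.",
    [0.9, 0.0, 0.1, 0.1],  # toy embedding; a real one has 1,000+ dims
    {"people": ["Sarah"], "topics": ["career"], "type": "conversation_note"},
)

row = db.execute("SELECT raw_text, metadata FROM notes").fetchone()
print(row[0])
```

Once a row like this exists, the MCP server's search tool only has to compare query embeddings against the stored ones; no tool-specific export or sync is involved.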

Bottom Line

Jones's core argument is compelling: the memory problem in AI tools is real, it's being deliberately exacerbated by corporations to create lock-in, and there's a practical low-cost solution that puts you back in control. His architecture — Postgres plus MCP — is technically sound and genuinely accessible. The vulnerability? Building this requires some technical setup, and for many users, the built-in memory features of modern AI tools may already solve their problems without any extra effort. The real question isn't whether an open brain works — it's whether you need one yet.
