The Trillion-Dollar Plumbing Problem Nobody Wants to Talk About
Nate B. Jones makes a provocative argument that cuts against the grain of the current AI hype cycle: while the tech world celebrates OpenClaw's 250,000 GitHub stars and demos of personal AI agents booking flights and managing calendars, almost nobody is discussing the enormous infrastructure overhaul required to make any of it actually work. The thesis is straightforward. Every agent-powered future that gets tweeted about depends on thousands of companies rebuilding their data stacks from the ground up, and most of them have not even started.
The fences that we spent 20 years building to keep bots out are now the things that are keeping our most valuable customers out.
That framing captures the central irony. An entire generation of software engineering was devoted to bot prevention: CAPTCHAs, gated APIs, JavaScript-heavy interfaces designed to thwart automated access. Jones argues that this architecture is now actively harmful, because the most valuable traffic a business will receive over the next three years will come from AI agents acting on behalf of human buyers.
The McKinsey Number and What It Actually Implies
The headline figure is striking: McKinsey projects that by 2030, the U.S. B2C retail market alone could see up to a trillion dollars in revenue orchestrated through AI agents. Jones suggests this might even be conservative. But the more interesting part of the argument is not the top-line projection. It is what has to happen underneath for that number to become real.
You cannot just do it and hope that it will work well. You cannot just do it and not change anything internally and just stick an API on.
Jones draws on his experience at Prime Video, where the team discovered that personalized customer experiences collapsed entirely when underlying data was not clean all the way down the stack. The same principle applies at vastly greater scale when agents, rather than humans, are the ones consuming that data. A human shopper will forgive a confusing product page. An agent will simply skip it and move on to a competitor whose data is legible.
Stripe and SAP: Two Ends of the Spectrum
The most useful section of the piece examines two concrete examples. Stripe, widely regarded as an early mover in the agent economy, shipped an MCP server that allows agents to look up customers, process refunds, and manage subscriptions. But Jones points out that wrapping Stripe's deeper analytics layer, Sigma, into an agent-readable format is a genuinely hard problem. Sigma queries can return massive CSVs that overwhelm an agent's context window, requiring an intermediary database layer with its own authentication and security considerations.
It is not as simple as just wrapping another API in an MCP. And I think it highlights the complexity even agent-leaning companies face when they think about how to make their data more agent readable.
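Jones does not spell out a fix, but the pattern he gestures at is sketchable: stage the bulky query result somewhere locally queryable, then expose it to the agent in context-sized pages. Below is a minimal TypeScript sketch of that staging layer, with the Sigma download stubbed out, since the real API call and its auth flow are exactly the hard part Jones flags. The table schema, page size, and CSV handling are illustrative assumptions, not Stripe's implementation.

```typescript
// A minimal sketch of the staging pattern described above, not Stripe's
// actual implementation. The Sigma download is stubbed; the CSV parsing
// assumes no quoted commas, which real result files would not guarantee.
import Database from "better-sqlite3";

const PAGE_SIZE = 50; // small enough to fit comfortably in an agent's context

// Stand-in for downloading a completed Sigma query run's CSV. A real
// integration would call Stripe's API here, with its own auth story.
async function fetchSigmaCsv(queryName: string): Promise<string> {
  return "customer_id,lifetime_value\ncus_001,1200\ncus_002,340";
}

// Stage the potentially massive CSV into a local table once...
async function stageSigmaResults(db: Database.Database, queryName: string) {
  const [header, ...rows] = (await fetchSigmaCsv(queryName)).trim().split("\n");
  const cols = header.split(",");
  db.exec(
    `CREATE TABLE IF NOT EXISTS results (${cols.map((c) => `"${c}" TEXT`).join(", ")})`
  );
  const insert = db.prepare(
    `INSERT INTO results VALUES (${cols.map(() => "?").join(", ")})`
  );
  for (const row of rows) insert.run(...row.split(","));
}

// ...then hand the agent one page at a time instead of the whole file.
function getPage(db: Database.Database, page: number) {
  return db
    .prepare("SELECT * FROM results LIMIT ? OFFSET ?")
    .all(PAGE_SIZE, page * PAGE_SIZE);
}

const db = new Database(":memory:");
stageSigmaResults(db, "customer_ltv").then(() => console.log(getPage(db, 0)));
```

Note that this staged table is itself a new surface: whatever tool exposes getPage to an agent inherits the authentication and security considerations Jones mentions.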
On the other end sits SAP, where the gap between announcing an MCP server for one narrow product and making the sprawling SAP ecosystem truly agent-readable is, as Jones puts it, the Grand Canyon. For companies running typical SAP installations, achieving genuine agent readability is a multi-quarter initiative at minimum.
Four Misconceptions Worth Challenging
Jones identifies four common executive misconceptions about the agent-readable future, and they deserve scrutiny.
The first is that agent discovery will work like search engine optimization. Jones argues this is wrong because agents do not browse ranked lists influenced by ad spend and brand positioning. They evaluate structured data against explicit constraints. There is no "above the fold" for an agent. This is a compelling point, though it may underestimate how quickly a new form of agent-optimized marketing will emerge. If agents evaluate structured schemas, companies will inevitably find ways to game those schemas, just as they gamed search rankings.
The second misconception is that structured schemas only work for simple products. Jones pushes back hard, arguing that complex products actually benefit more from agent readability because their complexity is precisely what prevents customers from optimizing purchases today. The coffee sourcing example is vivid: origin farm, roasting method, processing technique, and social impact are all representable as structured data, even if they currently live only in marketing copy. A sketch of what that representation might look like follows this list.
The third is that consumers will never trust agents to transact. Jones reframes trust as a spectrum rather than a binary, starting with research and comparison before gradually expanding to autonomous purchasing. This is probably the strongest counterargument in the piece. Agent commerce does not require a leap of faith; it requires a series of small, incremental trust-building interactions.
The fourth misconception, "we'll just wait and see," draws the sharpest language from Jones, who calls it signing a company's death warrant. The data cleanup work takes quarters, not weeks, and the competitive window is closing fast.
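To make the first two points concrete: the coffee attributes Jones lists map directly onto existing vocabulary like schema.org's Product and PropertyValue types, and the "explicit constraints" an agent evaluates reduce to filters over those fields. A sketch follows; the attribute names beyond the standard schema.org properties are illustrative assumptions, not an established convention.

```typescript
// The coffee example as agent-readable structured data, using schema.org's
// Product/PropertyValue vocabulary. Attribute names like "originFarm" are
// illustrative assumptions, not a published standard.
const coffee = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Yirgacheffe Single Origin",
  offers: { "@type": "Offer", price: "18.00", priceCurrency: "USD" },
  additionalProperty: [
    { "@type": "PropertyValue", name: "originFarm", value: "Smallholder co-op, Yirgacheffe, Ethiopia" },
    { "@type": "PropertyValue", name: "roast", value: "light" },
    { "@type": "PropertyValue", name: "processingMethod", value: "washed" },
    { "@type": "PropertyValue", name: "socialImpact", value: "Supports a local school in Ethiopia" },
  ],
};

// An agent does not browse a ranked page; it filters records like this
// against explicit constraints. There is no "above the fold" here, only
// fields that match or do not.
const attrs = new Map<string, string>(
  coffee.additionalProperty.map((p) => [p.name, p.value])
);
const matches =
  attrs.get("roast") === "light" && attrs.get("processingMethod") === "washed";
console.log(matches); // true
```

The flip side, per the first misconception, is that anything gameable about this is the field values themselves, which is why schema-stuffing seems a plausible successor to keyword-stuffing.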
The Counterpoint: Incumbents Have Fought This Before
There is a reasonable counterargument that Jones acknowledges but perhaps underweights. Large incumbents like Google, Apple, and Amazon have strong incentives to resist agent readability because it disintermediates their customer relationships. Jones draws a Napster analogy, arguing that the paradigm will survive even if individual implementations get shut down. But the Napster-to-iTunes transition took nearly a decade and involved massive legal and regulatory battles. The agent-readable future may arrive slower and messier than Jones suggests, particularly if the largest platforms actively obstruct it.
Apple's moves to restrict vibe-coding apps and Google's quiet efforts to shut down OpenClaw bots suggest the resistance will be sustained and well-resourced. The question is whether startups and mid-market companies can build enough momentum to force the issue before incumbents find ways to co-opt or control the agent layer themselves.
The 80/20 Problem in Reverse
Perhaps the most underappreciated insight in the piece is what Jones calls the reverse 80/20 problem. Only about 20 percent of a product's meaningful attributes live in structured data. The other 80 percent (the tribal knowledge, the sourcing story, the social impact, the contextual relevance) lives in marketing copy, packaging, and the heads of employees. Making that knowledge agent-readable is an enormous undertaking that goes far beyond technical API work.
We have the problem where like 20% of our data for these products is represented in data structures... Well, 80% of the meaning around the product, the fact that this was a coffee that was processed by a small farmer and is supporting a local school in Ethiopia, all of that, that's in the marketing copy, that's not really in an agent readable format.
This is where the argument becomes most convincing. The shift to agent-readable commerce is not primarily a technology problem. It is a data curation problem, a knowledge management problem, and ultimately an organizational problem. Companies that have spent decades tolerating messy, incomplete product data because humans could fill in the gaps with intuition and forgiveness are facing a reckoning.
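Jones does not prescribe a method for lifting that 80 percent into structure, but the obvious candidate is the same technology creating the pressure. Here is a hedged sketch using OpenAI's chat completions API to extract structured attributes from marketing copy; the model name, prompt, and target schema are all assumptions for illustration, not anything Jones specifies.

```typescript
// One way to attack the "other 80 percent": extract structured attributes
// from free-form marketing copy with an LLM. Model, prompt, and schema are
// assumptions; requires OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

const copy =
  "Grown by a smallholder co-op in Yirgacheffe and washed-processed, " +
  "this light roast supports a local school in Ethiopia.";

async function extractAttributes(marketingCopy: string) {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract product attributes as JSON with keys: originFarm, roast, " +
          "processingMethod, socialImpact. Use null for anything not stated.",
      },
      { role: "user", content: marketingCopy },
    ],
  });
  return JSON.parse(res.choices[0].message.content ?? "{}");
}

extractAttributes(copy).then(console.log);
// e.g. { originFarm: "...Yirgacheffe...", roast: "light", ... }
```

Extraction like this only gets the tribal knowledge into a reviewable draft. The curation, verification, and ownership questions that follow are precisely the organizational problem Jones describes.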
Bottom Line
Jones presents a compelling case that the AI agent revolution is less about the agents themselves and more about the unglamorous infrastructure work required to make them useful. The trillion-dollar McKinsey projection is attention-grabbing, but the real story is in the data plumbing. Companies that treat agent readability as a wrapper around existing APIs will find themselves invisible to the fastest-growing channel for customer interaction. The winners will be those that do the hard, multi-quarter work of making their entire data architecture, including the 80 percent of product meaning that currently lives outside structured systems, legible to machines. Whether this transition happens as quickly as Jones predicts is debatable, but the direction is not.