Packy McCormick cuts through the noise of the AI investment frenzy by dismantling the most seductive, yet dangerous, narrative in modern tech: the idea that burning cash automatically signals a future monopoly. He argues that while the "Amazon playbook" is often cited to justify massive losses at frontier AI labs, the analogy collapses under the weight of specific economic realities that investors rarely pause to scrutinize. This is not just a critique of hype; it is a necessary stress test of whether the current capital expenditure binge is building a moat or merely subsidizing a race to the bottom.
The Amazon Mirage
McCormick begins by identifying the core fallacy driving current valuations. He recounts a common internet exchange where critics point out that OpenAI has no profitable business lines, only to be met with the retort, "You could have said the same thing about Amazon!" McCormick rejects this shortcut immediately. "Amazon's success has done a great deal of harm to a great number of companies," he writes, noting that while the retail giant's long-term vision was sound, the specific mechanics of its growth are often misunderstood by those trying to replicate them.
The author's analysis hinges on the distinction between strategic loss and structural inefficiency. He points to Jeff Bezos's 1997 letter to shareholders, where the founder explicitly stated, "When forced to choose between optimizing the appearance of our GAAP accounting and maximizing the present value of future cash flows, we'll take the cash flows." McCormick explains that Bezos's strategy was not merely about spending money; it was about leveraging a "negative working capital engine" where growth generated cash, which funded infrastructure, which in turn drove lower prices and more growth.
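The "negative working capital engine" McCormick describes can be made concrete with a toy calculation. The numbers below are invented for illustration (they are not Amazon's actual figures, and the payment terms are assumptions): when customers pay at the point of sale but suppliers are paid weeks later, faster growth releases cash rather than consuming it.

```python
# Toy model of a negative working capital engine (hypothetical numbers).
# The cash conversion cycle = receivable days + inventory days - payable days.
# A negative cycle means the business floats on suppliers' money, so each
# increment of revenue growth frees up cash instead of tying it up.

def cash_released_by_growth(monthly_revenue, growth_rate,
                            payable_days=60, receivable_days=0,
                            inventory_days=30):
    """Cash freed (positive) or consumed (negative) by one month of growth."""
    cycle_days = receivable_days + inventory_days - payable_days
    added_revenue = monthly_revenue * growth_rate
    # Working capital change per dollar of new revenue, scaled to a month
    return -added_revenue * cycle_days / 30

# A retailer doing $100M/month, growing 10%/month, paying suppliers in 60 days:
# the cycle is -30 days, so growth itself generates cash to reinvest.
print(cash_released_by_growth(100e6, 0.10))
```

This is the mechanism that made Bezos's losses self-funding: flip `payable_days` to 0 in the sketch and the same growth consumes cash instead, which is closer to the position of a business with heavy upfront costs and no float.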
"Bezos wasn't afraid of spending money today for cashflow tomorrow, why should Neumann be? But we often put in offers on the same spaces as WeWork, and we had these super finely tuned underwriting models, and looking at our underwriting models versus the prices they paid to outbid us on certain spaces, it was clear that no matter how optimistic your monthly revenue projections, there was just no way they were going to make money on each space."
This comparison to WeWork is the piece's most potent warning. McCormick, drawing on his own six years in the proptech industry, illustrates that analogies can be fatal if the underlying unit economics don't align. He notes that Adam Neumann admitted to letting economics get away from him, yet investors often justified the losses by repeating, "It takes money to make money." The result was a bankruptcy that could have been avoided with a clearer view of the capital structure.
Critics might argue that the AI sector is fundamentally different from real estate because software scales with near-zero marginal cost, unlike physical spaces. However, McCormick counters that the current AI model involves high variable costs for every token generated, making the "Amazon" comparison even more tenuous than the WeWork one.
The Uber Exception and the Token Trap
The commentary shifts to examine where the analogy actually works. McCormick concedes that Uber lost significantly more money than WeWork, burning $9.1 billion in 2022 alone, yet emerged as a dominant, profitable entity. "If you'd looked at Uber and said, 'Hey, Amazon lost money too, let me take a closer look,' and then realized that Uber was burning money in actually kind of the same way that Amazon was, to get its product as close to customers as possible in order to improve the customer experience," he suggests, then one could have made a smart investment.
The key differentiator, he argues, was that Uber's spending directly improved the customer experience and created a network effect that competitors couldn't match. In contrast, the current AI landscape is crowded with labs "praying at the same altar, the Altar of Scaling Laws." McCormick warns against the "lines of code maxxing" mentality, where the industry confuses activity with value. "You could look at a successful software company, see that they'd written a lot of code, and lazily analogize: LOCs are correlated with success, lines of code are themselves a goal," he writes, invoking Goodhart's Law to suggest that optimizing for token generation may be a trap.
"At some point, when you've gone too deep, the ghost of Jeff Bezos appears and asks you what any of these tokens you're maxxxxing has done for your customers."
The author highlights that unlike Amazon, which had no real competition in its specific strategy of logistics and distribution, AI labs face "multidenominational" competition where everyone is fighting with the same weapons. While Anthropic and OpenAI are growing revenue at unprecedented rates, with OpenAI claiming $2 billion in monthly revenue, the variable cost structure remains a vulnerability. Each query costs money to serve, and unlike Amazon's largely fixed-cost infrastructure, the marginal cost of AI services does not necessarily fall as volume grows.
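The fixed-versus-variable distinction above can be sketched with a hypothetical model (all figures invented, not drawn from any lab's actual costs): in a fixed-cost business, unit cost falls toward zero as volume grows; with a meaningful per-query marginal cost, unit cost plateaus at that marginal cost no matter how large volume gets.

```python
# Illustrative contrast of two cost structures as volume scales
# (fixed_cost and cost_per_query are made-up parameters).

def unit_cost_fixed(volume, fixed_cost=1_000_000):
    """Unit cost when serving is dominated by fixed infrastructure:
    amortization drives cost per unit toward zero."""
    return fixed_cost / volume

def unit_cost_variable(volume, fixed_cost=1_000_000, cost_per_query=0.02):
    """Unit cost when every query also incurs marginal compute spend:
    cost per unit floors at cost_per_query, however large volume gets."""
    return fixed_cost / volume + cost_per_query

for volume in (1_000_000, 10_000_000, 100_000_000):
    print(volume,
          round(unit_cost_fixed(volume), 4),
          round(unit_cost_variable(volume), 4))
```

The first column of unit costs shrinks tenfold with each tenfold increase in volume; the second converges on the per-query floor. That floor is the structural difference McCormick points to: scale alone cannot amortize it away.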
The Bottom Line
McCormick's strongest contribution is his refusal to accept "cash burn" as a proxy for "future monopoly," forcing investors to look past the seductive Amazon narrative and examine the actual unit economics of AI. The argument's biggest vulnerability is the uncertainty of the technology itself; if a breakthrough in model efficiency or a new application layer emerges that drastically lowers costs, the current high burn rate could be vindicated as a necessary bridge. Readers should watch for whether the major labs can transition from a variable-cost model to a true fixed-cost infrastructure, a shift that would finally make the Amazon analogy valid.