X-raying OpenAI’s unit economics

The Real Math Behind the AI Buildout

Azeem Azhar and a team of researchers just did something no one had bothered to attempt: they pieced together every public data point about OpenAI's finances and tried to answer the question that keeps the entire industry awake at night. Does any of this actually make money?

The answer, it turns out, is complicated — and more uncomfortable for the AI optimists than they'd like.

What the Numbers Actually Show

Azhar writes, "it seems likely that OpenAI during the past year, especially while operating GPT-5, was making more money than the cost of the compute — which is the primary expense of operating their product." Revenue exceeds the electricity bill. That's the good news.

The bad news follows immediately. After accounting for everything else — staff, sales, administrative overhead, and the twenty percent revenue share OpenAI pays Microsoft under a deal struck years before ChatGPT ever existed — the margins collapse to paper-thin. Or worse.

As Azhar puts it, the most shocking finding emerged from the R&D side: "if you look at how much they spent on R&D in the four months before they released GPT-5, that quantity was likely larger than what they made in gross profits during the entire tenure of GPT-5 and GPT-5.2." In other words, the cost of building the next model exceeds the total profit the current model ever generates. The treadmill is moving faster than the runner.
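The treadmill dynamic is easy to make concrete with a toy model. All figures below are illustrative placeholders, not OpenAI's actual numbers; the point is structural: serving a model can be gross-profitable while the lifecycle still loses money, because the successor's R&D bill exceeds the current model's total gross profit.

```python
# Toy lifecycle model with ILLUSTRATIVE numbers (not OpenAI's actuals).

def lifecycle_profit(monthly_revenue, compute_share, lifespan_months, next_model_rd):
    """Gross profit over the model's commercial life, minus R&D for its successor.
    All money amounts in $B."""
    gross_profit = monthly_revenue * (1 - compute_share) * lifespan_months
    return gross_profit - next_model_rd

# Hypothetical: $1.0B/month revenue, 50% of revenue spent on compute,
# an 8-month commercial lifespan, and $5B spent training the successor.
profit = lifecycle_profit(1.0, 0.50, 8, 5.0)
print(f"Lifecycle profit: ${profit:.1f}B")  # negative despite a healthy gross margin
```

Under these placeholder numbers the model earns $4B in gross profit but its successor costs $5B to build; only a longer lifespan or cheaper R&D flips the sign.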

"What they're trying to do is convince investors that they have a business and research product worth scaling as much as possible, driven by the conviction that through scale, they'll unlock new capabilities which in turn will unlock new markets."

This is the engine driving $650 billion in planned capital expenditure commitments across big tech in 2026. Not proven profitability. Investor conviction.

Models as Depreciating Assets

The conversation between Azhar, Jaime Sevilla, Hannah Petrovic, and Matt Robinson surfaces an underappreciated dynamic: frontier models have shockingly short commercial lifespans. A model family stays preeminent for only a few months before the next generation renders it obsolete.

Consumers jump immediately to whatever is newest. Enterprises lag behind — they don't rewrite their entire API integration every time a new version drops. But the gap is closing.

Azhar notes that the uncertainty is structural: "to what extent do you actually learn and prepare for your next model based on the short life of the existing model?" The learning compounds in training data choices, reinforcement learning techniques, and operational knowledge from running models at massive scale. But even OpenAI probably can't quantify exactly how much of that learning is reusable.

This creates a business model that looks nothing like traditional software. The margins are lower. The asset depreciation is faster. The reinvestment cycle is brutal.

The Two Paths: Breadth Versus Focus

The research highlights a strategic divergence worth watching.

OpenAI is trying to capture everything simultaneously — sovereign governments through infrastructure partnerships, enterprise contracts, university deals, and the consumer market all at once. Azhar points out this approach "often fell foul of how Y Combinator — which Sam [Altman] used to run — would encourage founders to work: find a beachhead, stick to it, then grow."

Anthropic, by contrast, pursues a narrower strategy. They focus on business customers willing to spend substantial sums annually on agents — something priced beyond what individuals can afford but within range of what a company pays for a marginal employee. As Azhar observes, "you don't need the speculative investments in product to consumerise, or in the sales and marketing for mass awareness."

There's another drag on OpenAI's independence: the twenty percent Microsoft revenue share. Azhar calls it "a deal they did years ago, before ChatGPT, for distribution and compute" — and one that now actively undermines the economics of running a standalone business.

The Bottleneck Nobody's Talking About

When asked about scaling constraints, Sevilla makes a counterintuitive argument. Everyone points to energy as the bottleneck. But energy is solvable — building ten or a hundred gigawatts of capacity represents roughly a ten percent increase over installed US capacity, and similar expansions have happened before.
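Sevilla's "roughly ten percent" claim is easy to sanity-check, assuming the commonly cited round figure of about 1,200 GW of installed US generating capacity (that figure is not from the article):

```python
# Sanity check on the scale of the energy ask.
# Assumes ~1,200 GW installed US generating capacity (a commonly cited
# round figure, not stated in the article).
us_installed_gw = 1200
ai_buildout_gw = 100  # upper end of the 10-100 GW range discussed

increase = ai_buildout_gw / us_installed_gw
print(f"{increase:.1%} increase over installed capacity")  # ~8.3%, i.e. "roughly ten percent"
```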

The real constraint is GPU manufacturing, concentrated in a handful of Taiwanese factories that have struggled to scale production meaningfully. "That's probably where the long-term bottleneck ends up," Sevilla says.

Azhar pushes further: the permitting backlogs and grid queues are self-imposed barriers, not physical laws. But supply chain dependencies — copper, optical fiber, specialized manufacturing — create genuine friction that no amount of ambition easily resolves.

Gross Margins Versus Net Reality

The findings that surprised the researchers most are telling.

Hannah Petrovic notes she "was actually quite surprised the gross margins were where they were — around 50%. That's pretty good for a model where people are saying they're haemorrhaging money year after year." Half of revenue covers compute costs. Not bad — on the surface.

But the full picture tells a different story. As Sevilla concludes: "I came away with a more pessimistic view than I had... I was expecting to find resoundingly that they already have a profitable model when you look at it through the lifecycle — that gross profit completely offsets the cost of development. It doesn't."

What looks like profitability at the model level dissolves once you account for the full cost of building the next one.
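A sketch of how a ~50% gross margin can dissolve on the way to the operating line. Only the ~50% compute share and the 20% Microsoft revenue share come from the article; the remaining opex share is an illustrative placeholder, not a reported figure.

```python
# Per $1 of revenue. Compute share (~50%) and Microsoft revenue share (20%)
# are from the article; other_opex is an ILLUSTRATIVE placeholder.
revenue    = 1.00
compute    = 0.50   # ~50% gross margin per the researchers' estimate
msft_share = 0.20   # 20% revenue share paid to Microsoft
other_opex = 0.25   # staff, sales, admin -- assumed for illustration

operating_margin = revenue - compute - msft_share - other_opex
print(f"Operating margin per $1 of revenue: ${operating_margin:.2f}")  # paper-thin
```

And note this is before R&D: whatever survives to the operating line still has to fund the next model.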

Critics might note that this entire analysis relies on public data points and reasonable assumptions — the actual internal numbers could tell a different story entirely. The methodology, while rigorous, necessarily involves estimation and projection. Others might argue that judging AI economics by current unit costs ignores the possibility of sudden algorithmic breakthroughs that dramatically reduce inference costs. And the enterprise demand side, which Azhar argues has been underreported, could shift the math faster than skeptics expect if organizations begin treating AI agents as genuine workforce additions rather than experimental tools.

Bottom Line

OpenAI's gross margins prove that running frontier models isn't inherently unprofitable — but the R&D treadmill required to stay competitive swallows those gains before they can compound. The industry isn't a software business. It's something new, something still being defined, and the companies that win may be the ones willing to narrow their focus rather than the ones trying to own every market at once.

Sources

X-raying OpenAI’s unit economics

by Azeem Azhar

AI companies are being valued in the hundreds of billions. $650 billion in capital expenditure commitments are being made by big tech for 2026. Yet one question remains unanswered: does it make economic sense?

We recently partnered with Epoch AI to analyze GPT-5’s unit economics, and figure out whether frontier models can be profitable (full breakdown here).

To dig deeper into what our results tell us about the wider industry, we hosted a live conversation last week between myself, Hannah Petrovic, Jaime Sevilla, moderated by Matt Robinson.

We cover:

  • The research findings

  • Possible paths to profitability

  • The OpenAI vs Anthropic playbook

  • Winning the enterprise

  • Why this research made some bulls more pessimistic

  • What the market gets wrong

Or read our notes:

What did you actually find?

Matt: For someone just getting into the research, what’s the big takeaway — and how did you even think about building a framework to analyse a business like this?

Jaime: To our understanding, no one had really taken on this task of piecing together all the public information about the finances of OpenAI — or any large AI company — and trying to paint a picture of what their margins actually look like. So we did this hermeneutic exercise of hunting for every data point we could find and trying to make sense of it.

The two most important takeaways: first, it seems likely that OpenAI during the past year, especially while operating GPT-5, was making more money than the cost of the compute — which is the primary expense of operating their product. But they appear to have made a very thin margin, or even lost money, after accounting for all other operating expenses: staff, sales and marketing, administrative costs, and the revenue-sharing agreement with Microsoft.

Second — and this is the part I found quite shocking — if you look at how much they spent on R&D in the four months before they released GPT-5, that quantity was likely larger than what they made in gross profits during the entire tenure of GPT-5 and GPT-5.2.

Hannah: A lot of our methodology was based on numbers we could find historically, then trying to project what would happen through the rest of 2025. For example, we had data showing 2024 was $1 billion in sales and marketing, and H1 of 2025 was $2 billion. So we built the picture using constraints ...