Inside OpenAI's unit economics

Azeem Azhar strips away the hype surrounding artificial intelligence valuations to ask a brutal, forensic question: do the numbers actually add up? By treating OpenAI's latest flagship model as a case study in financial decay, Azhar reveals that the industry's most celebrated assets may be losing money the moment they are deployed. This isn't just accounting; it's a stress test for a trillion-dollar bet on automation.

The Illusion of Margins

The piece begins by dismantling the assumption that high revenue equals a healthy business. Azhar writes, "What looks like a simple margin calculation is closer to a forensic exercise: we triangulate reported details, leaks, and Sam Altman's own words to bracket plausible revenues and costs." This approach is vital because it refuses to take the glossy press releases of tech giants at face value. Instead of accepting the narrative that AI is a goldmine, the analysis digs into the raw data of compute costs and staff compensation.

Inside OpenAI's unit economics

The core finding is startling. While the gross margin—the money left after paying for the electricity and hardware to run the models—looks respectable at around 50%, the picture darkens immediately when you include the human and operational overhead. Azhar notes, "If you also subtract other operating costs, including salaries and marketing, then OpenAI most likely made a loss, even without including R&D." This distinction is critical for investors who might be seduced by top-line revenue growth. It suggests that the current business model is burning cash to maintain market share, not generating it.
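The gap between the two margins can be made concrete with a toy calculation. All dollar figures below are illustrative placeholders, not OpenAI's actual financials; only the roughly 50% gross margin echoes the analysis.

```python
# Toy margin calculation. Dollar figures are illustrative placeholders;
# only the ~50% gross margin echoes the analysis.

def gross_margin(revenue, compute_cost):
    """Margin after compute (electricity and hardware) only."""
    return (revenue - compute_cost) / revenue

def operating_margin(revenue, compute_cost, opex):
    """Margin after compute plus salaries, marketing, etc. (pre-R&D)."""
    return (revenue - compute_cost - opex) / revenue

revenue = 2.0        # $bn over the period (hypothetical)
compute_cost = 1.0   # $bn, chosen to give a 50% gross margin
opex = 1.2           # $bn of salaries and marketing (hypothetical)

print(f"gross margin:     {gross_margin(revenue, compute_cost):.0%}")
print(f"operating margin: {operating_margin(revenue, compute_cost, opex):.0%}")
# A respectable gross margin can coexist with a pre-R&D operating loss.
```

With these placeholder numbers the gross margin is 50% while the operating margin is already negative, before a single R&D dollar is counted.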

"Even an unprofitable model demonstrates progress, which attracts customers and helps labs raise money to train future models — and that next generation may earn far more."

Azhar acknowledges that this loss-making cycle is standard in high-growth tech, comparing the situation to Uber's long road to profitability. However, the speed of depreciation in AI changes the math. The analysis points out that "developing and running AI models is loss-making" because the window to monetize a model before it becomes obsolete is shrinking. Critics might argue that this is a temporary phase of infrastructure build-out, but the data suggest that spending is growing faster than revenue can catch up.

The Race Against Obsolescence

The most compelling part of Azhar's argument is the concept of the "depreciating infrastructure" problem. In traditional software, a product can generate revenue for years. In AI, a model's economic life is measured in months. Azhar explains, "So if GPT-5 is at all representative, then at least for now, developing and running AI models is loss-making." The analysis estimates that OpenAI spent more on research and development in the four months leading up to a model's launch than it earned in gross profit during that model's entire four-month lifespan.

This creates a terrifying feedback loop. To stay ahead, companies must spend billions on the next model before the current one has paid for itself. Azhar writes, "In practice, model tenures might indeed be too short to recoup R&D costs." This reframes the competition not as a race to the best technology, but a race to the most efficient capital allocation. If a competitor releases a slightly better model in three months, the previous investment becomes stranded capital.
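The payback question above reduces to a one-line inequality: does cumulative gross profit over a model's tenure cover its R&D bill? The sketch below uses hypothetical figures; only the four-month tenure mirrors the GPT-5 case discussed in the analysis.

```python
# Does a model's gross profit over its tenure recoup its R&D cost?
# All figures are hypothetical; the 4-month tenure mirrors the
# GPT-5 case discussed in the analysis.

def recoups_rd(monthly_gross_profit, tenure_months, rd_cost):
    """True if cumulative gross profit covers R&D before obsolescence."""
    return monthly_gross_profit * tenure_months >= rd_cost

rd_cost = 2.0               # $bn to develop the model (hypothetical)
monthly_gross_profit = 0.4  # $bn per month after compute (hypothetical)

for tenure in (4, 6, 12):
    ok = recoups_rd(monthly_gross_profit, tenure, rd_cost)
    print(f"{tenure:>2}-month tenure: {'recoups R&D' if ok else 'stranded capital'}")
```

Under these assumptions a four-month tenure leaves the R&D unrecovered, while a six- or twelve-month tenure would pay it back, which is exactly why model longevity matters so much to the economics.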

"Frontier models are like rapidly-depreciating infrastructure: their value must be extracted before competitors or successors render them obsolete."

The analysis highlights that external competition, such as rival models from Google or Anthropic, forces this acceleration. This isn't just about innovation; it's about survival. The pressure to release new models constantly means that the "state-of-the-art" is often challenged within months, making it incredibly difficult to build a sustainable profit margin on any single iteration.

The Path to Profitability

Despite the grim short-term numbers, Azhar does not declare the industry bankrupt. Instead, the commentary shifts to the long-term thesis. The argument rests on the belief that if AI can truly automate a significant portion of human labor, the total addressable market is so vast that even thin margins will yield trillions. Azhar posits, "Many higher-ups at AI companies expect AI systems to outcompete humans across virtually all economically valuable tasks. If you truly believe that in your heart of hearts, that means potentially capturing trillions of dollars from labor automation."

However, the path there is fraught with uncertainty. The analysis suggests that profitability might come from diversification—ads, enterprise contracts, and algorithmic efficiency—rather than just selling compute. Azhar notes, "Algorithmic innovations mean that running models could get many times cheaper each year, and possibly much faster." This is the industry's best hope: that the cost of intelligence drops faster than the cost of developing it.
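The efficiency hope can be framed as a compounding cost decline: if the cost of serving a model falls by a constant factor each year while prices hold, the gross margin climbs toward 100%. The threefold annual decline below is an illustrative assumption, not a figure from the analysis.

```python
# If serving cost falls by a constant factor each year while price
# holds, gross margin climbs toward 100%. The 3x/year decline is an
# illustrative assumption, not a figure from the analysis.

def margin_after(years, price=1.0, cost0=0.5, decline=3.0):
    """Gross margin after `years` of compounding cost declines."""
    cost = cost0 / (decline ** years)
    return (price - cost) / price

for y in range(4):
    print(f"year {y}: margin {margin_after(y):.1%}")
```

Starting from a 50% margin, two years of threefold declines would push the margin above 90%, which is the shape of the bet the industry is making.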

"Compute margins are falling, enterprise deals are stickier, and models can stay relevant longer than the GPT-5 cycle suggests."

A counterargument worth considering is whether the market can sustain the capital expenditure required to keep this cycle going if revenue growth slows. If the "intelligence explosion" is delayed, the cash burn could become unsustainable before the breakthrough arrives. Yet, Azhar's analysis remains grounded in the idea that the trend is moving toward efficiency, even if the current snapshot looks like a loss.

Bottom Line

Azeem Azhar's forensic breakdown exposes a fragile reality: the AI boom is currently running on a deficit, fueled by the hope that future efficiency will outpace current costs. The strongest part of the argument is the identification of model depreciation as a fundamental economic hurdle, not just a technical challenge. The biggest vulnerability remains the assumption that the market will tolerate infinite losses in exchange for a future that may take years to materialize. Investors and observers should watch closely to see if the gap between compute costs and revenue can actually close before the capital runs out.

Sources

Inside OpenAI's unit economics

by Azeem Azhar

AI companies are being priced into the hundreds of billions. That forces one awkward question to the front: do the unit economics actually work?

Jevons’ paradox suggests that as tokens get cheaper, demand explodes. You’ve likely felt some version of this in the last year. But as usage grows, are these models actually profitable to run?

In our collaboration with Epoch AI, we tackle that question using OpenAI’s GPT-5 as the case study. What looks like a simple margin calculation is closer to a forensic exercise: we triangulate reported details, leaks, and Sam Altman’s own words to bracket plausible revenues and costs.

Here’s the breakdown.

— Azeem

Can AI companies become profitable?

Lessons from GPT-5’s economics.

Originally published on Epoch AI’s blog. Analysis by Jaime Sevilla, Exponential View’s Hannah Petrovic, and Anson Ho

Are AI models profitable? If you ask Sam Altman and Dario Amodei, the answer seems to be yes — it just doesn’t appear that way on the surface.

Here’s the idea: running each AI model generates enough revenue to cover its own R&D costs. But that surplus gets outweighed by the costs of developing the next big model. So, despite making money on each model, companies can lose money each year.

This is big if true. In fast-growing tech sectors, investors typically accept losses today in exchange for big profits down the line. So if AI models are already covering their own costs, that would paint a healthy financial outlook for AI companies.

But we can’t take Altman and Amodei at their word — you’d expect CEOs to paint a rosy picture of their company’s finances. And even if they’re right, we don’t know just how profitable models are.

To shed light on this, we looked into a notable case study: using public reporting on OpenAI’s finances, we made an educated guess on the profits from running GPT-5, and whether that was enough to recoup its R&D costs. Here’s what we found:

Whether OpenAI was profitable to run depends on which profit margin you’re talking about. If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

But if you also subtract other operating costs, including salaries and marketing, then OpenAI most likely made a loss, ...