← Back to Library

Dario Amodei Made One Mistake. Sam Altman Got $110 Billion. Here's the Full Story.

{"output": "Here's what happens when you turn down the Pentagon: Dario Amodei's Anthropic gets labeled a supply chain risk to national security. Sam Altman's OpenAI walks away with $110 billion and the most significant defense contract in AI history. The story no one is telling is how two companies took opposite paths — and why the difference between public defiance and quiet deference just reshaped the entire industry.

Iran Strikes and the New Warfare

The US and Israel began bombing Iran on Friday night, hours after Sam Altman announced OpenAI had signed a deal to deploy models in classified Pentagon networks. That same evening, while Anthropic CEO Dario Amodei was still drafting his principled stand against unrestricted military use, Altman's deal went live.

The timing wasn't coincidental. The Wall Street Journal reported that US Central Command used Claude — Anthropic's flagship model — for intelligence assessments, target identification, and combat simulations during the strikes against Iran. This happened after a presidential order to federal agencies to stop using Anthropic technology. The strikes killed an Iranian official at a government compound in Tehran.

The model was too deeply embedded to rip out in real time. Even after a presidential order, the Pentagon couldn't phase out a tool running inside active combat operations.

This wasn't new. Claude had been used in January's operation to capture Nicolas Maduro in Venezuela, deployed through Anthropic's partnership with Palantir on Amazon's top-secret cloud. The Pentagon had awarded contracts worth up to $200 million through its chief digital and AI office for frontier AI prototyping. That classified deployment ran almost exclusively on Claude.

Israeli product manager Yonathan Bach built StrikeRadar, a real-time dashboard calculating the likelihood of US strikes on Iran, using Claude to write the entire system in six hours despite having no coding background. The tool pulls flight cancellation data, military aircraft movements, and news feeds, and computes a probability score. That is the capability envelope we're operating in.
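The article doesn't describe StrikeRadar's internals, but a dashboard like this reduces to weighted signal aggregation. Here is a minimal sketch of that idea — the signal names, weights, and logistic squashing are all illustrative assumptions, not a description of the real tool:

```python
import math

def strike_probability(signals: dict, weights: dict) -> float:
    """Combine normalized signals (each 0.0-1.0) into one probability:
    a weighted sum pushed through a logistic function."""
    score = sum(weights[name] * value for name, value in signals.items())
    bias = -sum(weights.values()) / 2  # center the curve at "mixed" signals
    return 1.0 / (1.0 + math.exp(-(score + bias)))

# Hypothetical inputs — not real data from the tool described above.
signals = {
    "flight_cancellations": 0.8,   # share of regional routes cancelled
    "military_air_traffic": 0.6,   # anomalous tanker/transport sorties
    "news_sentiment": 0.7,         # escalation language in headlines
}
weights = {
    "flight_cancellations": 3.0,
    "military_air_traffic": 2.5,
    "news_sentiment": 1.5,
}

print(f"{strike_probability(signals, weights):.0%}")  # prints 81% for these inputs
```

The point is not the specific numbers but how little machinery is involved: a few public data feeds, a weighting scheme, and a squashing function get you a live "strike likelihood" gauge.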

AI models are no longer supplemental to intelligence operations. Just as they're reshaping work, they're reshaping how intelligence and combat occur. They compress the observe-orient-decide-act (OODA) loop so that real-time reactions are feasible where they previously took days of analyst work.

The Amodei vs Altman Showdown

After the Maduro raid, an Anthropic employee reportedly asked a Palantir counterpart how Claude was used in that raid. That seemingly innocent inquiry set off a chain of events still unfolding.

Defense Secretary Peter Hegseth gave Anthropic a deadline last week: either allow unrestricted use for any lawful military purpose or be designated a supply chain risk. Anthropic refused. Hegseth designated the company as a supply chain risk and announced a six-month phase-out.

What the Iran strikes proved is that you cannot phase out a tool running inside active combat operations, no matter how much you want to. The Pentagon has since reached deals with OpenAI and xAI for classified settings, but military officials say fully replacing Claude will take a while — and assumes the replacements perform at the same level.

Anyone who has used xAI, OpenAI, and Anthropic will tell you these are very different models. You should not assume equivalent performance on equivalent missions. The Pentagon's assumption that all American hyperscalers produce models that are one-to-one equivalents is one of its larger mistakes.

Now here's what most coverage missed: Dario Amodei's objection isn't moral. It's technical. He's saying the models aren't reliable enough yet. In his February 26th statement, Amodei wrote that he believes deeply in the existential importance of using artificial intelligence to defend the United States. He said Anthropic supports between 98 and 99% of Pentagon use cases and that partially autonomous weapons are vital to the defense of democracy.

Then he said something significant: "Even fully autonomous weapons may prove critical for our national defense."

Amodei's objection is contingent and time-limited — a function of model capability, which we all know is accelerating rapidly. When the models get reliable enough, the red line can move. Anthropic's demand for human-in-the-loop oversight already exists. It's codified in Department of War Directive 3000.09. Anthropic was actually asking for what the law already provides.

The Pentagon likely refused to write it into the contract because doing so would potentially give Anthropic a legal basis to object to specific deployments. The department wanted to retain as much flexibility as it could. The ambiguity is the point.

One telling example: OpenAI's deal includes three stated red lines — no mass domestic surveillance, no autonomous weapons, no high-stakes automated decisions like social credit systems. These are nearly identical to what Amodei and Anthropic were asking for. The differences are in implementation.

OpenAI's deployment architecture is cloud-only, with OpenAI engineers embedded alongside Pentagon staff, which physically prevents the models from being integrated into weapons hardware. OpenAI staff were told that the government agreed to let OpenAI build its own safety stack.

Amodei chose to go public while wrestling with the department; Altman went private. The defense establishment tends to reward deference and punish public defiance. You negotiate behind closed doors, you don't embarrass your counterparty, and you don't publish blog posts implying the Pentagon might deploy unreliable autonomous weapons.

Whether Amodei was right on the substance is entirely different from whether he played the game well. In terms of game theory, he absolutely lost.

The Money Story

On the same day as the Pentagon deal, OpenAI announced a $110 billion funding round — the largest private financing in history. The round values OpenAI at $730 billion pre-money, $840 billion post-money. For context, total US venture capital investment across every startup in 2023 was just $170 billion. OpenAI raised 65% of that in a single transaction.
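The round's arithmetic is worth checking against the figures as reported:

```python
# Sanity check on the reported funding-round math (all figures in $B).
pre_money = 730   # valuation before the round
raise_amt = 110   # new capital in this round
post_money = pre_money + raise_amt
assert post_money == 840  # matches the reported post-money valuation

us_vc_2023 = 170  # total US venture investment in 2023, per the article
share = raise_amt / us_vc_2023
print(f"{share:.0%} of all 2023 US VC in one transaction")  # prints 65%
```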

Who funded this?

Amazon committed $50 billion, though only $15 billion is unconditional; the remaining $35 billion is contingent on an IPO or an undisclosed AGI milestone. Alongside the equity, the companies expanded their cloud agreement by $100 billion over eight years — capacity OpenAI certainly needed. AWS becomes the exclusive third-party distributor for OpenAI Frontier, the company's enterprise agent platform. That is a clear loss for Microsoft.

They're also co-developing a stateful runtime environment on Bedrock, a persistent context layer that lets AI agents maintain memory across sessions. This is Andy Jassy coming for Google Cloud and for Azure. Amazon invests equity to guarantee a cloud contract; the cloud contract locks in Trainium chip consumption; and that consumption funds Amazon's long-term silicon ecosystem bet.

Nvidia invested $30 billion, replacing an earlier letter of intent that collapsed in January. The capital will be paired with deployment of Nvidia's Vera Rubin architecture at scale: 3 gigawatts of inference capacity and 2 gigawatts for training. If OpenAI is going to become the center of gravity for AI in the United States, Nvidia needs to be a player here.

SoftBank committed $30 billion, bringing its total OpenAI investment to $64.6 billion. As chairman of the $500 billion Stargate project and majority owner of ARM, Masayoshi Son is assembling an integrated stack from chip IP to data centers to frontier models. S&P has warned that the pace of investment is pressuring SoftBank's creditworthiness.

Notably absent: Microsoft. The company that invested $13 billion and holds a 27% stake chose not to participate. Microsoft renegotiated its partnership in late 2025, giving up its right of first refusal on new cloud workloads — hence the AWS deal — in exchange for a commitment from OpenAI to pay a 20% revenue share through 2032. Microsoft essentially traded exclusivity for a long-term tax on OpenAI's growth.

The Infrastructure Buildout

Stargate, a joint venture with SoftBank, Oracle, and Abu Dhabi's MGX, targets half a trillion dollars in AI infrastructure spend and 10 gigawatts of capacity by 2029. Phase 1 delivered over 200 megawatts in September 2025. Phase 2 is expected to deliver roughly another gigawatt in mid-2026.

Oracle will deploy over 450,000 GB200 GPUs under a 15-year lease. Beyond the Abilene site, additional Stargate sites are announced or under construction in New Mexico, Michigan, Wisconsin, and additional Texas locations. Total planned capacity exceeds 8 gigawatts with more than $450 billion committed.

Separately, OpenAI has a deal with Broadcom for 10 gigawatts of custom inference chips, codenamed Titan; is committed to 6 gigawatts of AMD MI-series GPUs; and retains the Nvidia partnership at 10 gigawatts. In total, we're looking at roughly 26 gigawatts across three chip architectures plus custom silicon. For perspective, that is the electricity consumption of a mid-sized country.

Cloud commitments are equally staggering: $250 billion to Microsoft Azure through 2032, $38 billion to AWS, and roughly $300 billion to Oracle. Add the new $100 billion AWS expansion and the total is nearly $700 billion in cloud commitments.
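The "nearly $700 billion" figure only reconciles once the $100 billion AWS expansion from the funding round is counted alongside the base commitments:

```python
# Reported cloud commitments, in $B. The AWS expansion is the $100B
# eight-year deal announced alongside the equity round.
cloud_commitments = {
    "Azure (through 2032)": 250,
    "Oracle": 300,
    "AWS (base)": 38,
    "AWS (expansion)": 100,
}
total = sum(cloud_commitments.values())
print(total)  # prints 688 — i.e., "nearly $700 billion"
```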

Against this spending, OpenAI's annualized revenue topped $20 billion by late 2025. Projections show $100 billion by 2029 and $280 billion by 2030. Losses are projected at $14 billion this year with cumulative losses topping $44 billion and profitability not reached until 2029. HSBC estimates a $20-plus billion funding shortfall even after everything committed so far.

An IPO at a valuation approaching $1 trillion is planned to close that gap — either late this year or next year.

Counterargument

Critics might note that framing this as a simple game of public defiance versus private deference misses the deeper question: should any company be in a position where refusing a Pentagon contract means being designated a supply chain risk? The power imbalance here isn't about negotiation strategy — it's about whether the defense establishment's leverage over American companies is itself the problem worth examining.

Bottom Line

The story underneath this story is that AI has become load-bearing infrastructure for modern warfare, and whoever controls the relationship with the Pentagon now controls the industry's center of gravity. Sam Altman's quiet deal-making just bought him $110 billion and a structural position as the gravitational center of American AI. Dario Amodei's public principled stand earned him a supply chain risk designation. The lesson isn't about ethics — it's about leverage. And in Washington, public defiance is a liability.
