
Dario Amodei made one mistake. Sam Altman got $110 billion. Here's the full story

Nate B Jones delivers a startling thesis: the AI industry's power dynamic didn't shift because of superior technology, but because Dario Amodei misread the political room while Sam Altman played the long game in the shadows. The author brings a level of operational granularity to the Pentagon's AI integration that mainstream coverage ignores, specifically detailing how Claude was embedded in active combat workflows during the Iran strikes. This isn't just about corporate rivalry; it is a forensic look at how AI has become a load-bearing component of modern warfare, and why refusing to be a commodity might be the most expensive mistake a tech CEO can make.

The Cost of Principled Defiance

The article opens with a jarring timeline that reframes the recent geopolitical chaos. Nate B Jones writes, "Last Friday night, while Anthropic CEO Dario Amodei was still drafting his principled stand against the Pentagon, Sam Altman announced on X that OpenAI had signed a deal to deploy models in the classified networks of the Department of War." This juxtaposition is the piece's engine. The author argues that Amodei's refusal to allow unrestricted military use was interpreted not as a moral stance, but as a supply chain vulnerability. The Pentagon, facing active conflict, could not afford ambiguity. As Jones notes, "Dario Amodei misread the room. He played a principled hand at the wrong table at the wrong time with the wrong counterparties and the result will reshape power dynamics across the AI industry for the next 18 months."


The evidence provided is chillingly specific. The author details how US Central Command utilized Claude for target identification and combat simulations during the strikes on Iran, even after a presidential order to stop using the technology. "The model was simply too deeply embedded in operational workflows to rip out in real time," Jones observes. This suggests that in high-stakes environments, the inertia of deployment outweighs policy directives. Critics might argue that the author conflates the Pentagon's operational desperation with a valid strategic endorsement of Anthropic's capabilities, but the narrative holds weight: once a tool is woven into the kill chain, removing it becomes a tactical impossibility.

The ambiguity is the point here, and it is worth calling out that we have no idea what technical safeguards actually exist on these models in classified environments.

The Technical vs. The Political

A crucial distinction Jones makes is that Amodei's objection was likely technical, not ethical. The author points to Amodei's February 26th statement where he admitted that "Even fully autonomous weapons may prove critical for our national defense." This quote undermines the popular narrative that Anthropic is the anti-war conscience of the industry. Instead, Jones argues, "Amodei is openly signaling that his positions on autonomous weapons are contingent and they are time-limited, a function of model capability, which we all know is accelerating rapidly, not necessarily ethics." The author suggests that Amodei was asking for "human in the loop" oversight, a requirement already codified in Department of War directive 3000.09. By demanding this in public, he forced the Pentagon to choose between a contract that limited their flexibility or a company they could designate as a risk.

The comparison to Google's Project Maven in 2018 is apt. Just as Google claimed its object detection software was non-offensive to avoid backlash, the author asks of "the AI component that processes sensor data, identifies targets, and recommends engagement decisions: is that meaningfully separable from the weapon that fires?" The answer, Jones implies, depends entirely on who is interpreting the contract. OpenAI's strategy was to avoid this public friction entirely. "Sam went private," the author writes, contrasting Altman's behind-the-scenes negotiation with Amodei's blog post. The defense establishment, Jones argues, "tends to reward deference and punishes public defiance."

The $110 Billion Flywheel

The financial fallout of this strategic divergence is staggering. The article details a $110 billion funding round for OpenAI, the largest private financing in history. Nate B Jones contextualizes this scale effectively: "For context, total US venture capital investment across every startup in 2023 was just $170 billion. OpenAI raised 65% of that in a single transaction." The round is not just cash; it is a structural realignment of the entire AI ecosystem. Amazon committed $50 billion, Nvidia $30 billion, and SoftBank another $30 billion. The author describes this as a "circular financing situation" where Nvidia invests in OpenAI, which buys Nvidia chips, which books revenue for Nvidia, and so on.
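The article's headline figures are internally consistent, which is worth verifying given their scale. A quick sanity check, using only the numbers stated above (the three investor commitments and the $170 billion 2023 US venture total), confirms that the commitments sum to the round size and that the "65% of a year's VC" claim is just the ratio of those two figures:

```python
# Sanity-check the round's composition and scale, using figures from the article.
commitments_bn = {"Amazon": 50, "Nvidia": 30, "SoftBank": 30}

round_total_bn = sum(commitments_bn.values())      # sums to 110 ($110B round)
us_vc_2023_bn = 170                                # total US VC in 2023, per the article
share_of_2023_vc = round_total_bn / us_vc_2023_bn  # ~0.647

print(round_total_bn)                      # 110
print(round(share_of_2023_vc * 100))       # 65 (percent)
```

The rounding works out: 110/170 is 64.7%, which the author reasonably reports as 65%.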

The infrastructure implications are equally massive. The Stargate project, a joint venture with SoftBank, Oracle, and MGX, targets 10 gigawatts of capacity by 2029. "For perspective, that is the electricity consumption of a midsized country," Jones writes. This massive buildout is designed to lock in OpenAI as the gravitational center of American AI infrastructure. While Microsoft chose not to participate in the new round, trading exclusivity for a long-term revenue share, the author notes that "Microsoft essentially traded exclusivity for a long-term tax on OpenAI's growth." This leaves OpenAI with a capital structure that is difficult to challenge, even if end-user demand does not materialize at the projected scale.
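The "midsized country" comparison can be checked with back-of-envelope arithmetic. Assuming (a simplification of mine, not the article's) that the 10 gigawatts run continuously at full load, the annual energy draw is capacity times hours per year:

```python
# Back-of-envelope: annual energy implied by 10 GW of continuously running capacity.
capacity_gw = 10            # Stargate target by 2029, per the article
hours_per_year = 8760       # 365 * 24

annual_twh = capacity_gw * hours_per_year / 1000  # GW*h -> GWh, /1000 -> TWh
print(annual_twh)  # 87.6 TWh/year
```

Roughly 88 TWh/year is indeed in the range of a midsized country's annual electricity consumption (Belgium and Finland, for example, are each in the 80–90 TWh band), so the author's comparison holds up, even before accounting for utilization below 100%.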

If you want a sense of how this works in public, on public models: an Israeli product manager named Yonathan Bach built Strike Radar, a real-time dashboard calculating the likelihood of US strikes on Iran, using Claude to write the entire system in six hours with no coding background.

The Anthropic Counter-Narrative

Despite the narrative of total defeat, the author offers a nuanced view of Anthropic's position. Amazon, having invested $8 billion in Anthropic, is largely indifferent to which model wins. "Amazon has no real horse in the model race. It has a cloud to sell and is selling to everybody," Jones states. Anthropic's AWS revenue is projected to hit $25 billion by 2027, and the company has built Project Rainier, an $11 billion data center. The author suggests that while OpenAI won the immediate political battle, Anthropic may still be financially robust. However, the strategic loss is clear: "Anthropic was designated a supply chain risk to national security, an action never before taken against an American company." This designation could limit its ability to partner with other government entities or critical infrastructure providers in the future.

The author also highlights the danger of assuming model equivalence. "It just isn't like that," Jones writes regarding the Pentagon's assumption that all American hyperscalers produce commodity models. The performance gap between models in classified environments is real, and replacing Claude with OpenAI or xAI models in active combat operations is not a simple swap. "Military officials say fully replacing Claude is going to take a while and that assumes the replacements perform at the same level," the author notes. This technical reality is the one variable that could disrupt the Pentagon's current strategy.

Bottom Line

Nate B Jones's strongest argument is the reframing of the Anthropic-OpenAI conflict from a moral debate to a game-theoretic failure of timing and tone. The evidence of Claude's deep integration into the Iran strikes provides a terrifyingly concrete example of why the Pentagon cannot afford to wait for perfect safety guarantees. The piece's biggest vulnerability is its reliance on unverified claims about classified operations, though the author's sourcing from the Wall Street Journal and Bloomberg lends significant credibility. Readers should watch for how the Stargate infrastructure buildout impacts energy grids and whether the projected $280 billion revenue by 2030 can actually materialize to justify this unprecedented capital expenditure.

Deep Dives

Explore these related deep dives:


  • OpenAI

The article discusses Sam Altman's deal with the Department of War and the largest private funding round in history.

  • Anthropic

The article discusses Dario Amodei's principled stand against the Pentagon and Claude being used for combat operations.

Sources

Dario Amodei made one mistake. Sam Altman got $110 billion. Here's the full story

by Nate B Jones · Nate B Jones · Watch video

Last Friday night, while Anthropic CEO Dario Amodei was still drafting his principled stand against the Pentagon, Sam Altman announced on X that OpenAI had signed a deal to deploy models in the classified networks of the Department of War. Hours later, the United States and Israel began bombing Iran. By Saturday morning, Claude was the number one app in the app store and Anthropic was designated a supply chain risk to national security, an action never before taken against an American company. These events are connected by a logic that most commentary has missed.

Dario Amodei misread the room. He played a principled hand at the wrong table at the wrong time with the wrong counterparties, and the result will reshape power dynamics across the AI industry for the next 18 months. Sam Altman played a quieter game, and he ended up walking away with the largest private funding round in history, a fat defense contract, and the structural position to make OpenAI the gravitational center of American AI infrastructure. This represents an enormous reversal of fortune, since up until the last couple of weeks it has been evident that private companies are overwhelmingly choosing Anthropic as they begin to sign enterprise contracts and scale up their AI footprints.

We know this because Anthropic enterprise revenue has been growing sharply month over month, even as OpenAI's enterprise footprint has not been as sticky and OpenAI's penetration rate for enterprise has been slowly declining over the past couple of quarters. It may not be that way anymore. Let's start with Iran and have an honest discussion about how AI is being used in modern combat operations. The Wall Street Journal reported that US Central Command used Claude for intelligence assessments, target identification, and combat simulations during the strikes against Iran, just hours after the president ordered all federal agencies to stop using Anthropic technology.

Yes, you heard that right. The strikes were a joint US-Israeli operation. They killed the Ayatollah at a government compound in Tehran. And the model was simply too deeply embedded in operational workflows to rip out in real time, even after a presidential order.

This actually isn't new. Claude had been used in the January operation to capture Nicolás Maduro in Venezuela and was deployed through Anthropic's partnership with Palantir on Amazon's top secret cloud. The Pentagon had awarded contracts worth ...