
The Pentagon Picked a Fight With Anthropic and Got a Civics Lesson

In July 2025, Anthropic signed a $200 million contract with the Pentagon, becoming the first AI lab deployed on the Department of Defense's classified network. By February 2026, that contract was dead, Anthropic was designated a "supply chain risk to national security," and the president of the United States had ordered every federal agency to stop using the company's technology. Alberto Romero, writing from Europe for The Algorithmic Bridge, walks through what happened, what it means, and why most of the commentary misses the bigger picture.

Romero structures his analysis as a three-act play, and the framing works. Act one is a meticulous timeline. Act two argues the immediate fallout matters less than people think. Act three is the real thesis: AI just stopped being a technology story and became a power story.

The Timeline That Matters

The core facts are damning in their simplicity. Anthropic's original contract included two restrictions: no mass surveillance of American citizens, no fully autonomous weapons without human oversight. Defense Secretary Pete Hegseth issued a memo directing all DoD AI contracts to adopt "any lawful use" language, which collided directly with those restrictions. When Dario Amodei refused to budge, the response was swift and disproportionate.

A senior defense official says they're "going to make sure they pay a price."

The supply chain risk designation -- previously reserved for foreign adversaries like Huawei -- was applied to an American company for the sin of insisting on contractual guardrails. Romero captures the absurdity well.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military.

That quote from Hegseth reframes a private company's contractual terms as an act of insurrection. It is a remarkable piece of rhetoric, and Romero is right to highlight it.

The Altman Maneuver

The most revealing subplot is what Sam Altman did during the crisis. On Thursday, Altman publicly backed Anthropic's position -- telling CNBC that OpenAI shared the same red lines on autonomous weapons and mass surveillance. Then the New York Times reported what had actually been happening behind the scenes.

Mr. Altman engaged in talks with the Pentagon, starting on Wednesday, over a deal for its technology... agreeing to the use of OpenAI's technology for all lawful purposes.

Romero threads the needle carefully here, noting the confusion over whether OpenAI's deal actually contains the same restrictions Anthropic was blacklisted for demanding. His assessment is characteristically blunt.

Altman framed it as such because that's what he does for a living, in and out of OpenAI: play with words.

This is the kind of observation that benefits from Romero's European distance. He has no institutional loyalty to either company, and it shows. The portrait of Altman that emerges is not of a villain exactly, but of someone whose public statements and private actions occupy different time zones.

Why Romero Thinks Nothing Changes

The contrarian heart of the piece is Act Two, where Romero argues the immediate consequences are a rounding error. His case is financial first: Anthropic has a run-rate revenue of $14 billion; the DoD contract was $200 million. The math speaks for itself.

He is equally skeptical of the QuitGPT boycott working in Anthropic's favor.

If two million users download Claude to attack ChatGPT and use it casually without paying, that's a toll on Anthropic rather than a blessing. Anthropic was happy to have a high percentage of high-paying consumers in an overall much smaller user base.

This is a sharp insight that most coverage missed. The viral sympathy wave might actually cost Anthropic money if it floods their infrastructure with free-tier users who never convert. Romero understands that in the AI business, not all users are created equal.

There is a counterpoint worth raising, though. Romero may underestimate the chilling effect on other government contracts, both domestic and international. The $200 million is indeed negligible, but the signal sent to every other government considering an Anthropic deal is not. Allied nations watching the supply chain risk designation might hesitate, and that second-order effect could dwarf the direct revenue loss.

From Techne to Politeia

Act Three is where the essay earns its title. Romero argues that the real update is not about who won or lost, but about a permanent shift in the nature of AI development.

AI is no longer about the art of making models or nerds honing their tuning craft, but about the arena of political and geopolitical power. We're entering the Project Manhattan era of AI.

The Manhattan Project analogy is apt and unsettling. Just as nuclear physics went from a matter of pure science to a matter of state power in the early 1940s, AI is undergoing a similar transformation. The question is no longer who can build the best model. It is who gets to decide what the best model does.

The entire AI forecasting apparatus -- scaling laws, capability timelines, benchmark extrapolations, and so on -- operates as if AI progress is a physics problem with logistical and financial constraints. Welcome to the real world.

Romero is telling the technical forecasting community that their models are incomplete. Scaling laws do not account for a president posting on Truth Social at 3:47 PM on a Friday. Capability timelines do not factor in a defense secretary with a grudge. This is not a bug in the forecasting methodology. It is a category error -- treating a political phenomenon as a purely technical one.

One might push back slightly on the permanence Romero claims for this shift. Government interest in controlling powerful technologies is cyclical, not monotonic. The encryption wars of the 1990s saw similar government overreach, followed by decades of relative restraint. The Manhattan Project analogy cuts both ways: nuclear technology did eventually develop civilian applications outside direct government control. Whether AI follows the nuclear path or the encryption path matters enormously, and Romero does not explore the distinction.

The View From Across the Atlantic

Romero's European vantage point is the piece's quiet strength. He does not have a dog in the fight between San Francisco companies, and he is not subject to the gravitational pull of American political tribalism that warps so much domestic AI commentary. When he writes that the drama will be "a speck of dust in an ocean of progress," it reads as genuine perspective rather than contrarianism for its own sake.

Now is when technical people realize that human will was always greater than any scaling law.

That closing line has the ring of something people will quote in retrospect. Whether they quote it approvingly or ironically depends on what happens next.

Bottom Line

Romero has written the most clear-eyed analysis of the Anthropic-Pentagon crisis yet published. The three-act structure keeps a sprawling, fast-moving story disciplined. His central thesis -- that AI has crossed from technology into statecraft -- is both correct and underappreciated. The financial analysis is sharp, the political analysis is sharper, and the European perspective provides the detachment that American commentators, caught up in tribal loyalties, cannot manage. The piece slightly undersells the second-order effects of the supply chain designation and does not fully reckon with historical precedents for government-technology standoffs. But those are quibbles with an otherwise essential read for anyone trying to understand where AI governance is actually headed, as opposed to where the benchmarks say it should be.

Deep Dives

Explore these related deep dives:

  • Superintelligence by Nick Bostrom

    The canonical argument for why advanced AI poses existential risk.

  • The Age of AI by Henry Kissinger

    How artificial intelligence will reshape security, politics, and human identity.

  • Anthropic

    The AI company at the center of the controversy with the Pentagon, discussed in the article as having signed a $200M contract with the Department of Defense.

  • Pete Hegseth

    The Defense Secretary who issued the AI strategy memo and set a deadline for Anthropic to agree to "all lawful purposes" or face consequences.

Sources

AI just entered its Manhattan project era

As promised, I bring you my take on what’s going on between Anthropic, OpenAI, and the Department of Defense. Plenty of people have written about the specific details of the deals and the current situation, so I’ve chosen a different focus. As a European watching the events unfold from the other side of the pond, I hope to compensate for what I lose in distance with what I gain in perspective. I have divided this post into three acts.

Act 1: What happened so far. A quick timeline of all the relevant events. Go over to act 2 if you already know all this.

Act 2: What I think will happen next in the short term (e.g., what will be of Anthropic as a company), and why the outcome doesn’t matter much.

Act 3: What happens when AI is no longer just a technological matter.

ACT 1: WHAT HAPPENED SO FAR.

Here’s a quick timeline of the relevant events.

July 2025. Anthropic signs a $200 million contract with the Pentagon and becomes the first AI lab deployed on the Department of Defense’s classified network. The contract includes two restrictions: Anthropic’s AI cannot be used for mass surveillance of American citizens or for fully autonomous weapons that select and engage targets without human oversight.

January 2026. Defense Secretary Pete Hegseth issues an AI strategy memo directing that all Department of Defense AI contracts incorporate standard “any lawful use” language within 180 days, a direct collision with Anthropic’s restrictions. Negotiations begin.

February 2026.

Monday, 16. Axios reports the Pentagon is considering designating Anthropic a “supply chain risk to national security”—a classification previously reserved for foreign adversaries like Huawei—and invoking the Defense Production Act (DPA) if Anthropic doesn’t remove its restrictions. A senior defense official says they’re “going to make sure they pay a price.”

Tuesday, 24. Hegseth and Dario Amodei meet. CNN says the “tone was cordial and respectful” and that Hegseth “praised Anthropic’s products.” Axios says the meeting was “tense.” Whatever the case, Amodei reiterated his red lines and Hegseth set a deadline: 5:01 PM ET Friday 27 to agree to “all lawful purposes” or the contract gets canceled and Anthropic gets designated a supply chain risk.

Thursday 26. Amodei publishes a statement: “We cannot in good conscience accede to their request.” Emil Michael, the Pentagon’s Undersecretary for Research and Engineering, publicly calls Amodei “a liar” with “a God-complex.” More than 600 ...