
Anthropic v DoW

An AI Company Tells the Pentagon the Truth

In late February 2025, Secretary of Defense Pete Hegseth set a 5:01 PM deadline for Anthropic to comply with Pentagon demands for unrestricted access to its Claude AI model. The standoff, covered on ChinaTalk by Jordan Schneider, Eric Robinson, Tony Stark, and Justin Mc, revealed far more about the state of American civil-military relations than about artificial intelligence itself. What began as a contract dispute escalated into a test case for whether the defense establishment could compel a private company to ignore its own engineering assessments.

The panel situates Anthropic's rise as both improbable and inconvenient for the new administration. Robinson describes the company's trajectory in blunt terms.

If you had asked about it maybe nine months or a year ago, I don't think it would necessarily be spoken of in the same sentence as OpenAI or DeepSeek, but they have been on a breakout run -- primarily because Claude has demonstrably shifted the way people interact with AI-enabled coding.

That breakout run put Anthropic at the center of Pentagon operations precisely when its CEO, Dario Amodei, had failed to ingratiate himself with the incoming Trump administration. Schneider notes that Amodei tried to bridge the gap through 1789 Capital, Donald Trump Jr.'s venture fund, and was rebuffed. The knives came out.


Technology Readiness, Not Pacifism

The most important distinction the panel draws is between Anthropic's actual position and the caricature circulating in the media. Amodei's public statement did not categorically oppose military applications. It said frontier AI models are not reliable enough for fully autonomous weapons -- a technical assessment, not a moral one.

Even fully autonomous weapons may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk.

Robinson translates this into acquisition language that Pentagon officials would immediately recognize.

He's saying his product is TRL 5. In acquisition speak, he's giving a fair assessment.

TRL 5 -- Technology Readiness Level 5 -- means a technology has been validated in a relevant environment but is not yet qualified for operational deployment. It is the kind of honest assessment defense contractors rarely volunteer. Mc frames this as the real source of friction: a defense contractor telling the truth is culturally disruptive in an environment where competitors routinely oversell capabilities.

When the leader -- very clearly in some key categories the leader of AI development in the US -- is saying "this stuff is not ready for what we're saying," that's a cultural push that is different than what the DoD has been encountering.

Schneider offers a literary analogy: a reverse of Arthur Miller's All My Sons, where the manufacturer is warning that its product cannot safely exceed specifications, and the buyer insists on pushing past them anyway.

The Domestic Surveillance Question

Stark identifies the second sticking point as potentially more troubling than the weapons debate. Anthropic's terms also prohibit domestic surveillance applications -- a restriction that even sympathetic voices on Capitol Hill found unusual for a private company to impose unilaterally.

The panel's discussion of existing legal guardrails is illuminating. Robinson recounts his own experience as an analyst at the National Counterterrorism Center, where he once queried raw FISA-collected data on U.S. persons and received a call from the Department of Justice within hours.

I got a call from the Department of Justice. They said, "Hey, we noticed you ran these queries, and we're going to talk to you about it because this is unusual behavior."

That system of oversight -- where a low-level analyst's queries triggered immediate legal review -- represents the architecture Anthropic fears is no longer functioning. Robinson connects the dots explicitly: the Pentagon's Office of General Counsel is not currently performing its adversarial review function. It has repositioned itself as a personal law firm for the Secretary and Deputy Secretary. Without that institutional check, a company selling AI tools to the Department of Defense has no assurance that legal boundaries will be respected.

One might counter that Anthropic's position is commercially convenient -- wrapping business caution in the language of civil liberties generates favorable press coverage regardless of the underlying motive. But the panel does not treat this as mere positioning. The concern about a hollowed-out OGC is shared across the defense establishment.

The Defense Production Act as Cudgel

The conversation turns darker when the panel examines the Pentagon's available leverage. The Defense Production Act, a Korean War-era statute, has become what Robinson calls "God in a box" for the current Pentagon leadership -- a tool broad enough to justify nearly any intervention in the private economy.

Mc draws a sharp distinction between two threats the Pentagon has made simultaneously. Invoking the DPA would mean demanding unfettered access to Anthropic's products. Labeling Anthropic a supply chain risk would mean nobody could use those products, effectively destroying the company. The department has floated both at once, which Mc notes is incoherent: you cannot demand unrestricted access to a product while declaring it too dangerous for anyone to use.

Schneider captures the transactional nature of the relationship with a quote from Axios reporting.

The only reason they're giving Dario the time of day is because he has the best model.

The irony is not lost on the panel. The administration was simultaneously declining to act against Chinese AI companies like DeepSeek -- which had reportedly trained on Nvidia Blackwell chips in apparent violation of export controls -- while threatening to blacklist the leading American AI company. Schneider points out that Amodei has more China hawk credentials than Hegseth, having publicly advocated for stricter export controls and banned Chinese users from Anthropic's platform.

Military-Civil Fusion, American Edition

Stark raises the comparison that several observers had already drawn: this approach mirrors China's military-civil fusion doctrine, where private companies exist at the pleasure of the state. The panel treats this parallel as more than rhetorical.

By doing this, we are mirroring what the PRC does to its companies -- putting the boot on the neck and saying "you will do what we say or you're not going to have business in the United States."

Stark argues the model is self-defeating. Coerced innovation produces compliance, not breakthroughs. Mc extends the logic: if a single large corporation acquiring a startup is enough to damage an innovation ecosystem, destroying the market leader because it refused to capitulate would be catastrophic.

It is worth noting, however, that the military-civil fusion comparison has limits. The Chinese model involves state ownership stakes, party cells embedded in corporate governance, and legal obligations to share data with intelligence services. The American version, however aggressive, still operates through threats rather than structural control. The difference matters, even if the trajectory is concerning.

The Broader Collapse of Institutional Guardrails

Robinson places the Anthropic standoff within a pattern that extends far beyond AI policy. Defense primes have been told to absorb tariff costs without legal review. Targeted killings in the Caribbean nearly produced a catastrophic military incident. The Office of General Counsel has been neutered. The thread connecting these episodes is what Robinson characterizes as an ethos of coercion directed at anyone who pushes back.

What we're seeing with Anthropic or the targeted killings in the Caribbean -- it's all part of the same ethos of "eat shit, you're not on the team."

The episode's final twist arrives in real time. While the panel is recording, The Wall Street Journal reports that Sam Altman has convened an all-hands meeting to broker a truce between Anthropic and the Pentagon. Emil Michael, the Undersecretary of Defense for Research and Engineering, issues a statement that mass surveillance is unlawful under the Fourth Amendment -- precisely the assurance Anthropic had been requesting.

Stark reads this as Congress and possibly the White House intervening to prevent the standoff from spiraling further. The defense tech discourse had already begun shifting toward "Hegseth's Pentagon wants robots with no guidelines," which Stark warns could trigger a political backlash that damages legitimate military AI development for years.

Bottom Line

The ChinaTalk panel -- composed of a former intelligence analyst, a defense tech commentator, and two national security practitioners -- arrives at a conclusion that should unsettle both sides of the debate. Anthropic's position is not anti-military; it is an engineering assessment delivered in a political environment that treats honest assessment as disloyalty. The Pentagon's response is not a coherent AI strategy; it is a power play by a Secretary of Defense operating without functioning legal oversight, alienating the company whose technology it most needs.

The standoff exposed a defense establishment where the Office of General Counsel no longer performs adversarial review, where the Defense Production Act has become a blunt instrument of coercion, and where the leading American AI company is threatened with destruction while Chinese competitors face no consequences. Whether Altman's eleventh-hour diplomacy resolves the immediate crisis matters less than what the episode revealed about the institutional decay underneath it.


Sources

Anthropic v DoW

by Jordan Schneider · ChinaTalk

Eight hours to the deadline. We break down the standoff, then get into the Cuba boat raid, Iran, and four years of war in Ukraine.

Jordan Schneider, Eric Robinson, Tony Stark, and Justin Mc

Today we cover…

The Anthropic-Pentagon showdown: what Hegseth actually wants, the Maduro raid Claude controversy, and why Dario’s position is more nuanced than “no kill bots”

Domestic surveillance: FISA, NSA, and Eric’s story about getting a call from the Department of Justice

The Defense Production Act as a magic button — and why Congress is starting to push back

Military-civil fusion, American style: are we becoming the thing we critique?

Florida Man tries to invade Cuba with 10 guys on a 24-foot boat

Iran: the naval strain, Witkoff and Kushner as our top negotiators, and the near-miss in Venezuela

Ukraine at year four: European rearmament, the shadow fleet, and whether the 5% NATO target is designed to humiliate

The Secretary of Defense problem: from Lloyd Austin going missing to Pete Hegseth’s Make-A-Wish Foundation

Listen now on iTunes or Spotify.

Claude Goes to War.

Jordan Schneider: So I had Claude Code build me the Claude of War — a responsible approach to killing people. At least it has a sense of humor about it!

Happy Friday, February 27th. We are now eight hours and counting from the 5:01 deadline that Pete Hegseth set. Eric, take us away.

Eric Robinson: So why are we talking about Anthropic? It is one of maybe a half dozen industry leaders in generative AI and large language modeling. If you had asked about it maybe nine months or a year ago, I don’t think it would necessarily be spoken of in the same sentence as OpenAI or DeepSeek, but they have been on a breakout run — primarily because Claude has demonstrably shifted the way people interact with AI-enabled coding.

The tension at the moment is that Anthropic has, for reasons that remain unclear, caught the hostile attention of the Secretary of Defense. It does seem to be almost a personal mission that Pete Hegseth has taken on.

Jordan Schneider: We’ve got a few dynamics going on, and I think we should start with the inauguration, where you had Sam Altman and the rest of the tech CEO elite all there with big smiles. Greg Brockman donating $25 million to the Trump super PAC. And then Dario kind of on the sidelines ...