The Pentagon Wants Its AI Unleashed
Defense Secretary Pete Hegseth has given Anthropic, the maker of the AI system Claude, a deadline to drop two contractual restrictions: a prohibition on mass surveillance of Americans and a prohibition on autonomous weapons systems that can identify, track, and kill targets without direct human oversight. The ultimatum, reported by Andrew Egger at The Bulwark, amounts to one of the most aggressive government moves against a private AI company to date.
The stakes are not abstract. Claude is already embedded in classified Defense Department operations. Egger reports it was "reportedly involved in the operation to capture Nicolas Maduro." Anthropic is the only AI lab contracted for classified use, and Hegseth does not want to start over.
Rather than simply walking away and signing with a more compliant AI lab, Hegseth is wielding two contradictory threats. He could invoke the Defense Production Act to force Anthropic to comply, or he could declare Anthropic a "supply chain risk," which would blacklist the company from the entire defense ecosystem. That both options are on the table -- making it illegal for Anthropic not to work with the Pentagon, or illegal for it to work with anyone the Pentagon works with -- makes the nature of the play clear: this is a pure squeeze.
What the AI Policy World Thinks
Egger spoke with Dean Ball, a senior fellow at the Foundation for American Innovation who previously held a senior AI policy role in the Trump White House. Ball is no progressive critic. He helped develop the administration's own AI Action Plan. And he is alarmed.
"I will say this in no uncertain terms, bipartisan, regardless of administration. This would be one of the worst things for the American business climate I have ever seen the government do."
That assessment carries weight precisely because of its source. Ball acknowledged that both sides had reasonable starting positions: the government wants control of its military tools, and Anthropic does not want to participate in certain use cases. A clean breakup would make sense. But the coercive ultimatum transforms a contract dispute into something far more ominous for every technology company that does business with the federal government.
Egger notes the White House offered no defense of the move's coherence with its broader AI strategy, which has "placed a huge priority on unleashing U.S. AI capabilities as part of a global AI arms race against China." The White House referred him to the Pentagon. The Pentagon did not respond.
The Legislative Vacuum
Congressional response has been thin. Representative Zoe Lofgren, ranking member of the House Committee on Science, Space, and Technology, was one of the few Democrats willing to speak up.
"Anthropic is trying to do the right thing and put their own guardrails in place in the absence of legislation. It should go without saying that AI technology should not be making potentially lethal decisions without human involvement. I fear what America will become if the DoD is given this unrestricted power."
Lofgren's phrase "in the absence of legislation" lands hard. The question of when and whether AI systems should be authorized to kill without a human in the loop is among the most consequential policy decisions of the coming decade, and a matter of public policy if ever there was one. It is nonetheless being hashed out through contract negotiations and executive bullying, as back-room handshakes between the military and its AI contractors, rather than through any democratic process. It is to Anthropic's credit that the company is digging in its heels; it is an indictment of Congress that it has to.
Egger is right that Congress has abdicated here. But it is worth noting that the legal landscape is murkier than his framing might suggest. DoD Directive 3000.09 already permits lethal autonomous weapon systems under certain conditions, requiring only that humans exercise "appropriate levels of human judgment" -- a phrase elastic enough to cover nearly anything. As a Congressional Research Service paper noted, this means humans must be involved in decisions about deployment, but the system can then be "let off the leash" during operations. Anthropic's contractual restrictions were, in a sense, stricter than existing Pentagon policy.
A Chilling Precedent for Business
The broader implications extend well beyond AI and defense. If the government can use the Defense Production Act to compel a company to remove its own ethical guardrails, or alternatively destroy that company's market position by branding it a security risk, the message to every tech firm is clear: cooperate fully or face ruin.
One counterpoint deserves mention. The Pentagon's argument that it should control how it uses tools it pays for is not inherently unreasonable. Defense contractors have never traditionally dictated the terms of military operations. Anthropic's position, however principled, does represent an unusual assertion of private-sector veto power over government military capabilities. The question is whether AI weapons are so categorically different that such a veto is warranted -- and most arms control experts would say yes.
Hegseth could simply drop Anthropic's contract over this and pivot to any of the other AI labs -- OpenAI, Google, Elon Musk's xAI -- that aren't insisting on these contractual sticking points. But he doesn't really want to.
That reluctance is revealing. If Claude were easily replaceable, none of these threats would be necessary. The coercion is itself an admission of Anthropic's leverage -- and of how deeply AI is already woven into military infrastructure.
To the Defense Department, the idea that a contractor could tie the military's hands like this is outlandish; the Pentagon argues it should be permitted to use the AI it contracts for "for all lawful purposes."
Bottom Line
Egger's reporting exposes a standoff that should concern anyone who cares about democratic governance, corporate autonomy, or the future of warfare. The Pentagon wants AI that kills without human oversight. One company is saying no. And the government's response is not to find another vendor or to let Congress weigh in, but to threaten destruction. Whether Anthropic holds the line or folds, the precedent being set -- that the executive branch can coerce private companies into removing their own safety restrictions on lethal technology -- will outlast this particular fight. Congress has the authority to settle this question. Its silence is the most damning detail in the entire story.