The "All Lawful Use" Loophole
In February 2026, Secretary of War Pete Hegseth designated Anthropic a "supply chain risk" -- the first time that label had ever been applied to a domestic American company. The trigger was Anthropic's refusal to grant the Department of War unrestricted access to its AI models for mass surveillance and autonomous weapons. Hours later, OpenAI stepped into the vacuum. Scott Alexander, writing on Astral Codex Ten with a team of anonymous national security researchers, dissects the legal architecture underlying this deal and finds it disturbingly hollow.
The post is long, careful, and structured like a legal brief. It is not the typical Alexander essay -- no thought experiments, no detours into evolutionary psychology. This is lawyering. And the central finding is that the phrase "all lawful use," which anchors OpenAI's contract with the Department of War, provides almost no meaningful constraint on what the government can do with these models.
Mass Surveillance Is Already Legal
The most unsettling section of Alexander's analysis concerns domestic surveillance. The common assumption -- that American law robustly prohibits the government from monitoring its own citizens at scale -- turns out to be largely wrong. The law draws a distinction between "collecting" data and merely "gathering" it, a distinction that ordinary language would not recognize.
As the post puts it: "The government reserves the term 'mass domestic surveillance' for the thing they don't do (querying their databases en masse), preferring terms like 'gathering' for what they do do (creating the databases en masse)."
This semantic game has real consequences. James Clapper, then Director of National Intelligence, denied under oath in 2013 that the NSA collects data on millions of Americans. By the government's own definitions, he was telling the truth. By any plain reading, he was lying.
The deeper problem is third-party data. The government can simply buy information from Facebook, cell phone carriers, or data brokers, and once purchased, it can analyze that data without a warrant. The Supreme Court's 2018 carve-out for cell phone location data, Carpenter v. United States, is narrow. Everything else is fair game.
Until now, the practical barrier to exploiting all that purchased data has been the cost of analyzing it: no agency can hire enough human analysts to read everything. AI removes that barrier. In the post's words: "An AI could perform meaningful search of all messages in a large database, piecing together patterns to, for example, give each citizen a 'presumed loyalty' score."
That sentence should stop any reader cold. Alexander is not speculating about science fiction. He is describing a capability that exists today and that current law does nothing to prevent.
Autonomous Weapons and the Flexibility Problem
On autonomous weapons, Alexander and his co-authors find that Congressional law is essentially silent. The only regulations specific to autonomous weapons come from Department of War policy -- specifically DoD Directive 3000.09. That directive requires "appropriate levels of human judgment over the use of force." The word "appropriate" does the heavy lifting, and it lifts nothing at all.
The directive never defines "appropriate." The US government has stated that it "is a flexible term" and that what qualifies "can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system."
A standard that can mean anything means nothing. And because the Department of War sets its own policies, any contract that merely promises compliance with existing policy gives the DoW unilateral power to rewrite the rules whenever convenient.
Alexander acknowledges the legitimate case for autonomous weapons. Missile defense systems already operate autonomously, and the battlefield reality in Ukraine demonstrates the military value of AI-driven systems. But he draws a sharp distinction between narrow autonomy in well-defined scenarios and the wholesale replacement of human judgment in the kill chain.
Alexander puts the stakes plainly: "Human soldiers are a check on the worst abuses of authoritarians. Sometimes a strongman will give an illegal order -- to shoot at protesters, to initiate an auto-coup, to begin a genocide -- and soldiers will say no."
This is the most important paragraph in the piece. The argument for keeping humans in the loop is not primarily about accuracy or reliability. It is about political accountability. A robotic force that automatically obeys orders removes the last institutional check on authoritarian overreach.
OpenAI's Contract Under Scrutiny
The post's most technically detailed section examines OpenAI's public FAQ about the deal. Alexander's team consults with an unnamed national security law expert and finds the FAQ misleading on nearly every point.
OpenAI claims its "cloud-only deployment" prevents autonomous weapons use. Alexander points out that this does not follow: "Autonomous weapons can be steered by an AI in the cloud, just like a human can steer a drone remotely. OpenAI models do not need to be edge deployed in order to power a fully autonomous weapon."
OpenAI's Head of National Security Partnerships has stated that "all lawful use" was intended to mean the law as it existed when the contract was signed. Alexander notes this is not how contract law typically works. Brad Carson, former general counsel of the Army and former undersecretary of Defense, agrees -- the contract language would not freeze federal law in place absent explicit provisions saying so.
The post quotes the contract's opening clause: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols." The authors' understanding is that "later clauses do not automatically override this first clause."
One area where the analysis could be stronger: the post acknowledges that OpenAI may have technical safeguards -- essentially, a safety stack that would block certain uses regardless of what the contract permits. Boaz Barak, a computer scientist, has suggested this is the real enforcement mechanism. But Alexander's team cannot evaluate what they have not seen, and OpenAI has not made these safeguards public. If technical controls are the actual linchpin, it is strange that OpenAI emphasizes legal arguments instead.
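To make concrete what a "safety stack" could mean here, the sketch below shows one possible shape for a pre-inference policy gate: a classifier labels each incoming request, and requests that fall into restricted categories are refused before they ever reach the model. Nothing about OpenAI's actual safeguards is public, so every detail in this sketch (the category names, the classify_request placeholder, the keyword check) is a hypothetical stand-in rather than a description of any deployed system.

```python
# Hypothetical sketch of a pre-inference policy gate, one possible piece of a
# "safety stack." Every category name and rule here is illustrative only.

from dataclasses import dataclass


@dataclass
class PolicyDecision:
    allowed: bool   # whether the request may proceed to the model
    category: str   # predicted request category
    reason: str     # human-readable explanation for audit logs


# Categories this hypothetical deployment refuses to serve.
BLOCKED_CATEGORIES = {"mass_surveillance", "autonomous_targeting"}


def classify_request(prompt: str) -> str:
    """Stand-in for a learned request classifier.

    A real gate would use a trained moderation model; this placeholder does a
    crude keyword check so the sketch runs end to end.
    """
    lowered = prompt.lower()
    if "loyalty score" in lowered or "every resident" in lowered:
        return "mass_surveillance"
    if "select and engage targets" in lowered:
        return "autonomous_targeting"
    return "general"


def policy_gate(prompt: str) -> PolicyDecision:
    """Refuse requests whose predicted category is in the blocked set."""
    category = classify_request(prompt)
    if category in BLOCKED_CATEGORIES:
        return PolicyDecision(False, category, "category is restricted for this deployment")
    return PolicyDecision(True, category, "no restriction triggered")


if __name__ == "__main__":
    print(policy_gate("Summarize this logistics report."))
    print(policy_gate("Assign a loyalty score to every resident in this dataset."))
```

The design point the sketch makes is that such a gate is code and configuration under the operator's control: the blocked set can be edited at any time, which is part of why unpublished technical safeguards are hard to treat as a binding constraint.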
The Anthropic Contrast
Running beneath the legal analysis is a simpler story about corporate courage. Anthropic refused the deal. Anthropic's CEO Dario Amodei published a letter explaining his concerns about third-party data analysis and the absence of meaningful restrictions. The government responded by branding his company a supply chain risk -- an extraordinary act of retaliation against a private firm for declining a contract.
OpenAI accepted the deal within hours. Sam Altman claimed he had secured equivalent safeguards. Alexander's team spent thousands of words demonstrating that he almost certainly had not.
A fair counterpoint: Anthropic's refusal may have been strategically rational rather than purely principled. By declining, Anthropic positioned itself as the responsible actor in the AI safety narrative, which is central to its brand and fundraising story. The moral calculus is not as clean as it first appears. Still, whatever the motivation, Anthropic's position produced a concrete outcome -- it forced a public reckoning with what "all lawful use" actually means.
What Gets Lost in Legalese
Alexander closes with a list of pointed questions for journalists, lawmakers, and OpenAI employees. The questions are well-chosen and reveal the gaps in public knowledge: Does the contract exclude the NSA? Who arbitrates disputes about lawfulness? What happens if the DoW demands the safety stack be weakened?
The authors add a caution about whatever answers come back: "Given that existing statements haven't always been clear and Anthropic has alleged that the contract contains 'legalese that would allow those safeguards to be disregarded at will', we encourage you to read any responses you receive with a skeptical mindset."
The piece is at its best when it translates legal jargon into plain consequences. The distinction between "gathering" and "collecting," the vagueness of "appropriate" human judgment, the meaninglessness of referencing policies that can be unilaterally changed -- these are the details that matter, and Alexander surfaces them with precision.
Where the analysis is weaker is on the question of alternatives. If every AI company refused the Department of War, the government would simply build its own models or turn to less scrupulous vendors. The piece implicitly assumes that corporate refusal is a viable check on state power, but the history of government procurement suggests otherwise. The Pentagon has never lacked for willing contractors.
Bottom Line
Alexander and his anonymous co-authors have produced the most thorough public analysis of the OpenAI-Department of War contract to date. Their central conclusion is damning: the phrase "all lawful use" is not a safeguard but an invitation. Current law permits mass analysis of purchased citizen data, autonomous weapons with undefined human oversight, and surveillance programs that exploit the gap between legal definitions and common understanding. OpenAI's reassurances rest on contract language that national security lawyers find unpersuasive and on technical safeguards that remain invisible to the public. The piece does not argue that AI should never be used for national security. It argues that the current deal has no meaningful guardrails -- and that the public deserves to know what has been agreed to in its name.