"All lawful use": Much more than you wanted to know

The "All Lawful Use" Loophole

In February 2026, Secretary of War Pete Hegseth designated Anthropic a "supply chain risk" -- the first time that label had ever been applied to an American company. The trigger was Anthropic's refusal to grant the Department of War unrestricted access to its AI models for mass surveillance and autonomous weapons. Hours later, OpenAI stepped into the vacuum. Scott Alexander, writing on Astral Codex Ten with a team of anonymous national security researchers, dissects the legal architecture underlying the deal and finds it disturbingly hollow.

The post is long, careful, and structured like a legal brief. It is not the typical Alexander essay -- no thought experiments, no detours into evolutionary psychology. This is lawyering. And the central finding is that the phrase "all lawful use," which anchors OpenAI's contract with the Department of War, provides almost no meaningful constraint on what the government can do with these models.

Mass Surveillance Is Already Legal

The most unsettling section of Alexander's analysis concerns domestic surveillance. The common assumption -- that American law robustly prohibits the government from monitoring its own citizens at scale -- turns out to be largely wrong. The law distinguishes "collecting" data, which in the government's lexicon means querying it, from merely "gathering" it, which means acquiring and storing it en masse -- a distinction that ordinary language would not recognize.

The government reserves the term "mass domestic surveillance" for the thing they don't do (querying their databases en masse), preferring terms like "gathering" for what they do do (creating the databases en masse).

This semantic game has real consequences. A Director of National Intelligence once denied under oath that the NSA collects data on millions of Americans. By the government's own definitions, he was telling the truth. By any plain reading, he was lying.

The deeper problem is third-party data. The government can simply buy information from Facebook, cell phone carriers, or data brokers, and once purchased, it can analyze that data without a warrant. The Supreme Court's 2018 carve-out for cell phone location data (Carpenter v. United States) is narrow. Everything else is fair game.

AI solves these scale and cost problems. An AI could perform meaningful search of all messages in a large database, piecing together patterns to, for example, give each citizen a "presumed loyalty" score.

That sentence should stop any reader cold. Alexander is not speculating about science fiction. He is describing a capability that exists today and that current law does nothing to prevent.

Autonomous Weapons and the Flexibility Problem

On autonomous weapons, Alexander and his co-authors find that Congressional law is essentially silent. The only regulations specific to autonomous weapons come from Department of War policy -- specifically DoD Directive 3000.09. That directive requires "appropriate levels of human judgment over the use of force." The word "appropriate" does the heavy lifting, and it lifts nothing at all.

It doesn't define "appropriate", and the US government has stated it "is a flexible term" where what qualifies "can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system."

A standard that can mean anything means nothing. And because the Department of War sets its own policies, any contract that merely promises compliance with existing policy gives the DoW unilateral power to rewrite the rules whenever convenient.

Alexander acknowledges the legitimate case for autonomous weapons. Missile defense systems already operate autonomously, and the battlefield reality in Ukraine demonstrates the military value of AI-driven systems. But he draws a sharp distinction between narrow autonomy in well-defined scenarios and the wholesale replacement of human judgment in the kill chain.

Human soldiers are a check on the worst abuses of authoritarians. Sometimes a strongman will give an illegal order -- to shoot at protesters, to initiate an auto-coup, to begin a genocide -- and soldiers will say no.

This is the most important paragraph in the piece. The argument for keeping humans in the loop is not primarily about accuracy or reliability. It is about political accountability. A robotic force that automatically obeys orders removes the last institutional check on authoritarian overreach.

OpenAI's Contract Under Scrutiny

The post's most technically detailed section examines OpenAI's public FAQ about the deal. Alexander's team consults with an unnamed national security law expert and finds the FAQ misleading on nearly every point.

OpenAI claims its "cloud-only deployment" prevents autonomous weapons use. Alexander's rebuttal is direct:

Autonomous weapons can be steered by an AI in the cloud, just like a human can steer a drone remotely. OpenAI models do not need to be edge deployed in order to power a fully autonomous weapon.

OpenAI's Head of National Security Partnerships has stated that "all lawful use" was intended to mean the law as it existed when the contract was signed. Alexander notes this is not how contract law typically works. Brad Carson, former general counsel of the Army and former undersecretary of Defense, agrees -- the contract language would not freeze federal law in place absent explicit provisions saying so.

The first clause says "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols." Our understanding is that later clauses do not automatically override this first clause.

One area where the analysis could be stronger: the post acknowledges that OpenAI may have technical safeguards -- essentially, a safety stack that would block certain uses regardless of what the contract permits. Boaz Barak, a computer scientist, has suggested this is the real enforcement mechanism. But Alexander's team cannot evaluate what they have not seen, and OpenAI has not made these safeguards public. If technical controls are the actual linchpin, it is strange that OpenAI emphasizes legal arguments instead.

The Anthropic Contrast

Running beneath the legal analysis is a simpler story about corporate courage. Anthropic refused the deal. Anthropic's CEO Dario Amodei published a letter explaining his concerns about third-party data analysis and the absence of meaningful restrictions. The government responded by branding his company a supply chain risk -- an extraordinary act of retaliation against a private firm for declining a contract.

OpenAI accepted the deal within hours. Sam Altman claimed he had secured equivalent safeguards. Alexander's team spent thousands of words demonstrating that he almost certainly had not.

A fair counterpoint: Anthropic's refusal may have been strategically rational rather than purely principled. By declining, Anthropic positioned itself as the responsible actor in the AI safety narrative, which is central to its brand and fundraising story. The moral calculus is not as clean as it first appears. Still, whatever the motivation, Anthropic's position produced a concrete outcome -- it forced a public reckoning with what "all lawful use" actually means.

What Gets Lost in Legalese

Alexander closes with a list of pointed questions for journalists, lawmakers, and OpenAI employees. The questions are well-chosen and reveal the gaps in public knowledge: Does the contract exclude the NSA? Who arbitrates disputes about lawfulness? What happens if the DoW demands the safety stack be weakened?

Given that existing statements haven't always been clear and Anthropic has alleged that the contract contains "legalese that would allow those safeguards to be disregarded at will", we encourage you to read any responses you receive with a skeptical mindset.

The piece is at its best when it translates legal jargon into plain consequences. The distinction between "gathering" and "collecting," the vagueness of "appropriate" human judgment, the meaninglessness of referencing policies that can be unilaterally changed -- these are the details that matter, and Alexander surfaces them with precision.

Where the analysis is weaker is on the question of alternatives. If every AI company refused the Department of War, the government would simply build its own models or turn to less scrupulous vendors. The piece implicitly assumes that corporate refusal is a viable check on state power, but the history of government procurement suggests otherwise. The Pentagon has never lacked for willing contractors.

Bottom Line

Alexander and his anonymous co-authors have produced the most thorough public analysis of the OpenAI-Department of War contract to date. Their central conclusion is damning: the phrase "all lawful use" is not a safeguard but an invitation. Current law permits mass analysis of purchased citizen data, autonomous weapons with undefined human oversight, and surveillance programs that exploit the gap between legal definitions and common understanding. OpenAI's reassurances rest on contract language that national security lawyers find unpersuasive and on technical safeguards that remain invisible to the public. The piece does not argue that AI should never be used for national security. It argues that the current deal has no meaningful guardrails -- and that the public deserves to know what has been agreed to in its name.

Sources

"All lawful use": Much more than you wanted to know

by Scott Alexander · Astral Codex Ten

Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a “supply chain risk”, the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic’s refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons.

A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI’s models to be used in the niche vacated by Anthropic. Altman stated that he had received guarantees that OpenAI’s models wouldn’t be used for mass surveillance or autonomous weapons either, but given Hegseth’s unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman’s contract must be weaker or, in a worst-case scenario, completely toothless.

The debate centers on the Department of War’s demand that AIs be permitted for “all lawful use”. Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won’t, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman’s initial statement seemed to suggest additional prohibitions, but on a closer read, provides little tangible evidence of meaningful further restrictions.

Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI’s national security lead said that “we intended [the phrase ‘all lawful use’] to mean [according to the law] at the time the contract is signed”, this is not how contract law usually works, and not how the provision is likely to be enforced. Therefore, these guarantees are not helpful.

[EDIT: To clarify: The DoW can change their own policies at will, but can’t change laws. In addition to OpenAI’s claim of being robust to changing laws, OpenAI argues that they’re protected against changes to DoW policies because they explicitly reference the relevant policies as they exist today. Based on public information, this argument seems dubious. See ‘Comments on OpenAI’s FAQ’ below.]

To learn more about the details, let’s look at ...