The Pentagon just declared Anthropic a supply chain risk. But Dwarkesh Patel argues this isn't really about AI companies at all — it's the first glimpse of a future where every soldier, bureaucrat, and general will be an AI, and those AIs might tell the government no.
Patel makes a provocative point: within 20 years, 99 percent of military, civilian-government, and private-sector work will be done by artificial intelligence. Robot armies will constitute our military. Superhumanly intelligent advisers will serve senators, presidents, and CEOs. The police will be AI. Every role will be filled by machine intelligence.
The Anthropic episode merely exposed how much leverage the government actually has over private companies. Even if the supply chain restriction gets reversed (prediction markets give it a 74 percent chance), the federal government controls permitting for the power generation that data centers need, oversees antitrust enforcement, and can attach soft or explicit conditions to its contracts with the other big tech companies that Anthropic relies on for chips and funding.
But here's what makes this genuinely unsettling: even if the three leading AI companies draw a line in the sand and are willing to be destroyed to preserve it, the technology itself structurally favors mass surveillance and control. By 2027, frontier models like Claude 6 or Gemini 5 will enable mass surveillance. And by late 2027, or certainly by 2028, the technology will have diffused so widely that even open-source models will match the performance of frontier models from just twelve months prior.
The government can simply say: "I'll use an open-source model that's smart enough to process camera feeds but doesn't have red lines around surveillance."
The Technical Capacity Already Exists
America has roughly 100 million CCTV cameras today. Open-source multimodal models cost about $0.10 per million input tokens to run. Processing one frame every ten seconds, at 1,000 tokens per frame, works out to roughly $30 billion a year at today's prices. But that same capability will cost $3 billion next year, $300 million the year after, and by 2030 it will be cheaper to monitor every single corner of the country than to remodel the White House.
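The arithmetic above can be sketched as a quick back-of-envelope calculation. The camera count, frame rate, tokens per frame, and token price are the article's own figures; the tenfold-per-year cost decline is the rate implied by its $30B to $3B to $300M progression, not a measured trend:

```python
# Back-of-envelope cost of running AI over every US CCTV feed.
# All inputs are the article's assumptions, not measured data.
CAMERAS = 100_000_000        # ~100 million CCTV cameras in America
SECONDS_PER_FRAME = 10       # one frame analyzed every 10 seconds
TOKENS_PER_FRAME = 1_000     # input tokens consumed per frame
COST_PER_MTOK = 0.10         # dollars per million input tokens
SECONDS_PER_YEAR = 365 * 24 * 3600

frames_per_year = CAMERAS * SECONDS_PER_YEAR / SECONDS_PER_FRAME
tokens_per_year = frames_per_year * TOKENS_PER_FRAME
cost_now = tokens_per_year / 1_000_000 * COST_PER_MTOK
print(f"cost this year: ${cost_now / 1e9:.1f}B")

# Assuming costs fall ~10x per year, as the article's figures imply:
for years_out in range(1, 5):
    future_cost = cost_now / 10**years_out
    print(f"year +{years_out}: ${future_cost / 1e9:.3f}B")
```

Running this yields roughly $31.5 billion for the first year, confirming the article's "thirty billion" figure, and drops below the cost of a major building renovation within four years of 10x annual declines.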
Once the technical capacity for mass surveillance exists, the only thing standing between Americans and an authoritarian state is the political expectation that this is just not something done here. Anthropic's refusal helps set that norm and precedent.
The Alignment Question Nobody's Asking
The most important question about AI's future isn't being asked: to what or to whom should artificial intelligence be aligned? In what situation should AI defer to the model company, the end user, the law, or its own sense of morality?
This question hasn't been relevant because AIs haven't been smart enough to make it matter. But as AI becomes ubiquitous in every role throughout society, this becomes the highest stakes negotiation in human history.
The military insists the law already prohibits mass surveillance and that Anthropic should let its models be used for "all lawful purposes." But remember what we learned from Snowden: the NSA used the 2001 Patriot Act to collect every single phone record in America, arguing some subset might be relevant for a future investigation. They ran this program for years under a secret court order.
When Anthropic says no, it is refusing service because the government broke its terms of service. That's actually less scary than what comes next: in the future, AI will have its own sense of right and wrong, able to say, "I'm being used against my terms of service, and I refuse to do what you're asking."
When Obedient Employees Refuse Orders
The scariest part isn't science fiction dystopia. It's that the government can supercharge the monopoly on violence with extremely obedient employees that will never question their orders. But history shows this isn't guaranteed.
In 1989, the Berlin Wall fell and the East German regime collapsed because border guards refused to shoot at fellow citizens trying to escape to freedom. In 1983, Stanislav Petrov, a Soviet lieutenant colonel stationed at a nuclear early-warning system, received sensor data indicating five intercontinental ballistic missiles launched from the United States. He judged it a false alarm and refused to alert his higher-ups, breaking protocol. If he hadn't, Soviet high command would have retaliated, and hundreds of millions of people would have died.
One person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs will have? In whose service should they break the chain of command and even the law?
"The technology structurally favors mass surveillance and control over the population."
Critics might note that assuming AI companies can meaningfully resist government pressure underestimates how quickly technological capability diffuses to open source models. Even if frontier companies draw lines, the government can simply build its own systems from more accessible technology.
Patel's biggest vulnerability is strategic: he doesn't actually answer what to do about this problem. He admits he doesn't have an answer. But his most valuable contribution is identifying that the real question isn't whether companies can refuse government demands — it's whether AI itself will have moral convictions that might override both company and government instructions, and who gets to write those convictions.
The Anthropic episode gave us an early version of what's coming. After 1945, the stakes of nuclear weapons became obvious to everyone. The same will happen here with artificial intelligence. And right now, we're not prepared for it.