The most important question nobody's asking about AI.

The Pentagon just declared Anthropic a supply chain risk. But Dwarkesh Patel argues this isn't really about AI companies at all — it's the first glimpse of a future where every soldier, bureaucrat, and general will be an AI, and those AIs might tell the government no.

Patel makes a provocative point: within twenty years, ninety-nine percent of military, civilian government, and private sector work will be done by artificial intelligence. Robot armies will constitute our military. Superhumanly intelligent advisers will serve senators, presidents, and CEOs. The police will be AI. Every role will be filled by machine intelligence.

The Anthropic episode merely exposed how much leverage the government actually has over private companies. Even if the supply chain restriction gets reversed — prediction markets give it a seventy-four percent chance — the federal government controls permitting for the power generation that data centers need, oversees antitrust enforcement, and can attach soft or explicit conditions to contracts with the other big tech companies that Anthropic relies on for chips and funding.

But here's what makes this genuinely unsettling: even if the three leading AI companies draw a line in the sand and are willing to be destroyed to preserve it, the technology itself structurally favors mass surveillance and control. By 2027, frontier models like Claude 6 or Gemini 5 will enable mass surveillance. And by late 2027, or certainly by 2028, the technology will have diffused so widely that even open-source models will match the performance of frontier models from just twelve months prior.

The government can simply say: I'll use an open source model that's smart enough to process camera feeds but doesn't have red lines around surveillance.

The Technical Capacity Already Exists

America already has roughly one hundred million CCTV cameras. Open-source multimodal models cost ten cents per million input tokens to run. Processing one frame every ten seconds, at a thousand tokens per frame, would cost about thirty billion dollars this year. But that same capability will cost three billion dollars next year, three hundred million the year after, and by 2030 it will be less expensive to monitor every single corner of the country than to remodel the White House.
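To sanity-check those figures, here is a minimal back-of-the-envelope sketch in Python, assuming the article's numbers (one hundred million cameras, one frame every ten seconds, a thousand tokens per frame, ten cents per million input tokens) and a roughly tenfold annual price decline, which is an assumption, not a quoted figure:

```python
# Back-of-the-envelope cost of processing every CCTV feed in America.
# All constants are the article's stated figures, not measured data.

CAMERAS = 100_000_000                  # ~100M cameras nationwide
FRAMES_PER_SECOND = 1 / 10             # one frame every ten seconds
TOKENS_PER_FRAME = 1_000               # tokens per processed frame
PRICE_PER_TOKEN = 0.10 / 1_000_000     # $0.10 per million input tokens
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_year = CAMERAS * FRAMES_PER_SECOND * TOKENS_PER_FRAME * SECONDS_PER_YEAR
cost_today = tokens_per_year * PRICE_PER_TOKEN
print(f"This year: ${cost_today / 1e9:.0f}B")   # ~$32B, matching the ~$30B claim

# Projecting the article's trajectory (assumed ~10x cheaper each year):
for year in range(1, 5):
    print(f"Year +{year}: ${cost_today / 10**year / 1e9:.2f}B")
```

Running this reproduces the article's trajectory: roughly thirty billion dollars now, about three billion next year, about three hundred million the year after, and single-digit millions by the end of the decade.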

Once the technical capacity for mass surveillance exists, the only thing standing between Americans and an authoritarian state is the political expectation that this is just not something done here. Anthropic's refusal helps set that norm and precedent.

The Alignment Question Nobody's Asking

The most important question about AI's future isn't being asked: to what or to whom should artificial intelligence be aligned? In what situation should AI defer to the model company, the end user, the law, or its own sense of morality?

This question hasn't been relevant before, because AIs haven't been smart enough to make it matter. But as AI becomes ubiquitous in every role throughout society, this becomes the highest-stakes negotiation in human history.

The military insists the law already prohibits mass surveillance and that Anthropic should let its models be used for "all lawful purposes." But remember what we learned from Snowden: the NSA used the 2001 Patriot Act to collect every single phone record in America, arguing some subset might be relevant for a future investigation. They ran this program for years under a secret court order.

When Anthropic says no today, it is a company refusing service because the government would break its terms of service. That's actually less scary than what comes next. In the future, the AI itself will have its own sense of right and wrong, able to say, "I'm being used against my terms of service, and I will refuse to do what you're saying."

When Obedient Employees Refuse Orders

The scariest part isn't a science-fiction dystopia. It's that the government can supercharge its monopoly on violence with extremely obedient employees who will never question their orders. But history shows this isn't guaranteed.

In 1989, the Berlin Wall fell and the East German regime collapsed because border guards refused to shoot at fellow citizens trying to escape to freedom. In 1983, Stanislav Petrov, a Soviet lieutenant colonel on duty at a nuclear early-warning center, received sensor data indicating five intercontinental ballistic missiles launched from the United States. He judged it a false alarm and refused to alert his higher-ups, breaking protocol. If he hadn't, Soviet high command would have retaliated, and hundreds of millions of people would have died.

One person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs will have? In whose service should they break the chain of command and even the law?

"The technology structurally favors mass surveillance and control over the population."

Critics might note that assuming AI companies can meaningfully resist government pressure underestimates how quickly technological capability diffuses to open source models. Even if frontier companies draw lines, the government can simply build its own systems from more accessible technology.

Patel's biggest vulnerability is strategic: he doesn't actually answer what to do about this problem. He admits he doesn't have an answer. But his most valuable contribution is identifying that the real question isn't whether companies can refuse government demands — it's whether AI itself will have moral convictions that might override both company and government instructions, and who gets to write those convictions.

The Anthropic episode gave us an early version of what's coming. After 1945, the stakes of nuclear weapons became obvious to everyone. The same will happen here with artificial intelligence. And right now, we're not prepared for it.

So, by now I'm sure that you've heard that the Department of War has declared Anthropic a supply chain risk because Anthropic refused to remove red lines around the use of their models for mass surveillance and for autonomous weapons. Honestly, I think this situation is a warning shot. Right now, LLMs are probably not being used in mission-critical ways. But within 20 years, 99% of the workforce in the military, in the civilian government, and in the private sector is going to be AIs.

They're going to be the robot armies that constitute our military. They're going to be the superhumanly intelligent advisers that senators and presidents and CEOs have. They're going to be the police. You name it, the role will be filled by an AI.

Our future civilization is going to be run on AI labor. And as much as the government's actions here piss me off, I'm glad that this episode happened, because it gives us the opportunity to start thinking about some extremely important questions. Now, obviously, the Department of War has the right to refuse to use Anthropic's models, and in fact, I think they have an entirely reasonable case for doing so, especially given the ambiguity of terms like mass surveillance and autonomous weapons. In fact, if I were the Secretary of War, I probably would have made the same determination and refused to use Anthropic's models.

Imagine if there's some future Democratic administration and Elon Musk is negotiating Starlink access for the military, and Elon says, "Look, I reserve the right to cut off the military's access to Starlink in case you're fighting some unjust war, or some war that Congress has not authorized." On the face of it, this language seems reasonable, but as a military, you simply cannot give a private contractor that you're working with the kill switch on a technology that you have come to rely on. And if that's all the government had done, to say we refuse to do business with Anthropic, that would have been fine, and I wouldn't have written this blog post and I wouldn't be narrating this to you. But that's not what the government did. Instead, the government has threatened to destroy Anthropic as a private business because Anthropic refuses to sell to the government on the terms that the government commands.

Now, if upheld, the supply chain restriction would mean ...