
If AI is a weapon, why don't we regulate it like one?

The conflict between Anthropic and the U.S. Department of War isn't simply another partisan squabble in American politics. It's the opening move in a fundamental battle over who controls the most powerful technology ever created — and whether any private corporation can legitimately claim authority over something as potentially dangerous as nuclear weapons.

That was the core message from Dario Amodei, Anthropic's CEO, who recently released an internal memo accusing OpenAI of lying to the public about its dealings with the Department of War. The memo also alleged that the Department of War wants to use AI for mass surveillance. But behind these accusations lies a deeper question: what happens when artificial general intelligence arrives?

Thompson's argument cuts to the heart of this issue.

The Monopoly on Force

To exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private corporation can possess technology that rivals a nuclear weapon — or something far more powerful than all nukes combined — then that corporation effectively becomes an independent power structure. No government will tolerate this.

Thompson argues that if frontier AI reaches the level of capability that many in the industry expect, the choice becomes binary: either AI companies accept a subservient position relative to government, or the government destroys them. This isn't about law or norms or private property. It's about the fundamental nature of state power.

The Trump administration's recent move to designate Anthropic a "supply chain risk" (a designation previously reserved for firms controlled by foreign adversaries, such as Huawei) was a dire threat. If enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, potentially killing the company outright. Critics might note that this approach resembles economic coercion dressed up as a national-security concern, and that similar threats against tech companies have rarely produced lasting compliance.

Two Visions of AI

The cultural divide between AI companies runs deeper than politics. OpenAI employees generally want to create the most capable and powerful intelligence they can, as fast as possible. Anthropic employees tend to focus on creating a benevolent god — one aligned with human values. Smith's intuition suggests Anthropic's concern was that Trump's Department of War would accidentally train AI with anti-human values, increasing the chances of a misaligned AGI that might see humanity as a threat.

This is essentially fear of Skynet: not specific policy disagreements, but existential worry about what kind of entity these companies are summoning. The real stakes aren't about current contracts or compliance terms. They're about whether anyone — even a CEO with good intentions — can legitimately control something as transformative as superintelligent AI.

The God-Emperor Scenario

If Anthropic wins the race to artificial superintelligence, and if that superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved god. Dario Amodei would become the Emperor of Earth — not through conquest, but through technological control.

Even if Anthropic isn't the only company controlling superintelligence, the future still features a world ruled by a small set of warlords: Dario, Sam Altman, Elon Musk, each with their own private enslaved god. In this future, the U.S. government is merely another legacy organization, prostrate and utterly subordinate to the will of these technological masters.

No nation-state — republic, democracy, or otherwise — can reasonably allow either a god-emperor or a set of god-warlords to emerge. This is why governments will attempt to seize control of frontier AI as soon as it becomes likely to reach weapon-level capability.

The Core Conflict

The conflict between Anthropic and the Department of War represents something far more significant than a contract dispute or a partisan food-fight. It's a fundamental power struggle between the corporation and the nation-state, between private control over transformative technology and the state's monopoly on force.

Smith's conclusion is stark: as much as one might dislike the Trump administration's style, the nature of the nation-state demands that AI companies submit to state authority. The alternative — allowing private entities to build god-emperors — isn't tolerable for any government.

Deep Dives


If you haven’t heard about the fight between the AI company Anthropic and the U.S. Department of War, you should read about it, because it could be critical for our future — as a nation, but also as a species.

Anthropic, along with OpenAI, is one of the two leading AI model-making companies. OpenAI has narrowly led on raw capabilities for most of the past few years, but Anthropic is beginning to win the race for business adoption.

This is because of Anthropic’s different business model: it has focused more on AI for coding than on general-purpose chatbots, and on partnering with businesses to help them adopt AI. That may pay eventual dividends in capabilities, if Anthropic beats OpenAI to the goal of recursive AI self-improvement. And it’s already paying dividends in the form of faster revenue growth.

Anthropic has partnered with the Department of War (previously the Department of Defense) since the Biden years. But the company, which is known for its more values-oriented culture, has begun to clash with the Trump administration in recent months. The administration sees Anthropic as “woke” due to its concern over the morality of things like autonomous drone swarms and AI-based mass surveillance.

The fight boiled over a week ago, when the administration stopped working with Anthropic, switched to working with OpenAI, and designated Anthropic a “supply chain risk”. The supply-chain move was a pretty dire threat — if enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, which could kill the company outright. But like many Trump administration moves, it appears to have been more of a threat than an all-out attack — Anthropic has now resumed talks with the military, and it seems likely that they’ll come to some sort of agreement in the end.

But bad blood remains. Trump recently boasted that he “fired [Anthropic] like dogs”. Dario Amodei, Anthropic’s CEO, released a memo accusing OpenAI of lying to the public about its dealings with the DoW, said that OpenAI had given Trump “dictator-style praise”, and asserted that Anthropic’s concern was related to the DoW’s desire to use AI for mass surveillance.

What’s actually going on here? The easiest way to look at this is as a standard American partisan food-fight. Anthropic is more left-coded ...