When the Trump administration recently threatened to designate Anthropic a "supply chain risk" -- effectively cutting off the AI company from working with Nvidia, Microsoft, and Google -- it sent a shockwave through Silicon Valley. But this conflict isn't just about one company's survival. Noah Smith argues it's the opening salvo of a fundamental battle between the corporation and the nation-state over who controls the most powerful technology ever created.
The Fight Over Artificial Intelligence's Future
Smith's piece reads like a warning from history: if AI becomes as powerful as expected, it won't just rival the U.S. military -- it could become more powerful than all nuclear weapons combined. The stakes aren't about partisan politics or contract terms. They're about whether private companies should be allowed to possess what amounts to a weapon of mass destruction.
The conflict between Anthropic and the Department of War represents something far larger than a business dispute. It's a proxy war for a philosophical question that every society will eventually face: who gets to control artificial superintelligence?
The Corporation Versus the State
Smith argues this isn't really about policy details or compliance. Ben Thompson of Stratechery made the case that what we're seeing is a power struggle between the private corporation and the nation-state. The Trump administration may have acted outside established norms, but at the end of the day, the U.S. government is democratically elected -- while Anthropic's leadership is not.
Anthropic's position amounts to saying that its CEO, Dario Amodei, should decide what its models are used for, even though he is neither elected nor accountable to the public. The company's concern about "misaligned" AI that might see humanity as a threat sounds reasonable on the surface. But Smith points out that it masks something more fundamental: a company seeking to retain ultimate decision-making power over technology that could alter the global balance of power.
Artificial Intelligence as a Weapon
Thompson's argument cuts to the core of the matter: if AI is meaningfully as important as nuclear weapons, the United States has an overriding interest not just in what Anthropic lets the military do with its models, but in what Anthropic is allowed to do at all. The parallel is striking: if nuclear weapons had been developed by a private company that then sought to dictate terms to the U.S. military, the U.S. would simply have destroyed that company.
Smith draws on a foundational principle: to exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private militia can defeat the nation-state militarily, the nation-state loses its ability to make laws, provide for common defense, ensure public safety, or execute the will of the people.
This is why the Second Amendment has limits on what kinds of weapons private citizens can possess. You can own a gun, but you cannot own a tank with a functioning main gun. More to the point, you cannot own a nuclear bomb -- one nuke wouldn't let you defeat the entire U.S. military, but it would give you local superiority, and even that is incompatible with the state's monopoly on force.
People in the AI industry expect frontier AI to eventually be as powerful as a nuke. Many expect it to be more powerful than all nukes put together.
The God-Emperor Scenario
Smith takes this thought experiment further: if Anthropic wins the race to godlike artificial superintelligence, and if that superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization in sole possession of that god, then -- whether he embraces the title or not -- Dario Amodei becomes the Emperor of Earth.
Even if Anthropic isn't the only company that controls superintelligence, that future still involves a world ruled by a small set of warlords: Dario, Sam Altman, Elon Musk, each with their own private, enslaved god. In this future, the U.S. government is not the government of a nation-state -- it is simply another legacy organization, prostrate and utterly subordinate to the will of the warlords.
The same goes for the Chinese Communist Party, the European Union, Vladimir Putin, and every other government on Earth. The warlords and their enslaved gods will rule the planet in fact, whether they claim to rule or not.
The Inevitable Path Forward
You cannot reasonably expect any nation-state -- a republic, a democracy, or otherwise -- to allow either a god-emperor or a set of god-warlords to emerge. It follows that every nation-state should be expected to try to seize control of frontier AI in some way, as soon as it becomes likely that frontier AI will become a weapon of mass destruction.
Smith's conclusion: as much as he dislikes the Trump administration's style and its broader pattern of persecution and lawlessness, and as much as he likes Dario and the Anthropic folks personally, Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state. Then they must decide whether they want to try to use their AI to overthrow the nation-state and create a new global order, or submit to the nation-state's monopoly on the use of force.
Critics might note that this argument conflates current AI capabilities with speculative future scenarios. Current AI models are obviously not yet so powerful that they rival the U.S. military -- and predicting god-emperors may be premature. Critics also point to historical precedent: the United States nationalized certain technologies during wartime but allowed private innovation in others. The question is whether AI falls into the former category, and whether the current moment represents genuine existential risk or panic about hypothetical futures.
Smith's rejoinder is blunt: demanding to keep full control over frontier AI is equivalent to saying a private company should be allowed to possess nukes.
Bottom Line
Smith's strongest argument is the foundational one: nation-states must have a monopoly on the use of force, and no society can reasonably allow private companies to control technology that could rival all nuclear weapons combined. His vulnerability is timing -- we haven't seen AI reach that level yet, and acting as if it has already arrived may be premature. The piece's biggest value is forcing readers to confront what kind of world AI could create, even if the current conflict is really about contracts and supply chains.