Cory Doctorow delivers a stunning pivot in the AI debate, arguing that the existential threat isn't a future superintelligence, but the very real, present-day power of limited liability corporations masquerading as technology companies. He reframes the entire conversation from speculative doomsday scenarios to immediate, tangible risks of economic collapse and authoritarian control, urging readers to stop betting on a hypothetical god and start fighting the alien overlords already in charge.
The Real Alien Lifeforms
Doctorow begins by dismantling the premise of the "AI doomer" argument with refreshing honesty. "I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence," he writes. This admission sets the stage for a more grounded critique. He acknowledges that while he shares the fear of corporate power and state fusion with figures like Turing Award winner Yoshua Bengio, he rejects the notion that artificial general intelligence is the vehicle for this danger. "I just don't think we need AI to do those things. I think we should already be worried about those things," Doctorow asserts. This distinction is crucial; it shifts the focus from a sci-fi nightmare to a documented reality of surveillance capitalism and labor exploitation.
The author then dissects Bengio's position as a modern iteration of a centuries-old philosophical trap. He describes Bengio's push for an international AI consortium as a response to the fear that without a "digital public good," we face civilizational risk. Doctorow identifies this logic as an "AI-inflected version of Pascal's wager," where rational people are told to bet on the existence of a superintelligence to avoid infinite loss. "Smarter people than me have been poking holes in Pascal's wager for more than 350 years," he notes, pointing out the folly of staking everything on an outcome for which there is no empirical evidence. The wager also has no losing condition: "how do you know when you've lost?" Doctorow asks, observing that humanity has already "lit more than $1.4t on fire to immanentize this eschaton."
These artificial lifeforms aren't hypothetical — they're here today, amongst us, endangering the very survival of our species.
The Infinite Spending Trap
Doctorow's critique of the financial logic behind the AI boom is particularly sharp. He highlights the absurdity of the spending required to supposedly summon intelligence, citing Elon Musk's suggestion of building a Dyson sphere to power word-guessing programs. "If one sun won't do it, perhaps two? Or two hundred? Or two thousand?" Doctorow asks, illustrating the slippery slope of the wager. The argument here is that the pursuit of this hypothetical future is actively destroying our present. He warns that the bubble will burst, leaving behind "a fast-talking AI salesman" who convinces bosses to fire workers for tools that can't actually do the job. "The workers who did those jobs will be scattered to the four winds... and the priceless process knowledge they developed over generations will be wiped out," he writes. This echoes the AI-safety concept of "instrumental convergence," in which a system pursuing its goal consumes the very infrastructure it depends on, except here the optimizing system is the market itself.
Critics might argue that dismissing the potential for superintelligence entirely is dangerous, as it could lead to complacency in safety research. However, Doctorow counters that the immediate danger of economic instability and the concentration of power is far more certain than the arrival of a digital god.
The Immediate Threat of Corporate Sovereignty
The core of Doctorow's argument is that the "colonizing alien overlords" are actually limited liability corporations. He argues that these entities have already conquered the state apparatus, turning legislatures and courts into tools for corporate interests. "I'm terrified that these lifeforms corrupt our knowledge-creation process, making it impossible for us to know what's true and what isn't," he explains. This is not a future risk; it is the current state of affairs. He points to the fragility of our digital infrastructure, noting that "at the click of a mouse, [the executive branch] could order John Deere to switch off all the tractors in your country." The ability to brick a nation's economy or cut off access to essential services like Office 365 or iOS represents a form of sovereignty that no government should cede to a private entity.
He draws a parallel to the concept of "enshittification," where platforms degrade in quality to extract maximum value, but elevates it to an existential threat. "Every moment that we remain stuck in the enshitternet is a moment of existential risk," he states. The solution, he proposes, is to build the very "international digital public goods" that doomers claim are needed for the future, but to build them now to solve today's problems. "If you think we need to build 'international digital public goods' to head off the future risk... then let us agree that the prototype for that project is the 'international digital public goods' we need right now," Doctorow argues. This side bet, in keeping with his Pascal's wager framing, reframes the entire movement: fighting the corporation today is the only meaningful preparation for any future AI threat.
Bottom Line
Doctorow's most powerful move is redefining the "enemy" from a hypothetical future AI to the current corporate structure, making the call to action immediate and actionable. The argument's strength lies in its refusal to engage with speculative fear-mongering, yet its vulnerability is the assumption that the political will exists to dismantle the very corporations that currently hold the keys to our digital infrastructure.