
A Pascal’s wager for AI doomers

Cory Doctorow delivers a stunning pivot in the AI debate, arguing that the existential threat isn't a future superintelligence, but the very real, present-day power of limited liability corporations masquerading as technology companies. He reframes the entire conversation from speculative doomsday scenarios to immediate, tangible risks of economic collapse and authoritarian control, urging readers to stop betting on a hypothetical god and start fighting the alien overlords already in charge.

The Real Alien Lifeforms

Doctorow begins by dismantling the premise of the "AI doomer" argument with refreshing honesty. "I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence," he writes. This admission sets the stage for a more grounded critique. He acknowledges that while he shares the fear of corporate power and state fusion with figures like Turing Award winner Yoshua Bengio, he rejects the notion that artificial general intelligence is the vehicle for this danger. "I just don't think we need AI to do those things. I think we should already be worried about those things," Doctorow asserts. This distinction is crucial; it shifts the focus from a sci-fi nightmare to a documented reality of surveillance capitalism and labor exploitation.

A Pascal’s wager for AI doomers

The author then dissects Bengio's position as a modern iteration of a centuries-old philosophical trap. He describes Bengio's push for an international AI consortium as a response to the fear that without a "digital public good," we face civilizational risk. Doctorow identifies this logic as an "AI-inflected version of Pascal's wager," where rational people are told to bet on the existence of a superintelligence to avoid infinite loss. "Smarter people than me have been poking holes in Pascal's wager for more than 350 years," he notes, pointing out the flaw in betting on an outcome for which there is no empirical evidence. The wager fails because, as Doctorow asks, "how do you know when you've lost?", especially when humanity has already "lit more than $1.4t on fire to immanentize this eschaton."

These artificial lifeforms aren't hypothetical — they're here today, amongst us, endangering the very survival of our species.

The Infinite Spending Trap

Doctorow's critique of the financial logic behind the AI boom is particularly sharp. He highlights the absurdity of the spending required to supposedly summon intelligence, citing Elon Musk's suggestion of building a Dyson sphere to power word-guessing programs. "If one sun won't do it, perhaps two? Or two hundred? Or two thousand?" Doctorow asks, illustrating the slippery slope of the wager. The argument here is that the pursuit of this hypothetical future is actively destroying our present. He warns that the bubble will burst, leaving behind "a fast-talking AI salesman" who convinces bosses to fire workers for tools that can't actually do the job. "The workers who did those jobs will be scattered to the four winds... and the priceless process knowledge they developed over generations will be wiped out," he writes. This parallels the concept of "instrumental convergence," where systems optimize for goals in ways that destroy the very infrastructure they rely on, but here the system is the market itself.

Critics might argue that dismissing the potential for superintelligence entirely is dangerous, as it could lead to complacency in safety research. However, Doctorow counters that the immediate danger of economic instability and the concentration of power is far more certain than the arrival of a digital god.

The Immediate Threat of Corporate Sovereignty

The core of Doctorow's argument is that the "colonizing alien overlords" are actually limited liability corporations. He argues that these entities have already conquered the state apparatus, turning legislatures and courts into tools for corporate interests. "I'm terrified that these lifeforms corrupt our knowledge-creation process, making it impossible for us to know what's true and what isn't," he explains. This is not a future risk; it is the current state of affairs. He points to the fragility of our digital infrastructure, noting that "at the click of a mouse, [the executive branch] could order John Deere to switch off all the tractors in your country." The ability to brick a nation's economy or cut off access to essential services like Office365 or iOS represents a form of sovereignty that no government should cede to a private entity.

He draws a parallel to the concept of "enshittification," where platforms degrade in quality to extract maximum value, but elevates it to an existential threat. "Every moment that we remain stuck in the enshitternet is a moment of existential risk," he states. The solution, he proposes, is to build the very "international digital public goods" that doomers claim are needed for the future, but to build them now to solve today's problems. "If you think we need to build 'international digital public goods' to head off the future risk... then let us agree that the prototype for that project is the 'international digital public goods' we need right now," Doctorow argues. This side-bet reframes the entire movement: fighting the corporation is the only way to prepare for any future AI threat.

Bottom Line

Doctorow's most powerful move is redefining the "enemy" from a hypothetical future AI to the current corporate structure, making the call to action immediate and actionable. The argument's strength lies in its refusal to engage with speculative fear-mongering, yet its vulnerability is the assumption that the political will exists to dismantle the very corporations that currently hold the keys to our digital infrastructure.

Deep Dives

Explore these related deep dives:

  • The Age of Surveillance Capitalism by Shoshana Zuboff

    How tech companies turned human experience into raw material for prediction and control.

  • The Master Algorithm by Pedro Domingos

    The quest for the universal learning algorithm that will reshape civilization.

  • Automating Inequality by Virginia Eubanks

    How high-tech tools profile, police, and punish the poor.

  • Pascal's wager

    The article adapts this 17th-century theological argument about betting on God's existence into a secular framework for deciding whether to fear AI, despite the author's skepticism about AI's actual intelligence.

  • Instrumental convergence

    The article's title metaphor about being 'turned into paperclips' directly references this specific thought experiment by Nick Bostrom, which illustrates how a superintelligent agent with a trivial goal could inadvertently destroy humanity.

  • Enshittification

    This term, coined by the article's author Cory Doctorow, describes the specific lifecycle of digital platforms degrading their utility for users to extract value for shareholders, which the author argues is the real corporate threat rather than hypothetical AI sentience.

Sources

A Pascal’s wager for AI doomers

by Cory Doctorow · Pluralistic


A Pascal's Wager for AI Doomers: We're already being turned into paperclips.

Lest anyone accuse me of bargaining in bad faith here, let me start with this admission: I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence. I think worrying about what we'll do if AI becomes intelligent is at best a distraction and at worst a cynical marketing ploy:

https://locusmag.com/feature/cory-doctorow-full-employment/

Now, that said: among some of the "AI doomers," I recognize kindred spirits. I, too, worry about technologies controlled by corporations that have grown so powerful that they defy regulation. I worry about how those technologies are used against us, and about how the corporations that make them are fusing with authoritarian states to create a totalitarian nightmare. I worry that technology is used to spy on and immiserate workers.

I just don't think we need AI to do those things. I think we should already be worried about those things.

Last week, I had a version of this discussion in front of several hundred people at the Bronfman Lecture in Montreal, where I appeared with Astra Taylor and Yoshua Bengio (co-winner of the Turing Prize for his work creating the "deep learning" techniques powering today's AI surge), on a panel moderated by CBC Ideas host Nahlah Ayed:

https://www.eventbrite.ca/e/artificial-intelligence-the-ultimate-disrupter-tickets-1982706623885

It's safe to say that Bengio and I mostly disagree about AI. He's running an initiative called "Lawzero," whose goal is to create an international AI consortium that produces AI as a "digital public good" that is designed to be open, auditable, transparent and safe:

http://lawzero.org

Bengio said he'd started Lawzero because he was convinced that AI was going to get a lot more powerful, and, in the absence of some public-spirited version of AI, we would be subject to all kinds of manipulation and surveillance, and that the resulting chaos would present a ...