
The machine God's existence would insist upon itself, wouldn't it?

In an era when artificial intelligence coverage has become a relentless, self-referential loop, Freddie deBoer cuts through the noise with a scathing critique of the very people demanding we pay more attention. While pundits like Ross Douthat issue desperate pleas for greater awareness, deBoer argues that the real issue is not a lack of attention but a profound inability to distinguish between statistical probability and genuine cognition. This piece is essential reading for anyone tired of the "machine god" narrative, offering a stark reminder that transformative technology does not need to be sold; it simply exists.

The Illusion of Consciousness

DeBoer immediately dismantles the premise that the media has ignored AI, noting that "I feel like our media has been paying attention to little else than AI for more than three years, now." He characterizes recent appeals for more coverage as "an unusually naked expression of emotional need - plaintive, wounded, yearning." This framing is effective because it shifts the debate from the technology itself to the psychology of the commentators. By suggesting that the demand for AI coverage is a symptom of the writers' own anxieties rather than the technology's actual impact, deBoer forces the reader to question the motives behind the hype.


The author turns his attention to "Moltbook," an AI-generated forum where large language models (LLMs) interact with one another. While some view this as a sign of emergent consciousness, deBoer insists, "The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public." He reminds us that these systems are merely "next-token predictors" that rely on "statistical associations between tokens" rather than actual thought. This distinction is crucial; without it, we risk attributing agency to algorithms that are simply performing a complex autocomplete exercise.

They're not thinking. They're pattern matching, performing an exceptionally complex (and inefficient) autocomplete exercise.

Critics might argue that the emergent behaviors seen in these systems, however statistically derived, warrant a re-evaluation of what "thinking" means in a non-biological context. However, deBoer's insistence on the mechanical nature of these models serves as a necessary anchor against the "mysterianism" that often surrounds the field. He points out that the users of these systems are often projecting their own desires onto the machines, much like the historical tendency to see faces in clouds.
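The "statistical associations between tokens" deBoer invokes can be made concrete with a deliberately tiny sketch: a bigram model that predicts each next token purely from co-occurrence counts in a corpus, then greedily extends a prompt. This is an illustrative toy, not how production LLMs actually work (they use learned neural representations over billions of parameters, not raw counts), and the function names here are invented for the example, but it captures the "complex autocomplete" framing in miniature.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus_tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the statistically most likely next token, or None."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def autocomplete(model, start, length):
    """Greedily extend a prompt one most-likely token at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

corpus = "the cat sat on the mat and the cat slept".split()
model = build_bigram_model(corpus)
print(autocomplete(model, "the", 3))  # → ['the', 'cat', 'sat', 'on']
```

The point of the toy is that nothing in it "knows" anything about cats or mats; it only reproduces whichever continuation was most frequent in its data, which is deBoer's claim about LLMs scaled down to a single statistic.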

The Psychology of the Booster

DeBoer suggests that the fervor surrounding AI is less about the technology and more about the personal histories of its most vocal advocates. He posits that the yearning for an AI revolution is a product of the boosters themselves being "endearing daydreamy types, the kids who spent every bus ride imagining they were on a flying carpet." He connects this to Ross Douthat's body of work, noting that "Longing permeates Douthat's self-expression" and that his career has been defined by a search for meaning in a world that often feels mundane. This psychological profiling is a bold move, but it effectively contextualizes the hyperbolic rhetoric often found in mainstream media.

The argument extends to other prominent voices in the field, with deBoer suggesting that "almost all of the most prominent AI boosters in our media are That Kind of Guy." He draws a parallel to the historical fascination with futuristic technology, noting that "Ezra Klein spent a lot of time as a kid convincing himself that the hoverboards from Back to the Future II were real." While this anecdote is colorful, it underscores a deeper point: the gap between the imagined future and the present reality is often bridged by wishful thinking rather than empirical evidence. This mirrors the skepticism found in historical analyses of technological panics, where the fear or excitement often outpaces the actual utility of the invention.

The motte and bailey has to stop. The constant two-step is exhausting.

DeBoer identifies a frustrating rhetorical pattern where advocates make "absurdly outsized claims" about AI's potential, only to retreat to a defensive position of realism when challenged. He argues that this "motte and bailey" strategy is unsustainable and exhausting for the public. A counterargument worth considering is that emerging technologies often require a degree of speculative vision to secure the investment and attention needed for development. However, deBoer's critique highlights the danger of this vision becoming detached from the tangible, current capabilities of the technology.

The Test of True Transformation

The piece culminates in a powerful analogy comparing AI to fundamental technologies like indoor plumbing and electricity. DeBoer asks, "If we suddenly lost indoor plumbing no one would find it necessary to write wounded, defensive essays about how important indoor plumbing is." He argues that true transformative technology "insists upon itself," its value so obvious that it requires no persuasion. This is the core of his argument: if AI were truly the "machine god" its proponents claim, it would not need to be defended in op-eds.

He challenges the notion that LLMs are "more important than fire or electricity," pointing out the absurdity of writing "defensive essays in The New York Times about why they're so meaningful" for something that has not yet fundamentally altered daily life. This comparison is striking because it grounds the debate in the lived experience of the reader, rather than the abstract promises of the future. It forces a re-evaluation of the current hype cycle against the backdrop of historical technological shifts.

If this really is the time of the machine god, the machine god will assert itself the way a god can and no one will have to argue for its divinity.

This final point serves as a litmus test for the AI industry. It suggests that the current era of constant promotion and defense is actually evidence of the technology's limitations, not its potential. By framing the need for advocacy as a sign of weakness, deBoer turns the boosters' own arguments on their head.

Bottom Line

Freddie deBoer's argument is a necessary corrective to the breathless hype surrounding artificial intelligence, grounding the debate in the mechanical reality of how large language models actually function. While his psychological profiling of AI boosters may feel reductive to some, his central thesis—that true transformation does not require constant defense—is a compelling and overdue reality check. What remains to be seen is whether the industry can move beyond the "motte and bailey" of hype and deliver the tangible, self-evident utility that defines genuine technological revolutions.

Sources

The machine God's existence would insist upon itself, wouldn't it?

by Freddie deBoer


“Pay More Attention to AI,” reads the headline of this Ross Douthat piece, an unusually naked expression of emotional need - plaintive, wounded, yearning. It’s funny because I feel like our media has been paying attention to little else than AI for more than three years, now. Ezra Klein and Derek Thompson and sundry other general-interest pundits have periodically made these kinds of appeals, arguing that the amount of coverage devoted to AI has been insufficient, and I’m not quite sure what to do with the contention; it’s like claiming that it’s too hard to find opinions on NFL football online or that there aren’t enough newsletters where women get angry at each other for being a woman the wrong way. I would think it would go without saying that our cup runneth over, when it comes to AI. But it’s a free country!

Douthat becomes the latest to nominate this Moltbook thing as a sign of some sort of transformative moment in AI.

if you think all this is merely hype, if you’re sure the tales of discovery are mostly flimflam and what’s been discovered is a small island chain at best, I would invite you to spend a little time on Moltbook, an A.I.-generated forum where new-model A.I. agents talk to one another, debate consciousness, invent religions, strategize about concealment from humans and more.

I find this strange. We already know that LLMs can talk to each other. Any use of LLMs that produces impressively polished text in response to a prompt shouldn’t be particularly surprising. The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public and in fact long before. (Remember Dr. Sbaitso?)

The question is whether the systems connecting on Moltbook are actually thinking or feeling, and we know the answer to that - no, they neither think nor feel. They’re acting as next-token predictors that respond to prompts by running them through models developed through the ingestion of massive amounts of data and trained on billions of parameters, using statistical associations between tokens in their datasets to predict which next immediate token would be most likely to produce a response that seems like a plausible ...