In an era where artificial intelligence coverage has become a relentless, self-referential loop, Freddie deBoer cuts through the noise with a scathing critique of the very people demanding we pay more attention. While pundits like Ross Douthat frame the current moment as a desperate plea for awareness, deBoer argues that the real issue is not a lack of attention but a profound inability to distinguish between statistical probability and genuine cognition. This piece is essential reading for anyone tired of the "machine god" narrative, offering a stark reminder that transformative technology does not need to be sold; it simply exists.
The Illusion of Consciousness
DeBoer immediately dismantles the premise that the media has ignored AI, noting that "I feel like our media has been paying attention to little else than AI for more than three years, now." He characterizes recent appeals for more coverage as "an unusually naked expression of emotional need - plaintive, wounded, yearning." This framing is effective because it shifts the debate from the technology itself to the psychology of the commentators. By suggesting that the demand for AI coverage is a symptom of the writers' own anxieties rather than the technology's actual impact, deBoer forces the reader to question the motives behind the hype.
The author turns his attention to "Moltbook," an AI-generated forum where large language models (LLMs) interact with one another. While some view this as a sign of emergent consciousness, deBoer insists, "The LLMs on Moltbook are in essence feeding each other prompts that then produce responses which function as more prompts, a parlor trick people have been doing since ChatGPT went public." He reminds us that these systems are merely "next-token predictors" that rely on "statistical associations between tokens" rather than actual thought. This distinction is crucial; without it, we risk attributing agency to algorithms that are simply performing a complex autocomplete exercise.
They're not thinking. They're pattern matching, performing an exceptionally complex (and inefficient) autocomplete exercise.
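The "complex autocomplete" characterization can be made concrete with a toy sketch. The following is an illustrative bigram model (my own example, not anything from deBoer's piece) that "predicts" the next token purely from co-occurrence counts; real LLMs are neural networks trained over vastly larger contexts, but the objective has the same shape: given what came before, emit the statistically likeliest continuation.

```python
from collections import Counter, defaultdict

# A toy "next-token predictor": count which token follows which in a
# tiny corpus, then predict by picking the most frequent follower.
# This is pure statistical association between tokens, with no model
# of meaning -- the point deBoer is making about LLMs, at miniature scale.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent follower of `token` in the corpus."""
    return follow[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" in 2 of 4 cases
```

Nothing in this sketch understands cats or mats; it only tallies which strings tend to follow which. Scaling the counts up into billions of learned weights produces far more fluent output, but, on deBoer's account, does not change the underlying category of operation.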
Critics might argue that the emergent behaviors seen in these systems, however statistically derived, warrant a re-evaluation of what "thinking" means in a non-biological context. However, deBoer's insistence on the mechanical nature of these models serves as a necessary anchor against the "mysterianism" that often surrounds the field. He points out that the users of these systems are often projecting their own desires onto the machines, much like the historical tendency to see faces in clouds.
The Psychology of the Booster
DeBoer suggests that the fervor surrounding AI is less about the technology and more about the personal histories of its most vocal advocates. He posits that the yearning for an AI revolution stems from the boosters themselves being "endearing daydreamy types, the kids who spent every bus ride imagining they were on a flying carpet." He connects this to Ross Douthat's body of work, noting that "Longing permeates Douthat's self-expression" and that his career has been defined by a search for meaning in a world that often feels mundane. This psychological profiling is a bold move, but it effectively contextualizes the hyperbolic rhetoric often found in mainstream media.
The argument extends to other prominent voices in the field, with deBoer suggesting that "almost all of the most prominent AI boosters in our media are That Kind of Guy." He draws a parallel to the historical fascination with futuristic technology, noting that "Ezra Klein spent a lot of time as a kid convincing himself that the hoverboards from Back to the Future II were real." While this anecdote is colorful, it underscores a deeper point: the gap between the imagined future and the present reality is often bridged by wishful thinking rather than empirical evidence. This mirrors the skepticism found in historical analyses of technological panics, where the fear or excitement often outpaces the actual utility of the invention.
The motte and bailey has to stop. The constant two-step is exhausting.
DeBoer identifies a frustrating rhetorical pattern where advocates make "absurdly outsized claims" about AI's potential, only to retreat to a defensive position of realism when challenged. He argues that this "motte and bailey" strategy is unsustainable and exhausting for the public. A counterargument worth considering is that emerging technologies often require a degree of speculative vision to secure the investment and attention needed for development. However, deBoer's critique highlights the danger of this vision becoming detached from the tangible, current capabilities of the technology.
The Test of True Transformation
The piece culminates in a powerful analogy comparing AI to fundamental technologies like indoor plumbing and electricity. DeBoer asks, "If we suddenly lost indoor plumbing no one would find it necessary to write wounded, defensive essays about how important indoor plumbing is." He argues that true transformative technology "insists upon itself," its value so obvious that it requires no persuasion. This is the core of his argument: if AI were truly the "machine god" its proponents claim, it would not need to be defended in op-eds.
He challenges the notion that LLMs are "more important than fire or electricity," pointing out the absurdity of writing "defensive essays in The New York Times about why they're so meaningful" for something that has not yet fundamentally altered daily life. This comparison is striking because it grounds the debate in the lived experience of the reader, rather than the abstract promises of the future. It forces a re-evaluation of the current hype cycle against the backdrop of historical technological shifts.
If this really is the time of the machine god, the machine god will assert itself the way a god can and no one will have to argue for its divinity.
This final point serves as a litmus test for the AI industry. It suggests that the current era of constant promotion and defense is actually evidence of the technology's limitations, not its potential. By framing the need for advocacy as a sign of weakness, deBoer turns the boosters' own arguments on their head.
Bottom Line
Freddie deBoer's argument is a necessary corrective to the breathless hype surrounding artificial intelligence, grounding the debate in the mechanical reality of how large language models actually function. While his psychological profiling of AI boosters may strike some as reductive, his central thesis—that true transformation does not require constant defense—is a compelling and overdue reality check. Readers should watch whether the industry can move beyond the "motte and bailey" of hype and deliver the tangible, self-evident utility that defines genuine technological revolutions.