Freddie deBoer cuts through the noise of the modern tech cycle with a diagnosis that feels uncomfortably personal: we are collectively hallucinating technological progress because the alternative is too economically and psychologically devastating to accept. By anchoring a sweeping critique of artificial intelligence and media hype in the mundane frustration of a father trying to dictate a text message while holding a baby, deBoer exposes a dangerous gap between the shimmering promises of the future and the janky reality of the present. This is not just a complaint about bad software; it is an analysis of why we refuse to admit that the technology isn't working, even when it fails to distinguish between "brain jelly" and "baby wipes."
The Illusion of Maturity
DeBoer begins by dismantling the assumption that basic tools like voice-to-text have reached a state of reliability. He describes testing the feature across expensive flagship devices from Samsung, Google, and Apple, only to find the technology "janky, inconsistent, and a constant source of absurd errors." The author argues that society has made a quiet deal to pretend these tools function correctly, treating glitches as charming quirks rather than systemic failures. This framing is potent because it starts with the universal experience of the user, making the subsequent critique of the industry feel earned rather than abstract.
"We've all quietly agreed to pretend this stuff works. But it doesn't work. And if we can't get a phone to correctly hear 'bring wipes' instead of 'brain wives,' then maybe, just maybe, we're not ready for all sorts of other incredible technological feats that are perpetually just around the corner."
The argument extends to the realm of real-time machine translation, a feature heavily marketed by Apple and covered uncritically by the tech press. DeBoer notes that the technology is fundamentally broken because the initial step—converting speech to text—is already failing. He points out that the media acts as an "unpaid marketing arm" for these companies, creating a feedback loop where the promise of the product becomes indistinguishable from the product itself. This dynamic mirrors the historical pattern of the "Great Stagnation," where the narrative of exponential growth persists despite a half-century of actual technological plateau. Just as the early optimism of the internet era masked a period of incrementalism, the current AI boom is built on a similar suspension of disbelief.
Critics might argue that dismissing current capabilities as "broken" ignores the rapid pace of iterative improvement, where today's errors are merely the growing pains of tomorrow's ubiquity. However, deBoer's point is that the hype has outstripped the reality, creating a dangerous disconnect.
The Economics of Hype
The core of deBoer's critique shifts from user experience to the structural incentives of the media and financial industries. He posits that tech journalism has a "permanent slippage between what's real and what's promised," a feature rather than a bug of the current business model. In an era of clickbait and ad-revenue dependence, there is no financial incentive to report that a new gadget is half-broken or that a revolutionary AI is merely a sophisticated autocomplete tool.
"The boring truth about most new gadgets - that they're half-broken, wildly overpromised, and ultimately exist to raise stock prices rather than rake in revenue - is not what gets clicks. So the coverage bends toward fantasy."
This section is particularly sharp in its analysis of why the industry refuses to admit failure. DeBoer suggests that the entire economy has pinned its narrative on the idea that AI is the savior of a stagnant world. To admit that the technology is flawed is to admit that there is no other engine for growth left. The author describes this as "the tail wagging the dog," where the desperate need for technology to advance becomes the only evidence we have that it is actually advancing.
"It's not that people looked at the evidence and concluded AI will save us; it's that people looked at the stagnation everywhere else and decided it must save us."
The commentary here is compelling because it reframes the "AI winter" not as a technical failure, but as a collective psychological defense mechanism. We are not ignoring the glitches because we are stupid; we are ignoring them because acknowledging them would require confronting a bleak economic reality. As deBoer puts it, "Belief is what sells. Belief is what keeps the machine running."
The Cult of the Future
DeBoer concludes by warning that this mass suspension of disbelief is not harmless. When we treat hallucinations as creativity and errors as features, we risk deploying unreliable systems in high-stakes environments like aviation, healthcare, and warfare. The author draws a parallel to the 1990s internet boom and the 2000s mobile revolution, noting that each cycle followed the same trajectory: breathless hype, inflated expectations, and eventual disillusionment. The difference now, he argues, is that the financial and cultural stakes are so high that we are trying to skip the disillusionment phase entirely.
"We are communally manifesting belief in AI the way a cult manifests belief in the apocalypse: not because the signs are convincing, but because the alternative is too depressing to contemplate."
This is the piece's most striking insight: the danger lies not in the technology itself, but in the refusal to let it be merely a tool. By demanding that AI be a miracle, we set ourselves up for a crash that could be far more severe than previous cycles. The author warns that when reality finally arrives, the same voices that promised a sentient toaster will simply rewrite history to avoid admitting they were wrong, kicking the can down the road to the next shiny object.
"Reality always shows up eventually. And when it does, the same people who told you your toaster would soon be sentient will write long essays about what went wrong that conspicuously avoid actually admitting that they were wrong."
Bottom Line
Freddie deBoer's argument is a necessary corrective to the breathless optimism that dominates the current tech landscape, effectively linking the failure of basic voice recognition to the broader economic desperation driving the AI hype cycle. While the piece risks underestimating the genuine, albeit narrow, utility of current large language models, its strongest contribution is exposing the structural incentives that prevent honest assessment of technological readiness. Readers should watch for the inevitable moment when the "shimmer" of the next big thing fades, revealing the same janky reality that has been there all along.