Things that don't work and the tail wagging the dog

Freddie deBoer cuts through the noise of the modern tech cycle with a diagnosis that feels uncomfortably personal: we are collectively hallucinating technological progress because the alternative is too economically and psychologically devastating to accept. By anchoring a sweeping critique of artificial intelligence and media hype in the mundane frustration of a father trying to dictate a text message while holding a baby, deBoer exposes a dangerous gap between the shimmering promises of the future and the janky reality of the present. This is not just a complaint about bad software; it is an analysis of why we refuse to admit that the technology isn't working, even when it turns "bring wipes" into "brain wives."

The Illusion of Maturity

DeBoer begins by dismantling the assumption that basic tools like voice-to-text have reached a state of reliability. He describes testing the feature across expensive flagship devices from Samsung, Google, and Apple, only to find the technology "janky, inconsistent, and a constant source of absurd errors." The author argues that society has made a quiet deal to pretend these tools function correctly, treating glitches as charming quirks rather than systemic failures. This framing is potent because it starts with the universal experience of the user, making the subsequent critique of the industry feel earned rather than abstract.

"We've all quietly agreed to pretend this stuff works. But it doesn't work. And if we can't get a phone to correctly hear 'bring wipes' instead of 'brain wives,' then maybe, just maybe, we're not ready for all sorts of other incredible technological feats that are perpetually just around the corner."

The argument extends to the realm of real-time machine translation, a feature heavily marketed by Apple and covered uncritically by the tech press. DeBoer notes that the technology is fundamentally broken because the initial step—converting speech to text—is already failing. He points out that the media acts as an "unpaid marketing arm" for these companies, creating a feedback loop where the promise of the product becomes indistinguishable from the product itself. This dynamic mirrors the historical pattern of the "Great Stagnation," where the narrative of exponential growth persists despite a half-century of actual technological plateau. Just as the early optimism of the internet era masked a period of incrementalism, the current AI boom is built on a similar suspension of disbelief.

Critics might argue that dismissing current capabilities as "broken" ignores the rapid pace of iterative improvement, where today's errors are merely the growing pains of tomorrow's ubiquity. However, deBoer's point is that the pace of the hype has outstripped the pace of the reality, creating a dangerous disconnect.

The Economics of Hype

The core of deBoer's critique shifts from user experience to the structural incentives of the media and financial industries. He posits that tech journalism operates with a "permanent slippage between what's real and what's promised," a feature rather than a bug of the current business model. In an era of clickbait and ad-revenue dependence, there is no financial incentive to report that a new gadget is half-broken or that a revolutionary AI is merely a sophisticated autocomplete tool.

"The boring truth about most new gadgets - that they're half-broken, wildly overpromised, and ultimately exist to raise stock prices rather than rake in revenue - is not what gets clicks. So the coverage bends toward fantasy."

This section is particularly sharp in its analysis of why the industry refuses to admit failure. DeBoer suggests that the entire economy has pinned its narrative on the idea that AI is the savior of a stagnant world. To admit that the technology is flawed is to admit that there is no other engine for growth left. The author describes this as "the tail wagging the dog," where the desperate need for technology to advance becomes the only evidence we have that it is actually advancing.

"It's not that people looked at the evidence and concluded AI will save us; it's that people looked at the stagnation everywhere else and decided it must save us."

The commentary here is compelling because it reframes the "AI winter" not as a technical failure, but as a collective psychological defense mechanism. We are not ignoring the glitches because we are stupid; we are ignoring them because acknowledging them would require confronting a bleak economic reality. As deBoer puts it, "Belief is what sells. Belief is what keeps the machine running."

The Cult of the Future

DeBoer concludes by warning that this mass suspension of disbelief is not harmless. When we treat hallucinations as creativity and errors as features, we risk deploying unreliable systems in high-stakes environments like aviation, healthcare, and warfare. The author draws a parallel to the 1990s internet boom and the 2000s mobile revolution, noting that each cycle followed the same trajectory: breathless hype, inflated expectations, and eventual disillusionment. The difference now, he argues, is that the financial and cultural stakes are so high that we are trying to skip the disillusionment phase entirely.

"We are communally manifesting belief in AI the way a cult manifests belief in the apocalypse: not because the signs are convincing, but because the alternative is too depressing to contemplate."

This is the piece's most striking insight: the danger lies not in the technology itself, but in the refusal to let it be merely a tool. By demanding that AI be a miracle, we set ourselves up for a crash that could be far more severe than previous cycles. The author warns that when reality finally arrives, the same voices that promised a sentient toaster will simply rewrite history to avoid admitting they were wrong, kicking the can down the road to the next shiny object.

"Reality always shows up eventually. And when it does, the same people who told you your toaster would soon be sentient will write long essays about what went wrong that conspicuously avoid actually admitting that they were wrong."

Bottom Line

Freddie deBoer's argument is a necessary corrective to the breathless optimism that dominates the current tech landscape, effectively linking the failure of basic voice recognition to the broader economic desperation driving the AI hype cycle. While the piece risks underestimating the genuine, albeit narrow, utility of current large language models, its strongest contribution is exposing the structural incentives that prevent honest assessment of technological readiness. Readers should watch for the inevitable moment when the "shimmer" of the next big thing fades, revealing the same janky reality that has been there all along.

"We are communally manifesting belief in AI the way a cult manifests belief in the apocalypse: not because the signs are convincing, but because the alternative is too depressing to contemplate."

Bottom Line

Freddie deBoer's argument is a necessary corrective to the breathless optimism that dominates the current tech landscape, effectively linking the failure of basic voice recognition to the broader economic desperation driving the AI hype cycle. While the piece risks underestimating the genuine, albeit narrow, utility of current large language models, its strongest contribution is exposing the structural incentives that prevent honest assessment of technological readiness. Readers should watch for the inevitable moment when the "shimmer" of the next big thing fades, revealing the same janky reality that has been there all along.

Deep Dives

Explore these related deep dives:

  • Speech recognition

    The article's central complaint is about voice-to-text technology failing despite decades of development. This Wikipedia article covers the technical history, challenges, and limitations that explain why this seemingly 'mature' technology still produces errors like 'brain jelly baby'.

  • The Great Stagnation

    The author explicitly references living in a 'half-century-plus long period of technological stagnation'. This is Tyler Cowen's thesis about slowing innovation, which provides the intellectual framework underlying the article's argument about overhyped technology.

Sources

Things that don't work and the tail wagging the dog

by Freddie deBoer

I have a piece out for the Times of London about what Mamdani should do now that he's won. Check it out.

As a man with a seven month old baby, I spend a lot of time typing with one hand, or no hands, while holding a bottle clutched between my cheek and my shoulder in just exactly the way we used to do with old corded telephones. Babies strain a lot of things - and, yes, he is still severely straining our capacity to operate without sleep - and one thing my Junho has definitely strained is my ability to do various basic life tasks with my hands full.

Which means I rely heavily on voice-to-text, or speaking into my phone and having it turn my speech into writing - usually for text messages, occasionally for emails, and too often for argumentative Substack Notes. Now voice-to-text is not a technology that’s treated as some sort of vanguard tech, early adopter tech; in fact it’s the kind of affordance that most of us just casually assume is mature and reliable. And yet! I have tested this proposition across multiple empires of consumer electronics - a Samsung Galaxy, a Google Pixel, and my wife’s iPhone, all of them expensive flagship phones - and I can confidently report that in all cases the voice-to-text tech is janky, inconsistent, and a constant source of absurd errors. My friends and family are used to receiving texts like “buy more brain jelly baby later?” or “goat not sleep now why” - both of those are real examples, for the record - and they just nod along, like, “Poor thing, just gets no sleep with his baby, he’s losing his mind.” And, you know, it’s fine. They are patient with me and I am apologetic to them. That’s kind of the societal deal we’ve made with so many technologies: we’ve all quietly agreed to pretend this stuff works. But it doesn’t work. And if we can’t get a phone to correctly hear “bring wipes” instead of “brain wives,” then maybe, just maybe, we’re not ready for all sorts of other incredible technological feats that are perpetually just around the corner.

Take recent assurances that we have reached the magical age of real-time machine translation - you talk, “AI” (algorithms) translate what you say for the other person, they speak, they do the same for you. ...