Freddie deBoer is done asking nicely. After years of watching otherwise sober journalists and commentators treat AI prophecy as settled fact, he's drawing a line in the sand: show me the damage, or stop telling me it's coming.
The Demand for Evidence
The argument is deceptively simple. deBoer wants present-tense proof, not future-tense forecasting. "I will take extreme claims about the consequences of artificial intelligence seriously when you can show them to me now," he writes. "I will not take claims about the consequences of AI seriously as long as they take the form of you telling me what you believe will happen in the future." The distinction matters. It separates empirical observation from what amounts to secular eschatology — a faith in technological rapture dressed up as analysis.
deBoer's frustration runs deeper than mere annoyance. He sees what he calls a "second-order meta-psychosis": an entire media ecosystem that has convinced itself skepticism is the mainstream position, when in his reading the opposite is true. Credulity dominates. Hostility greets anyone who suggests the world five years from now might look largely like the world today.
"Despite relentless reference to strawman leftist skeptics who are never quoted or named, the number of people in the media who are predicting an imminent and irrevocable fissure in human history vastly outnumber anyone expressing even moderate skepticism."
The names deBoer cites are familiar to anyone who follows the political commentary beat: Matt Yglesias, Ezra Klein, Ross Douthat, Derek Thompson. These are not fringe figures. They are institutional voices, and deBoer finds them all swept up in what he describes as mass hysteria. Even writers attempting distance from the craze, he argues, still carry its assumptions in their bones.
"If you removed otherwise-rational people from this schizoid petri dish of large-language-model hysteria we're living in right now, those regular, non-brainrotted people would helpfully inform us that we've all lost our mind, and we're living in mass hallucination."
The Anatomy of False Skepticism
What makes deBoer's critique sharper than a standard "AI hype is overblown" take is his insistence that most so-called skeptical positions aren't skeptical at all. They're just a softer variety of the same belief. Someone who says, "Sure, jobs won't exist in five years, but at least we won't all be in VR fantasy generators yet" isn't skeptical — they've merely lowered the bar for what counts as revolutionary.
True skepticism, deBoer argues, looks like this: "As with so many predictions of the future in the past, such as the wild predictions made by esteemed scientists concerning the Human Genome Project, predictions about artificial intelligence today are irresponsible, sensationalistic, and very unlikely to come true." That is the baseline. Tomorrow resembling today is always the safest assumption in history.
The burden of proof, he notes, has been inverted. "People making extraordinary claims are always the ones who face an extraordinary burden of proof," yet the commentators he names treat AGI arrival and post-work economies as default expectations, framing doubters as denialists who need to catch up.
A Wager on the Economy
Rather than keep rehashing arguments he finds exhausted, deBoer puts money — or the promise of it — behind his position. He's offering a $1,000 wager to Scott Alexander, the psychiatrist and blogger behind Astral Codex Ten and a well-known AI enthusiast, that the American economy will remain essentially normal through February 2029.
The bet is structured around concrete, measurable indicators: unemployment rates, labor force participation, gross domestic product growth, productivity figures, stock market valuations, corporate profit margins, wage levels for knowledge workers, wealth inequality metrics. If even one of roughly twenty conditions is violated, Alexander wins. If all hold, deBoer wins.
The thresholds are deliberately generous. An unemployment rate of 18 percent would still count as "normal" — well above the roughly 15 percent peak during the pandemic's worst months. GDP could swing 30 percent down or 35 percent up. S&P 500 valuations could drop 60 percent or rise 225 percent. These are not tight tolerances. They're designed to capture anything short of genuine civilizational disruption.
"If AI is truly about to revolutionize everything the way proponents claim, we should see massive economic disruption: widespread job losses, productivity explosions, collapsing wages in knowledge work, extreme wealth concentration." By anchoring the bet to published Bureau of Labor Statistics data rather than fuzzy concepts like artificial general intelligence or "the singularity," deBoer is trying to build a trap that can't be escaped through definitional gymnastics.
What the Bet Reveals
The wager's real interest isn't whether deBoer or Alexander collects. It's what the structure exposes about the AI debate itself. Enthusiasts have operated in a space where claims can be made and revised endlessly, where missed predictions are reframed rather than falsified. deBoer is trying to drag the conversation into a space where it can be measured.
He acknowledges the bet's clumsiness and invites counterproposals. He also admits he doesn't have the full amount available for escrow right now, which is a notable caveat for a wager meant to demonstrate conviction.
Critics might note that three years is a short window for evaluating technological transformation; the internet did not remake commerce in its first thirty-six months. Economic indicators, moreover, are lagging measures — by the time unemployment spikes or productivity surges, the disruption is already underway. And the bet's generous tolerances, while intended to protect against non-AI economic shocks, are so wide that they might count a genuinely disruptive period as "normal" anyway.
A broader counterargument, one that the work of Gary Marcus has repeatedly pressed, is that the empirical evidence question and the economic disruption question are related but not identical. AI might produce measurable real-world failures and hallucinations while still gradually reshaping labor markets in ways that don't show up as dramatic statistical breaks.
Bottom Line
deBoer is right that the media's burden of proof has inverted badly and that most "skeptical" AI commentary is just enthusiasm with a shrug. But betting that the economy won't look obviously different in three years proves less than he thinks — transformative technologies rarely announce themselves in quarterly labor statistics on schedule.