I'm offering Scott Alexander a wager about AI's effects over the next three years

Freddie deBoer is done asking nicely. After years of watching otherwise sober journalists and commentators treat AI prophecy as settled fact, he's drawing a line in the sand: show me the damage, or stop telling me it's coming.

The Demand for Evidence

The argument is deceptively simple. deBoer wants present-tense proof, not future-tense forecasting. "I will take extreme claims about the consequences of artificial intelligence seriously when you can show them to me now," he writes. "I will not take claims about the consequences of AI seriously as long as they take the form of you telling me what you believe will happen in the future." The distinction matters. It separates empirical observation from what amounts to secular eschatology — a faith in technological rapture dressed up as analysis.

deBoer's frustration runs deeper than mere annoyance. He sees what he calls a "second-order meta-psychosis": an entire media ecosystem that has convinced itself skepticism is the mainstream position, when in his reading the opposite is true. Credulity dominates. Hostility greets anyone who suggests the world five years from now might look largely like the world today.

"Despite relentless reference to strawman leftist skeptics who are never quoted or named, the number of people in the media who are predicting an imminent and irrevocable fissure in human history vastly outnumber anyone expressing even moderate skepticism."

The names deBoer cites are familiar to anyone who follows the political commentary beat: Matt Yglesias, Ezra Klein, Ross Douthat, Derek Thompson. These are not fringe figures. They are institutional voices, and deBoer finds them all swept up in what he describes as mass hysteria. Even writers attempting distance from the craze, he argues, still carry its assumptions in their bones.

"If you removed otherwise-rational people from this schizoid petri dish of large-language-model hysteria we're living in right now, those regular, non-brainrotted people would helpfully inform us that we've all lost our mind, and we're living in mass hallucination."

The Anatomy of False Skepticism

What makes deBoer's critique sharper than a standard "AI hype is overblown" take is his insistence that most so-called skeptical positions aren't skeptical at all. They're just a softer variety of the same belief. Someone who says, "Sure, jobs won't exist in five years, but at least we won't all be in VR fantasy generators yet" isn't skeptical — they've merely lowered the bar for what counts as revolutionary.

True skepticism, deBoer argues, looks like this: "As with so many predictions of the future in the past, such as the wild predictions made by esteemed scientists concerning the Human Genome Project, predictions about artificial intelligence today are irresponsible, sensationalistic, and very unlikely to come true." That is the baseline. Tomorrow resembling today is always the safest assumption in history.

The burden of proof, he notes, has been inverted. "People making extraordinary claims are always the ones who face an extraordinary burden of proof," yet the commentators he names treat AGI arrival and post-work economies as default expectations, framing doubters as denialists who need to catch up.

A Wager on the Economy

Rather than keep rehashing arguments he finds exhausted, deBoer puts money — or the promise of it — behind his position. He's offering a $1,000 wager to Scott Alexander, the psychiatrist and blogger behind Astral Codex Ten and a well-known AI enthusiast, that the American economy will remain essentially normal through February 2029.

The bet is structured around concrete, measurable indicators: unemployment rates, labor force participation, gross domestic product growth, productivity figures, stock market valuations, corporate profit margins, wage levels for knowledge workers, wealth inequality metrics. If even one of roughly twenty conditions is violated, Alexander wins. If all hold, deBoer wins.

The thresholds are deliberately generous. An unemployment rate of 18 percent would still count as "normal" — well above the roughly 15 percent peak during the pandemic's worst months. GDP could swing 30 percent down or 35 percent up. S&P 500 valuations could drop 60 percent or rise 225 percent. These are not tight tolerances. They're designed to capture anything short of genuine civilizational disruption.
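The logic of the wager is easy to make concrete. The sketch below is purely illustrative: it encodes the three example bands cited above (unemployment, GDP swing, S&P 500 swing) as a dictionary of "normal" ranges; the actual bet reportedly contains roughly twenty such conditions, and the indicator names here are invented for the example.

```python
# Illustrative sketch of the bet's structure: each indicator gets a "normal"
# band, and the bet resolves for Alexander if ANY band is breached.
# Only the three bands cited in the text are encoded; the real wager
# has roughly twenty conditions, and these key names are hypothetical.

NORMAL_BANDS = {
    "unemployment_rate_pct": (0.0, 18.0),   # up to 18% still counts as "normal"
    "gdp_change_pct": (-30.0, 35.0),        # cumulative swing vs. today
    "sp500_change_pct": (-60.0, 225.0),     # valuation swing vs. today
}

def bet_outcome(observed: dict) -> str:
    """Return 'Alexander wins' if any indicator leaves its band, else 'deBoer wins'."""
    for name, (lo, hi) in NORMAL_BANDS.items():
        if not (lo <= observed[name] <= hi):
            return "Alexander wins"  # a single breach is enough
    return "deBoer wins"

# Example: a severe but not civilization-altering downturn stays "normal".
print(bet_outcome({"unemployment_rate_pct": 12.0,
                   "gdp_change_pct": -8.0,
                   "sp500_change_pct": -35.0}))  # → deBoer wins
```

The design choice worth noticing is the disjunction: Alexander needs only one breach across the whole basket, which is what makes the generous individual bands defensible.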

"If AI is truly about to revolutionize everything the way proponents claim, we should see massive economic disruption: widespread job losses, productivity explosions, collapsing wages in knowledge work, extreme wealth concentration." By anchoring the bet to published Bureau of Labor Statistics data rather than fuzzy concepts like artificial general intelligence or "the singularity," deBoer is trying to build a trap that can't be escaped through definitional gymnastics.

What the Bet Reveals

The wager's real interest isn't whether deBoer or Alexander collects. It's what the structure exposes about the AI debate itself. Enthusiasts have operated in a space where claims can be made and revised endlessly, where missed predictions are reframed rather than falsified. deBoer is trying to drag the conversation into a space where it can be measured.

He acknowledges the bet's clumsiness and invites counterproposals. He also admits he doesn't have the full amount available for escrow right now, which is a notable caveat for a wager meant to demonstrate conviction.

Critics might note that three years is a short window for evaluating technological transformation. The internet did not remake commerce in its first thirty-six months. Critics might also point out that economic indicators are lagging measures — by the time unemployment spikes or productivity surges, the disruption is already underway. And the bet's generous tolerances, while intended to protect against non-AI economic shocks, are so wide that they might count a genuinely disruptive period as "normal" anyway.

A broader counterargument, one that the work of Gary Marcus has repeatedly pressed, is that the empirical evidence question and the economic disruption question are related but not identical. AI might produce measurable real-world failures and hallucinations while still gradually reshaping labor markets in ways that don't show up as dramatic statistical breaks.

Bottom Line

deBoer is right that the media's burden of proof has inverted badly and that most "skeptical" AI commentary is just enthusiasm with a shrug. But betting that the economy won't look obviously different in three years proves less than he thinks — transformative technologies rarely announce themselves in quarterly labor statistics on schedule.

Sources

I'm offering Scott Alexander a wager about AI's effects over the next three years

by Freddie deBoer
