AI 2027 Uber alles

In a landscape often dominated by cautious hedging, a new forecast from leading analysts suggests that artificial general intelligence could arrive by 2027, fundamentally reshaping the global economy within a single business cycle. Bentham's Bulldog defends this startling timeline against a wave of internet skepticism, arguing that dismissing exponential growth trends is a far greater error than trusting a model that feels uncomfortably fast.

The Arithmetic of Acceleration

The core of the argument rests on a specific, measurable metric: the maximum length of tasks an AI can perform. Bentham's Bulldog explains that this capability has been doubling roughly every seven months. "If you plot out a doubling every seven months, then over the course of 35 months (about three years) you get five doublings," they write. The implication is stark: within three years, systems capable of hours-long tasks could evolve into entities managing projects that take months or even years.
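
The arithmetic is easy to check. Below is a minimal sketch of the extrapolation in Python; the four-hour starting task length is a hypothetical placeholder, not a figure from the forecast.

    # Back-of-the-envelope version of the doubling argument.
    # Assumption (not from the source): AI reliably handles ~4-hour tasks today.
    DOUBLING_PERIOD_MONTHS = 7
    HORIZON_MONTHS = 35           # "about three years" in the post
    START_TASK_HOURS = 4          # hypothetical starting point

    doublings = HORIZON_MONTHS / DOUBLING_PERIOD_MONTHS  # 5.0
    growth_factor = 2 ** doublings                       # 32x
    final_hours = START_TASK_HOURS * growth_factor       # 128 hours

    print(f"{doublings:.0f} doublings -> {growth_factor:.0f}x longer tasks")
    print(f"{START_TASK_HOURS}-hour tasks become ~{final_hours / 24:.1f}-day tasks")

Under these assumptions, five doublings turn an afternoon-length task horizon into a multi-day one, which matches the post's qualitative claim.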

This is not a claim based on vague intuition but on the compounding nature of current trends. The author notes that algorithmic efficiency improves by a factor of roughly three each year, while available compute grows by a factor of 4.5 annually. When these forces combine, the trajectory shifts from incremental improvement to radical disruption. As Bentham's Bulldog puts it, "Whenever an increase is fast and exponential, it shoots from pretty good to unbelievably insane rather quickly." This framing forces the reader to confront the mathematics of Moore's law applied to cognitive tasks, and it echoes the superforecasting practice, associated with the Good Judgment Project, of trusting trend extrapolation over complex causal theories.
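
To see why combining the two trends matters, here is a minimal sketch under a simplifying assumption of ours, not the author's: that algorithmic and hardware gains multiply into a single effective-compute growth factor.

    # Compounding the two trends the post cites.
    # Simplifying assumption (ours): algorithmic and hardware gains multiply
    # into a single "effective compute" growth factor.
    ALGO_GAIN_PER_YEAR = 3.0      # algorithmic efficiency: ~3x per year
    COMPUTE_GAIN_PER_YEAR = 4.5   # available compute: 4.5x per year

    effective_per_year = ALGO_GAIN_PER_YEAR * COMPUTE_GAIN_PER_YEAR  # 13.5x
    over_three_years = effective_per_year ** 3                       # ~2,460x

    print(f"Effective compute growth: {effective_per_year}x per year")
    print(f"Compounded over three years: ~{over_three_years:,.0f}x")

On this simple multiplicative reading, effective compute grows by more than three orders of magnitude over a three-year window.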

Critics might argue that physical constraints or energy limits will inevitably break this exponential curve before 2027. However, the author counters that assuming a hard stop without evidence is less rational than following the data until it breaks.

The Trap of Semantic Nitpicking

A significant portion of the commentary is dedicated to dismantling the criticism that the "AI 2027" label is misleading because the authors' median forecast is actually 2028. Bentham's Bulldog draws a sharp parallel to political slogans, asking whether a group calling for police abolition is dishonest if they later admit they only want reform. "If you don't want to abolish the police, don't have your slogan be 'abolish the police,'" they note, before explaining why the analogy fails in the AI case.

The distinction lies in the probability distribution. The authors explicitly state that 2027 is the single most likely scenario, or the "mode," even if it doesn't cross the 50% threshold. Bentham's Bulldog highlights the transparency of the forecast, quoting the team's own footnote: "We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat longer than what this scenario depicts. This scenario depicts something like our mode." The author argues that hiding behind a median to avoid the headline-grabbing mode would be the actual deception.
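
The mode-versus-median distinction is easy to make concrete. The sketch below uses invented probabilities, not the authors' actual distribution, to show how a forecast can peak at 2027 while its median falls in 2028.

    # Toy distribution showing how 2027 can be the mode while 2028 is the median.
    # These probabilities are invented for illustration, not the authors' numbers.
    p = {2026: 0.10, 2027: 0.30, 2028: 0.20, 2029: 0.15, 2030: 0.25}

    mode = max(p, key=p.get)  # single most likely year

    # Median: first year at which cumulative probability reaches 50%.
    cumulative, median = 0.0, None
    for year in sorted(p):
        cumulative += p[year]
        if median is None and cumulative >= 0.5:
            median = year

    print(f"mode = {mode}, median = {median}")  # mode = 2027, median = 2028

Nothing in the arithmetic is deceptive; the two summary statistics simply answer different questions about the same distribution.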

The present level of snark, scorn, and dismissal is not warranted when some of the best AI forecasters take the time to make what is, by all accounts, quite a good model.

The commentary also addresses personal attacks on the authors, specifically targeting Daniel Kokotajlo's past views on the self-sampling assumption and Eli Lifland's choice of publishing platform. Bentham's Bulldog treats these as distractions, noting that while the mathematical errors in the initial draft were corrected, the core logic remains robust. The focus on the authors' personal history rather than the model's output is framed as a failure of intellectual rigor.

Models in the Fog of Uncertainty

Perhaps the most vital section addresses the critique that speculative models are useless without empirical validation. A physicist cited in the piece argues that models need strong conceptual justifications before guiding decisions. Bentham's Bulldog pushes back, asserting that in high-uncertainty domains, trend-tracking is the best tool available. "Speculative forecasting shouldn't be held to the standards of physics, because physics is a vastly more certain enterprise," they argue.

The author illustrates this with a practical example: career planning. If an individual believes there is a greater than 50% chance their profession will be automated by the time they graduate, ignoring that risk is irrational. The model doesn't need to be perfect; it just needs to be the "best game in town." This aligns with the philosophy of superforecasters who rely on "hazy average[s] of okay models" rather than waiting for impossible certainty. The argument is bolstered by a comparison to the Rethink Priorities welfare range model, which produced extreme estimates for animal suffering that were initially dismissed but later found to be consistent with independent arguments about cognitive simplicity and valence.
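
The logic can be made concrete with a toy expected-value comparison; every number below is invented, and the point is only that a rough model beats no model.

    # Toy expected-value framing of the career-planning example.
    # Every number here is hypothetical; the point is the decision structure.
    p_automated = 0.6           # believed chance the field is automated by graduation

    value_if_safe = 1.0         # normalized payoff if the career survives
    value_if_automated = 0.1    # residual payoff if it does not
    value_hedged_path = 0.7     # payoff of a more automation-robust alternative

    ev_ignore_risk = (1 - p_automated) * value_if_safe + p_automated * value_if_automated
    ev_hedge = value_hedged_path  # assumed insensitive to automation, for simplicity

    print(f"EV(ignore the risk) = {ev_ignore_risk:.2f}")  # 0.46
    print(f"EV(hedge)           = {ev_hedge:.2f}")        # 0.70

With these payoffs, ignoring the risk only wins if the automation probability drops below roughly one in three, so even a hazy model is enough to change the decision.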

Bottom Line

Bentham's Bulldog successfully reframes the debate from a critique of specific dates to a defense of exponential thinking, arguing that the discomfort we feel with the 2027 timeline is a psychological reaction to rapid change rather than a flaw in the data. The piece's greatest strength is its insistence that updating beliefs in light of new evidence—like the authors adjusting their probabilities—is a sign of a healthy model, not a broken one. However, the argument remains vulnerable to the possibility of unforeseen bottlenecks in hardware or energy that could abruptly halt the doubling trend. For the busy professional, the takeaway is clear: the window to influence the trajectory of artificial intelligence is closing faster than the consensus admits, and the cost of waiting for perfect certainty may be the loss of agency entirely.

Deep Dives

Explore these related deep dives:

  • The Good Judgment Project

    The article describes Eli Lifland as 'one of the world's top forecasters' and mentions superforecaster techniques like trend-tracking. Understanding the methodology and research behind superforecasting provides crucial context for evaluating the AI 2027 predictions.

  • Anthropic Bias

    The article jokes about Daniel Kokotajlo endorsing the 'self-sampling assumption' in 2017. This is a specific philosophical principle in anthropic reasoning that readers would benefit from understanding, especially given its relevance to AI safety discussions.

  • Moore's law

    The article's core argument relies on exponential growth patterns (task length doubling every seven months). Moore's law is the foundational concept for understanding such technological exponential trends and their historical accuracy and limitations.

Sources

AI 2027 Uber alles

by Bentham's Bulldog · Read full article

1 AI 2027.

AI 2027 is a model written up by Eli Lifland, one of the world’s top forecasters, Daniel Kokotajlo, who made a series of extremely accurate early predictions about AI, Scott Alexander, who I’m told has a blog or something, and various others. Its aim is to forecast how AI will go. The core claim of the forecast is that there is a reasonable probability that we will get AI that can do everything better than humans fairly soon, potentially in just a few years. The most likely scenario was AGI in 2027, while the median scenario was AGI in 2028.

Now, maybe that sounds outlandish. But remember, there has been a consistent trend where AI’s maximum task length has been doubling roughly every seven months. What does that mean? Take tasks that it takes humans some number of hours to perform on average. Then ask: can AIs do that task? AIs have been able to consistently do longer and longer tasks, so that about every seven months, AIs get the ability to do most tasks that take humans twice as long.

If you plot out a doubling every seven months, then over the course of 35 months (about three years) you get five doublings. Maximum task length increases by a factor of 32, and now, instead of consistently being able to do things that take a few hours, AI will be able to do things that take days. Double that a few more times, and you get the AI being able to do things that take months and then years. By that time, the AIs can do more programming of AIs, and will have far surpassed us.

Now, the AI 2027 people have a more sophisticated model. But hopefully the argument I’ve just given illustrates that something in the vicinity of AGI extremely soon is reasonable, even though the exact details are debatable. AI 2027 seems like a pretty decent forecast by some smart people and ought to be taken seriously.

Lots of people on the internet have taken to dunking on AI 2027. Some of these criticisms are reasonable. In their initial forecast, they made some subtle mathematical errors which, when corrected for, push their timeline back a bit. Another reasonable criticism is that Daniel Kokotajlo is #problematic because as of 2017, he endorsed the (VERY FAILED) self-sampling assumption and has YET to make a public ...