In a landscape often dominated by cautious hedging, a new forecast from leading analysts suggests that artificial general intelligence could arrive by 2027, fundamentally reshaping the global economy within a single business cycle. Bentham's Bulldog defends this startling timeline against a wave of internet skepticism, arguing that dismissing exponential growth trends is a far greater error than trusting a model that feels uncomfortably fast.
The Arithmetic of Acceleration
The core of the argument rests on a specific, measurable metric: the maximum length of tasks an AI can perform. Bentham's Bulldog explains that this capability has been doubling roughly every seven months. "If you plot out a doubling every seven months, then over the course of 35 months (about three years) you get five doublings," they write. Five doublings means a 32-fold increase in task length. The implication is stark: within three years, systems capable of hours-long tasks could evolve into entities managing projects that take months or even years.
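The extrapolation is simple enough to sketch directly. A minimal illustration of the cited doubling trend, where the seven-month period is taken from the piece and the one-hour starting horizon is an assumed parameter:

```python
DOUBLING_MONTHS = 7  # task-horizon doubling period cited in the piece

def extrapolate_task_horizon(start_hours: float, months_ahead: int) -> float:
    """Project the maximum task length (in hours) after a number of months,
    assuming it keeps doubling every DOUBLING_MONTHS months."""
    doublings = months_ahead / DOUBLING_MONTHS
    return start_hours * 2 ** doublings

# Five doublings over 35 months: a 1-hour task horizon becomes 32 hours.
print(extrapolate_task_horizon(1.0, 35))  # → 32.0
```

The same function makes the compounding visible at any horizon; whether the trend actually holds that long is exactly the point under dispute.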
This is not a claim based on vague intuition but on the compounding nature of current trends. The author notes that algorithmic efficiency improves by a factor of three per year, while available compute grows by a factor of 4.5 annually. When these forces combine, the trajectory shifts from incremental improvement to radical disruption. As Bentham's Bulldog puts it, "Whenever an increase is fast and exponential, it shoots from pretty good to unbelievably insane rather quickly." This framing forces the reader to confront the mathematics of Moore's law applied to cognitive tasks, a concept that echoes the rapid scaling seen in the Good Judgment Project's superforecaster models, where trend extrapolation often outperforms complex causal theories.
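As a rough illustration of how the two growth rates compound, assuming, as a simplification, that algorithmic and hardware gains multiply into a single "effective compute" figure:

```python
ALGO_GAIN_PER_YEAR = 3.0     # algorithmic efficiency multiplier cited in the piece
COMPUTE_GAIN_PER_YEAR = 4.5  # compute growth multiplier cited in the piece

def effective_compute_multiplier(years: float) -> float:
    """Combined growth in 'effective compute' when algorithmic and hardware
    gains compound multiplicatively (a common simplifying assumption)."""
    return (ALGO_GAIN_PER_YEAR * COMPUTE_GAIN_PER_YEAR) ** years

print(effective_compute_multiplier(1))  # 13.5x in a single year
print(effective_compute_multiplier(3))  # roughly 2,460x over three years
```

A 13.5-fold annual gain turns into more than three orders of magnitude over three years, which is the sense in which "fast and exponential" leaves incrementalism behind.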
Critics might argue that physical constraints or energy limits will inevitably break this exponential curve before 2027. However, the author counters that assuming a hard stop without evidence is less rational than following the data until it breaks.
The Trap of Semantic Nitpicking
A significant portion of the commentary is dedicated to dismantling the criticism that the "AI 2027" label is misleading because the authors' median forecast is actually 2028. Bentham's Bulldog draws a sharp parallel to political slogans, asking if a group calling for police abolition is dishonest if they later admit they only want reform. "If you don't want to abolish the police, don't have your slogan be 'abolish the police,'" they note, before rejecting this analogy for the AI forecast.
The distinction lies in the probability distribution. The authors explicitly state that 2027 is the single most likely scenario, or the "mode," even if it doesn't cross the 50% threshold. Bentham's Bulldog highlights the transparency of the forecast, quoting the team's own footnote: "We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat longer than what this scenario depicts. This scenario depicts something like our mode." The author argues that hiding behind a median to avoid the headline-grabbing mode would be the actual deception.
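The mode-versus-median distinction is easy to see with a toy distribution. The numbers below are purely hypothetical and are not the AI 2027 team's actual probabilities:

```python
# Hypothetical illustrative forecast over AGI arrival years; probabilities sum to 1.
forecast = {2026: 0.10, 2027: 0.30, 2028: 0.25, 2029: 0.20, 2030: 0.15}

# Mode: the single most likely year.
mode_year = max(forecast, key=forecast.get)

# Median: the first year at which cumulative probability reaches 50%.
cumulative = 0.0
for year in sorted(forecast):
    cumulative += forecast[year]
    if cumulative >= 0.5:
        median_year = year
        break

print(mode_year, median_year)  # → 2027 2028
```

Here 2027 is the most likely single year even though the 50% mark is only crossed in 2028, which is precisely the configuration the footnote describes.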
The present level of snark, scorn, and dismissal is not warranted when some of the best AI forecasters take the time to make what is, by all accounts, quite a good model.
The commentary also addresses personal attacks on the authors, specifically targeting Daniel Kokotajlo's past views on the self-sampling assumption and Eli Lifland's choice of publishing platform. Bentham's Bulldog treats these as distractions, noting that while the mathematical errors in the initial draft were corrected, the core logic remains robust. The focus on the authors' personal history rather than the model's output is framed as a failure of intellectual rigor.
Models in the Fog of Uncertainty
Perhaps the most vital section addresses the critique that speculative models are useless without empirical validation. A physicist cited in the piece argues that models need strong conceptual justifications before guiding decisions. Bentham's Bulldog pushes back, asserting that in high-uncertainty domains, trend-tracking is the best tool available. "Speculative forecasting shouldn't be held to the standards of physics, because physics is a vastly more certain enterprise," they argue.
The author illustrates this with a practical example: career planning. If an individual believes there is a greater than 50% chance their profession will be automated by the time they graduate, ignoring that risk is irrational. The model doesn't need to be perfect; it just needs to be the "best game in town." This aligns with the philosophy of superforecasters who rely on "hazy average[s] of okay models" rather than waiting for impossible certainty. The argument is bolstered by a comparison to the Rethink Priorities welfare range model, which produced extreme estimates for animal suffering that were initially dismissed but later found to be consistent with independent arguments about cognitive simplicity and valence.
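The career-planning logic is ordinary expected-value reasoning, sketched here with made-up payoffs; none of these numbers come from the original post:

```python
def expected_payoff(p_automated: float, payoff_if_safe: float,
                    payoff_if_automated: float) -> float:
    """Expected payoff of a career plan given the probability that the
    profession is automated (illustrative numbers only)."""
    return (1 - p_automated) * payoff_if_safe + p_automated * payoff_if_automated

# Staying on an at-risk path vs. hedging into a more automation-robust one,
# at a believed 60% automation probability:
stay = expected_payoff(0.6, payoff_if_safe=100, payoff_if_automated=10)
hedge = expected_payoff(0.6, payoff_if_safe=70, payoff_if_automated=60)
print(stay, hedge)  # 46.0 vs 64.0: hedging wins at this risk level
```

The model feeding in the probability can be hazy; the decision rule only needs it to beat the implicit alternative of assuming the risk is zero.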
Bottom Line
Bentham's Bulldog successfully reframes the debate from a critique of specific dates to a defense of exponential thinking, arguing that the discomfort we feel with the 2027 timeline is a psychological reaction to rapid change rather than a flaw in the data. The piece's greatest strength is its insistence that updating beliefs in light of new evidence—like the authors adjusting their probabilities—is a sign of a healthy model, not a broken one. However, the argument remains vulnerable to the possibility of unforeseen bottlenecks in hardware or energy that could abruptly halt the doubling trend. For the busy professional, the takeaway is clear: the window to influence the trajectory of artificial intelligence is closing faster than the consensus admits, and the cost of waiting for perfect certainty may be the loss of agency entirely.