The next decades will plausibly be completely insane

Most commentary on artificial intelligence fixates on whether machines will become sentient or how quickly they will take our jobs. Bentham's Bulldog, however, argues we are missing the forest for the trees: the real danger isn't just that AI will be smart, but that the world will be utterly unprepared for the sheer velocity of the change. The piece posits a scenario where a century or more of technological evolution is compressed into a single decade, creating a reality that is "completely insane" by any historical standard.

The Mathematics of the Explosion

Bentham's Bulldog anchors the argument in a startling statistical reality: while human research capacity grows at a modest 5% annually, the number of AI models we can deploy is expanding 25 times per year. "The number of AI models we can run is increasing 25x per year, 500X as fast as growth in number of human researchers," the author writes. This isn't just a faster car; it's a different mode of transportation entirely. The logic here is compelling because it relies on multiple, independent growth vectors—compute power, algorithmic efficiency, and inference costs—all trending upward simultaneously. Even if one trend stalls, the others carry the momentum.
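The gap between these two growth rates is easy to underestimate, so here is a back-of-the-envelope sketch of the arithmetic (my own illustrative calculation, not code from the piece; the rates are the ones the author cites):

```python
# Illustrative comparison of the two growth rates cited in the piece:
# human research capacity grows ~5% per year, while the number of
# runnable AI models multiplies ~25x per year.

human_rate = 1.05   # 5% annual growth
ai_rate = 25.0      # 25x annual growth

# The "500X as fast" claim compares annual increments: 2400% vs 5%.
increment_ratio = (ai_rate - 1) / (human_rate - 1)
print(f"ratio of annual increments: {increment_ratio:.0f}x")  # 480x, i.e. roughly 500x

# Compounded over a decade, the gap becomes astronomical.
years = 10
print(f"human capacity after {years} years: {human_rate ** years:.2f}x")  # ~1.63x
print(f"AI model count after {years} years:  {ai_rate ** years:.1e}x")    # ~9.5e13x
```

The point of the compounding step is that even a modest-looking per-year multiplier, sustained, produces growth that no 5%-per-year baseline can remotely track.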

The author draws a parallel to the concept of the "intelligence explosion" first formalized by I. J. Good in 1965, suggesting that once AI reaches human parity, it won't stop. Instead, it will enter a feedback loop where smarter AI designs even smarter AI. "Even if current rates of AI progress slow by a factor of 100x compared to current trends, total cognitive research labour will still grow far more rapidly than before," Bentham's Bulldog asserts. This framing is crucial because it inoculates the argument against skeptics who point to a single bottleneck, like hardware limits. The author notes that even with conservative estimates, we are looking at "10 million times" growth in research capabilities over a decade.
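To put the "10 million times" figure in perspective, one can ask what sustained annual multiplier would produce it over a decade (again an illustrative sketch of my own; the piece does not spell out the exact model behind its conservative estimate):

```python
# What sustained annual growth multiplier yields 10-million-fold
# growth in a decade? (Illustrative arithmetic only.)
target = 1e7   # "10 million times" growth in research capabilities
years = 10
annual = target ** (1 / years)
print(f"required annual multiplier: {annual:.2f}x")  # ~5.01x per year
```

That required ~5x/year is well below the current 25x/year trend, which is the sense in which even the "conservative" scenario implies explosive growth.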

Critics might argue that physical constraints—energy, raw materials, and the need for physical labor to build robots—will inevitably cap this growth. The piece acknowledges this, noting that "total economic growth is significantly bottlenecked by physical labor." However, the counter-argument presented is that cognitive AI can simulate physical processes so effectively that the need for physical iteration diminishes, or that AI can optimize the limited physical labor we have to an unprecedented degree.

Even if AI progress slowed down considerably, we'd be on track for explosive growth of a kind never before seen in history.

The Compression of History

The most chilling aspect of the coverage is the timeline. Bentham's Bulldog suggests that the last century's technological leaps—nuclear weapons, bioweapons, the internet—could happen in ten years. "If the last century's technological progress had been compressed into a decade, we'd have gotten nuclear weapons, drones, biological weapons, cyber warfare, and much more in less than a decade." This reframes the risk from a distant future problem to an immediate governance crisis. The stakes are no longer just economic; they are existential.

The author highlights the specific danger of "atomically precise manufacturing," which could allow for the cheap, secret mass-production of dangerous technologies. "One deadly, autonomous, insect-sized drone for each person on Earth could fit inside a single large aircraft hangar," the text notes. This imagery is not hyperbole but a logical extrapolation of current trends. The argument implies that our current international treaties and regulatory bodies, designed for a world of incremental change, will be completely obsolete in the face of such speed.

This section leans heavily on the work of the research organization Forethought, specifically their report "Preparing for the Intelligence Explosion." Bentham's Bulldog praises their focus on non-alignment preparedness, arguing that while much attention is paid to getting AI to want what we want, we are neglecting the practicalities of surviving a world where AI can invent new threats faster than we can legislate against them. "There isn't as much talk about other kinds of AI preparedness," the author laments, calling for a massive influx of talent into this specific niche.

Critics might note that this scenario assumes a continuous, uninterrupted trajectory of progress, ignoring the possibility of sudden societal collapse or regulatory crackdowns that could halt development entirely. Yet, the author's point is that betting on a sudden stop is a dangerous gamble when the cost of being wrong is human extinction.

The Imperative for Preparedness

The commentary concludes by shifting from prediction to prescription. The core thesis is that the window for effective intervention is closing rapidly. "Planning for such an insane world seems important," Bentham's Bulldog writes, urging a reallocation of resources toward organizations that are thinking through these non-alignment challenges. The author suggests that the most effective immediate action is to fund research into interpretability, defensive technologies, and international treaties before the explosion occurs.

The piece ends with a sobering assessment of the odds: "60% odds that at some point in the next few decades, we'll see a century's worth of growth in a single decade." This is not a prediction of doom, but a call to action based on the probability of a world that moves faster than our institutions can adapt. The argument is that we are currently building a rocket ship without a flight plan, and the fuel is burning faster than we can measure.

If we're creating superintelligence, this will have a profoundly transformative impact on every aspect of our world, and it's best to prepare.

Bottom Line

Bentham's Bulldog's strongest move is shifting the debate from "will AI be safe?" to "are we ready for the speed of change?" by grounding the argument in robust, compounding data trends rather than speculative fear. The piece's greatest vulnerability is its reliance on the assumption that physical bottlenecks can be bypassed by software alone, a point that remains contested by material scientists. Readers should watch for how global policy bodies respond to these specific metrics of exponential growth, as the gap between technological capability and regulatory capacity is the single most dangerous variable in the coming decade.

Deep Dives

Explore these related deep dives:

  • Technological singularity

    The article's core thesis about an 'intelligence explosion' where AI capabilities grow exponentially is directly describing the technological singularity concept. Understanding the history of this idea from von Neumann through Vinge and Kurzweil provides essential context for the Forethought report's predictions.

  • I. J. Good

    I. J. Good, a British mathematician who worked with Alan Turing, originated the concept of an 'intelligence explosion' in 1965, the exact term used throughout this article. His paper 'Speculations Concerning the First Ultraintelligent Machine' is foundational to the ideas being discussed.

  • Moore's law

    The article's discussion of compute growing 4.5x per year and algorithmic efficiency improvements directly parallels the history of Moore's law. Understanding how semiconductor scaling worked (and its eventual slowdown) provides crucial context for evaluating claims about AI capability growth trajectories.

Sources

The next decades will plausibly be completely insane

by Bentham's Bulldog

1 Forethought.

You hear a lot of talk about AI alignment, the process of getting AI’s aims to line up with our own. This makes sense given that we’re currently in the process of building AIs that could be vastly smarter than people, and we don’t yet have a great plan for getting them to do what we want. Surprisingly, there isn’t as much talk about other kinds of AI preparedness. If we’re creating superintelligence, this will have a profoundly transformative impact on every aspect of our world, and it’s best to prepare.

Enter Forethought. They’re a research organization trying to figure out how to navigate a world of profoundly transformative AI. They have some extremely impressive people on the team, including Will MacAskill (one of the cofounders of effective altruism), Tom Davidson (formerly a senior research fellow at Open Philanthropy), and many others. Hopefully they will also hire me!

A while ago, Forethought published a report called Preparing for the Intelligence Explosion. I thought the report was thoughtful, cogent, and persuasive. Its core thesis is probably right, and it has huge implications if it is right. For this reason, it seemed like a good idea to summarize it. That’s what I’ll do here. I won’t say, every sentence, “the Forethought team says,” but just know, I’m primarily summarizing what they say, not giving my own thoughts.

Their thesis in a nutshell: AI is likely to radically transform the world, prompting staggeringly fast economic growth, many new technologies, and a number of unprecedented challenges. And the world isn’t ready.

2 The intelligence explosion.

TLDR: AI is advancing very rapidly and shows no sign of stopping. Among other things, the number of AI models we can run is increasing 25x per year, 500X as fast as growth in number of human researchers. If this continues, we will get extremely rapid growth in research abilities, which will prompt unprecedented innovation—potentially hundreds of years of growth in a few years even on conservative assumptions that predict pretty dramatic slowdowns in AI capabilities growth. Advancements in research capabilities will lead to both a technological and industrial explosion.

The fine print…

Human research capacity grows about 5% per year. In contrast, the number of AI researchers we can create has been growing 25x per year, over 500 times as quickly. Research is the primary bottleneck to long-term economic growth. These trends are on ...