Most commentary on artificial intelligence fixates on whether machines will become sentient or how quickly they will take our jobs. Bentham's Bulldog, however, argues we are missing the forest for the trees: the real danger isn't just that AI will be smart, but that the world will be utterly unprepared for the sheer velocity of the change. The piece posits a scenario where we compress a century of technological evolution into a single decade, creating a reality that is "completely insane" by any historical standard.
The Mathematics of the Explosion
Bentham's Bulldog anchors the argument in a startling statistical reality: while human research capacity grows at a modest 5% annually, the number of AI models we can deploy is multiplying 25-fold each year. "The number of AI models we can run is increasing 25x per year, 500X as fast as growth in number of human researchers," the author writes. This isn't just a faster car; it's a different mode of transportation entirely. The logic here is compelling because it rests on multiple, independent growth vectors—expanding compute, improving algorithmic efficiency, and falling inference costs—all compounding in the same favorable direction. Even if one trend stalls, the others carry the momentum.
The author draws a parallel to the concept of the "intelligence explosion" first formalized by I. J. Good in 1965, suggesting that once AI reaches human parity, it won't stop. Instead, it will enter a feedback loop where smarter AI designs even smarter AI. "Even if current rates of AI progress slow by a factor of 100x compared to current trends, total cognitive research labour will still grow far more rapidly than before," Bentham's Bulldog asserts. This framing is crucial because it inoculates the argument against skeptics who point to a single bottleneck, like hardware limits. The author notes that even with conservative estimates, we are looking at "10 million times" growth in research capabilities over a decade.
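These rates compound dramatically over a decade. The short sketch below is illustrative arithmetic based on the figures quoted above (5% annual growth in human researchers, 25x annual growth in runnable AI models); the article does not spell out its own derivation, so the decade-long extrapolation here is ours, not the author's:

```python
# Illustrative compounding of the growth rates quoted in the piece.
# The 5%/year and 25x/year figures come from the article; the
# ten-year extrapolation is our own back-of-the-envelope math.

def compound(annual_multiplier: float, years: int) -> float:
    """Total growth factor after compounding a per-year multiplier."""
    return annual_multiplier ** years

print(f"Humans at 1.05x/yr over 10 yrs:  {compound(1.05, 10):.2f}x")   # ~1.63x
print(f"AI models at 25x/yr over 10 yrs: {compound(25.0, 10):.2e}x")   # ~9.54e13x
# Even a far slower multiplier still compounds explosively:
print(f"At only 5x/yr over 10 yrs:       {compound(5.0, 10):,.0f}x")   # 9,765,625x
```

Notably, even cutting the 25x rate to 5x per year still yields roughly a ten-million-fold increase over a decade, which lands at the same scale as the "10 million times" figure the author cites.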
Critics might argue that physical constraints—energy, raw materials, and the need for physical labor to build robots—will inevitably cap this growth. The piece acknowledges this, noting that "total economic growth is significantly bottlenecked by physical labor." However, the counter-argument presented is that cognitive AI can simulate physical processes so effectively that the need for physical iteration diminishes, or that AI can optimize the limited physical labor we have to an unprecedented degree.
Even if AI progress slowed down considerably, we'd be on track for explosive growth of a kind never before seen in history.
The Compression of History
The most chilling aspect of the piece is the timeline. Bentham's Bulldog suggests that the last century's technological leaps—nuclear weapons, bioweapons, the internet—could recur within a single decade: "If the last century's technological progress had been compressed into a decade, we'd have gotten nuclear weapons, drones, biological weapons, cyber warfare, and much more in less than a decade." This reframes the risk from a distant-future problem to an immediate governance crisis. The stakes are no longer just economic; they are existential.
The author highlights the specific danger of "atomically precise manufacturing," which could allow for the cheap, secret mass-production of dangerous technologies. "One deadly, autonomous, insect-sized drone for each person on Earth could fit inside a single large aircraft hangar," the text notes. This imagery is not hyperbole but a logical extrapolation of current trends. The argument implies that our current international treaties and regulatory bodies, designed for a world of incremental change, will be completely obsolete in the face of such speed.
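A quick Fermi estimate makes the hangar image concrete. Both inputs below are our own illustrative assumptions, not figures from the article: roughly 8 billion people, and about one cubic centimetre per insect-sized drone:

```python
# Fermi estimate for the "one drone per person fits in a hangar" image.
# Both inputs are assumptions chosen for illustration, not sourced figures.
WORLD_POPULATION = 8e9      # assumed: ~8 billion people
DRONE_VOLUME_CM3 = 1.0      # assumed: ~1 cm^3 per insect-sized drone

total_m3 = WORLD_POPULATION * DRONE_VOLUME_CM3 / 1e6  # convert cm^3 to m^3
print(f"Total packed volume: {total_m3:,.0f} m^3")    # 8,000 m^3
```

Eight thousand cubic metres is a cube about 20 metres on a side, while the largest aircraft hangars run to hundreds of thousands of cubic metres, so the image holds up even allowing for generous packing inefficiency.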
This section leans heavily on the work of the research organization Forethought, specifically their report "Preparing for the Intelligence Explosion." Bentham's Bulldog praises their focus on non-alignment preparedness, arguing that while much attention is paid to getting AI to want what we want, we are neglecting the practicalities of surviving a world where AI can invent new threats faster than we can legislate against them. "There isn't as much talk about other kinds of AI preparedness," the author laments, calling for a massive influx of talent into this specific niche.
Critics might note that this scenario assumes a continuous, uninterrupted trajectory of progress, ignoring the possibility of sudden societal collapse or regulatory crackdowns that could halt development entirely. Yet, the author's point is that betting on a sudden stop is a dangerous gamble when the cost of being wrong is human extinction.
The Imperative for Preparedness
The commentary concludes by shifting from prediction to prescription. The core thesis is that the window for effective intervention is closing rapidly. "Planning for such an insane world seems important," Bentham's Bulldog writes, urging a reallocation of resources toward organizations that are thinking through these non-alignment challenges. The author suggests that the most effective immediate action is to fund research into interpretability, defensive technologies, and international treaties before the explosion occurs.
The piece ends with a sobering assessment of the odds: "60% odds that at some point in the next few decades, we'll see a century's worth of growth in a single decade." This is not a prediction of doom, but a call to action based on the probability of a world that moves faster than our institutions can adapt. The argument is that we are currently building a rocket ship without a flight plan, and the fuel is burning faster than we can measure.
If we're creating superintelligence, this will have a profoundly transformative impact on every aspect of our world, and it's best to prepare.
Bottom Line
Bentham's Bulldog's strongest move is shifting the debate from "will AI be safe?" to "are we ready for the speed of change?" by grounding the argument in robust, compounding data trends rather than speculative fear. The piece's greatest vulnerability is its reliance on the assumption that physical bottlenecks can be bypassed by software alone, a point that remains contested by material scientists. Readers should watch for how global policy bodies respond to these specific metrics of exponential growth, as the gap between technological capability and regulatory capacity is the single most dangerous variable in the coming decade.