Most forecasts about artificial intelligence treat the future as a slow, linear slide. Bentham's Bulldog argues we are standing at the edge of a vertical cliff, predicting an intelligence explosion in which economic growth could exceed 100% annually. This piece is notable not for its optimism, but for its cold, mathematical refusal to accept that current trends will simply fade away. For the busy executive trying to allocate capital or policy resources, the central question isn't whether AI will change the world, but whether the change will happen so fast that our institutions cannot react.
The Math of the Explosion
The author builds their case on compounding arithmetic that is difficult to dismiss. Bentham's Bulldog writes, "Training compute has been growing about 5x per year... Algorithmic efficiency in training has been growing 3x per year... Combined, these lead to an effective 15x increase in training compute." When post-training enhancements and falling inference costs are layered on, the author calculates a staggering 45x annual increase in effective capability. This isn't incremental improvement; it is a trajectory that dwarfs the Industrial Revolution.
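The arithmetic above can be reproduced in a few lines. One caveat: the source states the 15x and 45x figures but does not break out the post-training factor explicitly, so the 3x below is inferred from the ratio 45/15.

```python
# Back-of-envelope reproduction of the quoted compounding arithmetic.
# The post-training factor of 3 is inferred from the stated 45x total
# (45 / 15); it is not given explicitly in the source.

training_compute_growth = 5   # x per year, per the quoted figures
algorithmic_efficiency = 3    # x per year

effective_training = training_compute_growth * algorithmic_efficiency
print(effective_training)     # 15x effective training compute per year

post_training_factor = 3      # implied by the stated 45x total
effective_capability = effective_training * post_training_factor
print(effective_capability)   # 45x effective capability per year

# Compounded over just three years at this rate:
print(effective_capability ** 3)  # 91125x
```

Compounding is what makes the claim bite: at 45x per year, three years yields a factor of more than 90,000, which is why the author treats even large estimation errors as irrelevant to the conclusion.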
The core of the argument rests on the divergence between human and machine research effort. As Bentham's Bulldog puts it, "Total AI cognitive labour is growing more than 500x faster than total human cognitive labour." This statistic is the engine of the entire piece. It suggests that once AI reaches a certain threshold, the feedback loop of machines improving machines will render human research contributions negligible. The author cites the PREPIE report to reinforce this, noting that "even if current rates of AI progress slow by a factor of 100x compared to current trends, total cognitive research labour... will still grow far more rapidly than before." This framing is effective because it removes the need for perfect prediction; even a massive slowdown from the current trajectory still leads to the same explosive outcome.
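The slowdown argument is itself simple division. The sketch below is my own illustration of the logic using the two figures quoted above, not a calculation from the report:

```python
# Illustration of the "even with a 100x slowdown" claim, using the two
# figures quoted in the commentary. The framing of the division is mine.

ai_vs_human_growth_ratio = 500  # AI cognitive labour grows 500x faster
slowdown_factor = 100           # hypothetical slowdown vs current trends

remaining_ratio = ai_vs_human_growth_ratio / slowdown_factor
print(remaining_ratio)  # 5.0 -- still a 5x faster growth rate than human
```

The point of the exercise is robustness: the conclusion survives a two-order-of-magnitude error in the trend estimate, which is why the author treats it as insensitive to forecasting precision.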
Critics might argue that physical constraints—like the availability of chips or energy—will inevitably cap this growth. However, the author anticipates this by pointing out that automating the design of those very chips could accelerate progress further, creating a self-correcting loop that bypasses traditional bottlenecks.
"Once AI reaches roughly a human level, the enormous number of AI models we can run will be able to automate away a huge amount of research."
The Fallacy of Human Oversight
A common objection to the intelligence explosion is that AI, no matter how smart, will always require human direction for complex, long-term planning. Bentham's Bulldog dismantles this by reframing the timeline. The author argues that the bottleneck is not the AI's ability to think, but its ability to execute long-horizon tasks. "My claim is that an intelligence explosion is fairly plausible, not guaranteed," they concede, but immediately pivot to the speed of recent breakthroughs in agency.
The piece highlights that models are already moving beyond simple chatbots. Bentham's Bulldog notes that OpenAI's Deep Research can "synthesize (and reason about) existing literature and produce detailed reports in between five and thirty minutes," scoring significantly higher than previous iterations on complex exams. The author suggests that once these agents can handle tasks taking weeks or months, the need for human oversight evaporates. "Once the AI can do very long tasks, it will be able to—almost at will—produce extended research reports about promising research directions." This is a crucial pivot: the author posits that the ability to complete long tasks solves the research taste problem by default, allowing the AI to iterate on its own capabilities without waiting for human approval.
This argument gains weight when viewed through the lens of historical hyperbolic growth. Just as Sara Hooker has detailed regarding the non-linear scaling of neural networks, the author implies that we are entering a phase where the rate of improvement is no longer bound by human cognitive limits. The emergence of agents capable of autonomous operation, like Anthropic's computer use feature, signals that the transition from tool to worker is already underway.
Why the Market is Wrong
Perhaps the most provocative section of the commentary is the rejection of the efficient market hypothesis as a predictor of this event. Bentham's Bulldog challenges the idea that stock prices reflect the imminent arrival of transformative AI. "If markets are inefficient, you can bet on that and make money," they write, but then flip the script: "You succeed as a trader by accurately pricing normal companies in the market, but those skills don't necessarily carry over to accurately predicting an intelligence explosion."
The author draws a sharp analogy to religious belief to illustrate the flaw in relying on market consensus for one-off, existential events. They argue that markets suffer from a "normalcy bias," underestimating exponential trends that have already upended industries like solar energy. "People are wrong sometimes, even when it pays to be right," Bentham's Bulldog asserts, suggesting that the financial sector is ill-equipped to price in a scenario where GDP doubles every few years. This is a compelling critique of the financial establishment's short-termism.
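To make concrete what "GDP doubles every few years" requires, growth rates can be converted into doubling times. This is a back-of-envelope calculation of my own, not one from the source:

```python
import math

def doubling_time(annual_growth):
    """Years for an economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# Historical-norm vs explosion-scenario growth rates:
for g in (0.03, 0.10, 0.30, 1.00):
    print(f"{g:>5.0%} growth -> doubles in {doubling_time(g):.1f} years")
```

At the 3% growth markets are accustomed to pricing, the economy doubles roughly every 23 years; at the 100%+ rates the author entertains, it doubles annually. The gap between those two regimes is the "normalcy bias" the piece accuses the market of.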
However, a counterargument worth considering is that the market may be pricing in a different kind of explosion—one of disruption and failure rather than smooth, hyper-growth. If the transition is chaotic, the market's skepticism might be a rational hedge against volatility rather than a denial of the technology's potential.
Bottom Line
Bentham's Bulldog presents a rigorous, data-driven case that the window for preparing for an intelligence explosion is closing faster than institutional memory suggests. The strongest part of the argument is the sheer size of the 500x growth differential in cognitive labor, which makes the "slow takeoff" scenario increasingly difficult to defend. The biggest vulnerability lies in the assumption that hardware and energy constraints can be solved as quickly as software capabilities are advancing. Readers should watch for the next generation of autonomous agents; if they successfully complete complex, multi-week research projects without human intervention, the exponential curve will likely become a vertical line.
"The trends going into it have been persistent for a while and show no sign of stopping. But if they don't stop, then an intelligence explosion is imminent."