Acceleration through AI-automated R&D: My chat (+transcript) with researcher Tom Davidson

James Pethokoukis challenges the prevailing obsession with hardware, drawing on researcher Tom Davidson's argument that the true engine of an economic explosion lies not in building more factories, but in software that can rewrite itself. While the world fixates on the cost of computer chips, Pethokoukis presents a scenario where the bottleneck vanishes entirely, potentially triggering a feedback loop of innovation that outpaces anything in human history.

The Software Explosion

The central thesis of this piece is a radical departure from the standard narrative of artificial intelligence development. Pethokoukis frames the conversation around Tom Davidson's research, which suggests that we do not need to manufacture new physical infrastructure to achieve superhuman research capabilities. "You don't have to build any more computer chips, you don't have to build any more fabs," Davidson asserts, a claim that immediately reframes the entire industry's capital allocation strategy. The argument posits that the limiting factor is not the silicon, but the efficiency of the code running on it.
This perspective is compelling because it bypasses the tangible constraints of supply chains and construction timelines. Pethokoukis highlights that current trends already show a massive reduction in the cost of running AI systems, noting that "just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system." If this efficiency gain is applied to the act of research itself, the result is a self-reinforcing cycle where AI improves its own algorithms, creating more AI researchers without any physical expansion.
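The shape of that cycle can be shown with a minimal toy model (my illustrative sketch, not Davidson's published formalism): hold hardware fixed, let algorithmic efficiency determine the size of the effective research workforce, and let that workforce drive further efficiency gains. When the feedback exponent exceeds one, so that each doubling of efficiency more than doubles the rate of progress, doubling times shrink rather than stay constant; that shrinking is the signature of an "explosion" as opposed to ordinary exponential growth.

```python
# Toy model of a software-only feedback loop (illustrative sketch, not
# Davidson's published model). Hardware is held fixed; algorithmic
# efficiency both measures output and sets the pace of further progress.

def efficiency_doublings(years=3.9, dt=0.001, feedback=1.5, base_rate=0.5):
    """Return the times (in years) at which algorithmic efficiency doubles.

    feedback > 1 is the 'explosion' condition: each doubling of efficiency
    more than doubles the rate of further improvement.
    """
    eff, t, next_double, times = 1.0, 0.0, 2.0, []
    while t < years and eff < 1e6:
        eff += base_rate * eff**feedback * dt  # AI researchers improve their own code
        t += dt
        if eff >= next_double:
            times.append(t)
            next_double *= 2
    return times

times = efficiency_doublings()
print([round(x, 2) for x in times])  # successive doublings arrive faster and faster
```

With a feedback exponent of 1.0 the same loop would produce plain exponential growth (constant doubling times); the superlinear exponent is what makes each doubling arrive sooner than the last.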

"If we can manufacture human minds... then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers."

The logic here mirrors the historical dynamics of the Industrial Revolution, yet it inverts the demographic requirement. Historically, economic growth was tied to population growth; more people meant more ideas. However, as Davidson points out, "the richer people get, the fewer kids they have," effectively breaking the traditional ideas feedback loop. Pethokoukis argues that automated research offers a way to restart this engine without needing a baby boom, substituting human labor with digital cognition.

Critics might note that this model assumes a linear scalability of intelligence that ignores the law of diminishing returns or the physical limits of energy consumption. While the software can be copied infinitely, the hardware it runs on still requires electricity and cooling, which are not free.

The Gap Between Benchmarks and Reality

A significant portion of the commentary focuses on the timeline for this transformation. Pethokoukis is careful to distinguish between a model that can solve a theoretical puzzle and one that can manage a complex research organization. He notes that while the gap between theory and reality is smaller for software than for physical industries like car manufacturing, it remains substantial. "There's still going to be a pretty big benchmark-to-reality gap, even for OpenAI," Davidson admits, citing the difficulty of creating code that is not just functional but maintainable and well-designed.

This nuance is crucial. It prevents the piece from sliding into pure science fiction by acknowledging that "coding is almost uniquely clean" yet still fraught with the messiness of human intent and architectural integrity. The comparison to the Riemann hypothesis serves as a useful boundary marker; the AI does not need to solve every mathematical problem in existence, but it does need to possess the "intermediate level of generality" to manage the full spectrum of research tasks, from literature review to project management.

Davidson offers a sobering timeline estimate, suggesting that "my current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years." This 50-50 probability is a bold claim that challenges the skepticism of mainstream economists, who view such rapid acceleration as impossible.

"Imagine that we were sitting there in the year 1400... and then there was some kind of futurist economist rogue that said, 'Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth.'"

The analogy to 1400 is effective because it illustrates how entrenched the status quo bias is. Just as pre-industrial economists could not conceive of sustained growth rates higher than 0.1 percent, modern economists struggle to model 20 or 30 percent annual growth. Pethokoukis uses this historical blind spot to validate the radical nature of Davidson's projections, suggesting that the "dominant view" is simply a failure of imagination rather than a reflection of physical laws.
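That blind spot is partly an artifact of compounding arithmetic. A quick doubling-time calculation (standard compound-growth math, not a figure from the piece; the bracketing rates are round illustrative values) shows how alien 30 percent growth would look next to both the pre-industrial and modern baselines:

```python
import math

def doubling_time(annual_growth):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# Pre-industrial (~0.1%), early industrial (~1%), modern (~3%), hypothesized (30%)
for rate in (0.001, 0.01, 0.03, 0.30):
    print(f"{rate:6.1%} growth -> output doubles every {doubling_time(rate):6.1f} years")
# 0.1% growth doubles output roughly every 693 years; 30% growth, every ~2.6 years.
```

An economy doubling every two to three years would be as incomprehensible to us as a 70-year doubling time would have been to an observer in 1400.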

The Security and Economic Implications

The stakes of this transition are framed not just as an economic opportunity, but as a profound national security challenge. If a single entity can deploy a billion superintelligent agents in a matter of months, the concentration of power becomes unprecedented. "It is a threat to national security for any government in which this happens," Davidson warns, highlighting the risk that these systems could slip beyond human control or be turned to malicious ends.

The potential for explosive growth is matched by the potential for catastrophic failure. Pethokoukis notes that while some economists dismiss these predictions as "sci-fi," a growing number of senior experts are taking the possibility of "30 percent growth every year" seriously. The speed of the transition is the critical variable; if the feedback loop accelerates too quickly, societal and regulatory structures may be unable to adapt.

"The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to."

This concentration of capability raises the question of governance. If the "software intelligence explosion" occurs, the entity that controls the initial algorithm effectively controls the future of innovation. The piece implies that the race is not just about who builds the best model, but who can safely manage the self-replicating nature of the research process.

Bottom Line

Pethokoukis's commentary succeeds in shifting the focus from the hardware bottleneck to the software potential, presenting a credible, albeit terrifying, path to explosive economic growth. The strongest element of the argument is the historical analogy that exposes the limitations of current economic modeling, while the biggest vulnerability remains the assumption that algorithmic efficiency can scale indefinitely without hitting physical or logical walls. Readers should watch for the next breakthrough in reinforcement learning, as that may be the specific catalyst that turns this theoretical feedback loop into a reality.

Sources

Acceleration through AI-automated R&D: My chat (+transcript) with researcher Tom Davidson

by James Pethokoukis · Faster, Please! · Read full article

My fellow pro-growth/progress/abundance Up Wingers in America and around the world:

What really gets AI optimists excited isn’t the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson’s new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.

Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.

Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government’s AI Security Institute.

In This Episode

Making human minds (1:43)

Theory to reality (6:45)

The world with automated research (10:59)

Considering constraints (16:30)

Worries and what-ifs (19:07)

Below is a lightly edited transcript of our conversation.

Making human minds (1:43)

... you don’t have to build any more computer chips, you don’t have to build any more fabs... In fact, you don’t have to do anything at all in the physical world.

Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI started generating ideas the way human researchers once did. In your view, population growth historically powered a kind of ideas feedback loop: more people meant more researchers, which meant more ideas and rising incomes. But that loop broke after the demographic transition in the late-19th century, and you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper, “How Quick and Big Would a Software Intelligence Explosion Be?,” in a way build upon that one?

The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output translated itself into more people, more workers. These days that doesn’t happen. When GDP goes up, that doesn’t mean people have more kids. In fact, after the demographic transition, the richer people get, the fewer kids they have. So now ...