Dario Amodei — “We are near the end of the exponential”

Dario Amodei makes a provocative claim: we are nearing the end of the exponential in artificial intelligence development. The Anthropic CEO argues that the technology is advancing faster than most people realize, and that public awareness has not caught up.

The End Is Nearer Than We Think

Three years ago, Amodei spoke about the scaling hypothesis. Now he's making an even bolder assertion: the end of exponential growth is closer than public awareness suggests.

"The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential," Amodei says. "It is absolutely wild that people are still talking about the same tired political issues when we're near the end of the exponential."

He describes the progression of the past three years: AI models have advanced from smart high school students to smart college students, then to beginning PhD-level and professional work; in coding, they have gone beyond that. The trajectory has been roughly what he expected, give or take a year or two.

What Drives the Progress

Amodei still holds to the same hypothesis he laid out in 2017: only seven factors really matter for AI development.

The most important are raw compute, the quantity of data, and the quality and distribution of data. Training duration matters. The objective function, whether pre-training or reinforcement learning, must be able to keep scaling. Finally, normalization and conditioning ensure numerical stability, so that large amounts of compute translate smoothly into progress instead of running into problems.

Pre-training scaling has continued delivering gains. But something new has emerged: reinforcement learning is now showing the same scaling patterns that pre-training showed.

"We're seeing the same scaling in RL that we saw for pre-training," Amodei explains. Companies have published data from training on math contests, where performance scales logarithmically with training time, and the pattern holds not just for math contests but for a wide variety of reinforcement learning tasks.
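The log-linear pattern described above can be sketched numerically. The data points below are invented for illustration (the interview cites no specific numbers); the sketch just shows what "performance scales logarithmically with training" means: a straight-line fit of score against log-compute.

```python
import numpy as np

# Hypothetical, illustrative data: benchmark score vs. training compute.
# Neither the values nor the scale come from the interview.
compute = np.array([1e19, 1e20, 1e21, 1e22, 1e23])  # FLOPs (made up)
score = np.array([22.0, 34.0, 47.0, 58.5, 71.0])    # contest accuracy % (made up)

# If performance is log-linear in compute, score ≈ a + b * log10(compute),
# so a least-squares fit in log-space should be nearly exact.
b, a = np.polyfit(np.log10(compute), score, deg=1)

predicted = a + b * np.log10(compute)
residual = np.max(np.abs(predicted - score))
print(f"score gained per decade of compute: {b:.2f}")
print(f"max residual of log-linear fit: {residual:.2f}")
```

The key property is that each tenfold increase in compute buys a roughly constant increment in score, which is why such plots look like straight lines on a log-x axis.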

The Puzzle With Human Learning

A genuine puzzle exists in how AI learning differs from human learning. Pre-training uses trillions of tokens while humans never see that many words. Yet once models have a long context length, they're very good at learning and adapting within that context.

Amodei suggests we should think of pre-training and reinforcement learning as something between human evolution and on-the-spot learning:

"The models start from scratch. They have to get much more training. But also our brain isn't just a blank slate — it starts with all these regions connected to inputs and outputs."

Language models, he argues, are much more blank slates than human brains. They're somewhere between human evolution and human learning within a lifetime.

This might explain why companies are building RL environments that teach models how to use APIs, navigate web browsers, and use tools like Slack: the goal is not to cover every specific skill but to achieve generalization across many tasks.
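To make the idea concrete, here is a minimal sketch of what one such tool-use RL environment might look like. The `ToolUseEnv` class, the tool names, and the reward scheme are all assumptions for illustration; the interview does not describe any particular implementation.

```python
# Minimal sketch of a tool-use RL environment (gym-style reset/step loop).
# Everything here -- class name, tool names, reward scheme -- is hypothetical.

class ToolUseEnv:
    """One-step episode: the agent must invoke the right tool for a task."""

    def __init__(self, task: str, expected_tool: str):
        self.task = task
        self.expected_tool = expected_tool
        self.done = False

    def reset(self) -> str:
        self.done = False
        return self.task  # observation: the task description

    def step(self, action: str) -> tuple[str, float, bool]:
        # action is a tool-invocation string, e.g. "browser.open" or "slack.post"
        self.done = True
        reward = 1.0 if action == self.expected_tool else 0.0
        return "episode over", reward, self.done


# Training across many such environments (APIs, browsers, Slack) targets
# generalization across tasks, not coverage of each specific skill.
env = ToolUseEnv(task="Post the weekly summary to #general",
                 expected_tool="slack.post")
obs = env.reset()
_, reward, done = env.step("slack.post")
print(reward, done)  # 1.0 True
```

Real environments would have multi-step episodes and richer observations, but the interface (observation in, action out, scalar reward) is the part that matters for scaling RL the way pre-training was scaled.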

Why One Year, Not Ten?

Amodei acknowledges the timeline question is contentious. Some believe progress has been steady since 2012 and AGI will arrive around 2035 with a human-like agent. Others see different trajectories.

"When I first saw the scaling back in 2019, I thought this was much more likely than anyone thinks it is," he says. "This is wild. No one else would even consider this. Maybe there's a 50% chance this happens within ten years — but I'm at like 90% on getting to what I call kind of country of geniuses in a data center."

The irreducible uncertainty makes it hard to go much higher than 90 percent. The world is unpredictable.


Critics might note that predicting precise timelines for AI development has been notoriously unreliable; experts have erred in both directions, overestimating and underestimating speed and capability. The history of the field shows that extrapolations from current trends often fail to account for fundamental barriers that only become visible later.

Bottom Line

Amodei's strongest insight is that public awareness hasn't matched the pace of AI advancement, a gap he finds "absolutely wild." The weakest point is the timeline: whether the endpoint is one year away or ten is inherently uncertain. Readers should watch whether reinforcement learning keeps scaling the way pre-training did, and whether the seven factors Amodei identified remain the primary drivers of progress.

Transcript Excerpt

So we talked three years ago. I'm curious, in your view, what has been the biggest update of the last three years? What has been the biggest difference between how it felt three years ago versus now? >> Yeah, I would say the underlying technology, the exponential of the technology, has gone broadly speaking about as I expected it to go.

There's plus or minus a year or two here, plus or minus a year or two there. I don't know that I would have predicted the specific direction of code. But when I look at the exponential, it is roughly what I expected in terms of the march of the models from smart high school student to smart college student, to beginning to do PhD and professional stuff, and in the case of code reaching beyond that.

So the frontier is a little bit uneven, but it's roughly what I expected. I will tell you, though, what the most surprising thing has been. The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential.

To me, it is absolutely wild that you have people, within the bubble and outside the bubble, talking about the same tired old hot-button political issues, while all around us we're near the end of the exponential. >> I want to understand what that exponential looks like right now, because the first question I asked you when we recorded three years ago was: what's up with scaling? Why does it work?

I have a similar question now, but I feel like it's a more complicated question, because at least from the public's point of view. >> Yes. >> Three years ago there were these well-known public trends where, across many orders of magnitude of compute, you could see how the loss improves. Now we have RL scaling, and there's no publicly known scaling law for it. It's not even clear what exactly the story is: is this supposed to be teaching the model skills, is ...