Dario Amodei makes a provocative claim: we are nearing the end of the exponential in artificial intelligence development. Anthropic's CEO argues that the technology is advancing faster than most of the public realizes, and that this gap in awareness is itself the remarkable thing.
The End Is Nearer Than We Think
Three years ago, Amodei spoke about the scaling hypothesis. Now he's making an even bolder assertion: the end of exponential growth is closer than public awareness suggests.
"The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential," Amodei says. "It is absolutely wild that people are still talking about the same tired political issues when we're near the end of the exponential."
He describes the trajectory of the past three years: AI models have progressed from the level of a smart high school student to that of a smart college student, and on to early PhD-level and professional work, with coding ability advancing even faster. The progression has been roughly what he expected, give or take a year or two.
What Drives the Progress
Amodei still holds to the hypothesis he laid out in 2017: only a handful of factors, seven by his count, really matter for AI development.
The most important are raw compute, the quantity of data, and the quality and distribution of data. Training duration matters. The objective function, whether a pre-training loss or a reinforcement learning reward, must be able to scale "to the moon." Finally, normalization and conditioning ensure numerical stability, so that large amounts of compute translate smoothly into progress instead of running into problems.
Pre-training scaling has continued delivering gains. But something new has emerged: reinforcement learning is now showing the same scaling patterns that pre-training showed.
"We're seeing the same scaling in RL that we saw for pre-training," Amodei explains. Companies have published data showing training on math contests, where performance scales logarithmically with training time — and it's not just math contests but a wide variety of reinforcement learning tasks.
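The pattern described here, performance rising linearly in the logarithm of training effort, can be sketched with a small fit. The compute and score numbers below are invented for illustration, not data from any published run:

```python
import math

# Hypothetical (compute, score) pairs following the log-linear pattern
# described above. All numbers are illustrative, not real benchmark data.
compute = [1e18, 1e19, 1e20, 1e21, 1e22]   # training FLOPs (hypothetical)
score = [0.21, 0.34, 0.47, 0.60, 0.73]     # benchmark accuracy (hypothetical)

# Fit score = a * log10(compute) + b by simple least squares.
xs = [math.log10(c) for c in compute]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(score) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, score)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Extrapolate one order of magnitude further.
pred = a * math.log10(1e23) + b
print(f"slope per decade of compute: {a:.3f}")   # 0.130
print(f"predicted score at 1e23 FLOPs: {pred:.2f}")  # 0.86
```

The point of such fits is the straight line on a log axis: each additional decade of training effort buys a roughly constant gain, which is what "scaling" means in this context.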
The Puzzle With Human Learning
A genuine puzzle exists in how AI learning differs from human learning. Pre-training uses trillions of tokens while humans never see that many words. Yet once models have a long context length, they're very good at learning and adapting within that context.
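The size of that gap is worth making concrete. Both figures below are rough, commonly cited orders of magnitude, chosen here only for illustration:

```python
# Back-of-envelope comparison with rough, assumed numbers:
# - frontier pre-training runs are often described as using on the order
#   of ten trillion tokens (assumption for illustration);
# - lifetime human word exposure is commonly estimated in the hundreds
#   of millions of words (also a rough assumption).
PRETRAINING_TOKENS = 10e12    # ~10 trillion tokens (illustrative)
HUMAN_LIFETIME_WORDS = 5e8    # ~500 million words (illustrative)

ratio = PRETRAINING_TOKENS / HUMAN_LIFETIME_WORDS
print(f"models see roughly {ratio:,.0f}x more text than a human")  # ~20,000x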
Amodei suggests we should think of pre-training and reinforcement learning as something between human evolution and on-the-spot learning:
"The models start from scratch. They have to get much more training. But also our brain isn't just a blank slate — it starts with all these regions connected to inputs and outputs."
Language models, he argues, are far closer to blank slates than human brains are. They sit somewhere between human evolution and human learning within a lifetime.
This might explain why companies are building RL environments that teach models to use APIs, navigate web browsers, and operate tools like Slack: the goal is not to cover every specific skill but to achieve generalization across many tasks.
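A minimal sketch of what such a tool-use environment might look like, with an invented tool name and a toy reward scheme (real environments for browsers, Slack, or APIs are far richer):

```python
from dataclasses import dataclass, field

@dataclass
class ToolUseEnv:
    """Toy tool-use RL environment; names and rewards are illustrative."""
    goal: str
    transcript: list = field(default_factory=list)

    def step(self, tool: str, argument: str) -> float:
        """Record a tool call and return a scalar reward."""
        self.transcript.append((tool, argument))
        # Toy reward: 1.0 only if the agent calls the right tool on the goal.
        return 1.0 if (tool, argument) == ("search", self.goal) else 0.0

env = ToolUseEnv(goal="quarterly report")
reward = env.step("search", "quarterly report")
print(reward)  # 1.0
```

The design point is the interface, not the reward: an agent interacts through `step` calls and receives scalar feedback, which is what lets the same RL machinery scale across many different tasks.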
Why One Year, Not Ten?
Amodei acknowledges the timeline question is contentious. Some believe progress has been steady since 2012 and AGI will arrive around 2035 with a human-like agent. Others see different trajectories.
"When I first saw the scaling back in 2019, I thought this was much more likely than anyone thinks it is," he says. "This is wild. No one else would even consider this. Maybe there's a 50% chance this happens within ten years — but I'm at like 90% on getting to what I call kind of country of geniuses in a data center."
The irreducible uncertainty makes it hard to go much higher than 90 percent. The world is unpredictable.
Critics might note that precise timeline predictions for AI have been notoriously unreliable: experts have erred in both directions, overestimating and underestimating speed and capability. The field's history shows that extrapolations from current trends often miss fundamental barriers that only become visible later.
Bottom Line
Amodei's strongest point is that public awareness hasn't kept pace with AI's advance, a gap he calls "absolutely wild." His weakest is the timeline: whether the threshold is one year away or ten is inherently uncertain. Readers should watch whether reinforcement learning keeps scaling the way pre-training did, and whether the seven factors Amodei identified remain the primary drivers of progress.