
What are we scaling?

The conversation about AI timelines has become strangely disconnected from what labs are actually building.

Dwarkesh Patel, the podcaster and writer, sees a fundamental tension that doesn't get enough attention: some people believe we're approaching human-level learning within years, yet they're simultaneously bullish on scaling up reinforcement learning. If we truly are close to creating a learner that resembles humans, one capable of learning from experience and generalizing across contexts, then this entire approach of training models on verifiable outcomes looks increasingly fragile.


The labs aren't just waiting for this future. They're actively preparing for it by baking skills into current models through what's been called "mid-training." There's an entire supply chain of companies building RL environments that teach models how to navigate web browsers, use Excel for financial modeling, and perform other tasks. Either these models will soon learn on the job in a self-directed way, which would make all this pre-baking pointless, or they won't, which means AGI isn't imminent.

The robotics example illustrates this tension perfectly. In some fundamental sense, robotics is an algorithms problem, not a hardware or data problem. With very little training, a human can learn to teleoperate current hardware to do useful work. So if you actually had a human-like learner, robotics would be largely a solved problem. But because we don't have such a learner, it's necessary to go into thousands of different homes and practice picking up dishes or folding laundry millions of times.

If models could learn like humans, they'd diffuse incredibly quickly—easier to integrate than normal employees, able to read your entire Slack and immediately distill skills from other AI employees.

The counterargument runs that we need all this kludgy RL in service of building a superhuman AI researcher. The idea is that millions of automated researchers will figure out how to solve robust and efficient learning from experience. This has the flavor of an old joke: we're losing money on every sale, but we'll make it up in volume. Somehow, this automated researcher would solve the AGI problem, a challenge humans have been working on for half a century, while lacking the basic learning capabilities that children possess.

Even if you believe this, it doesn't describe how labs are actually approaching reinforcement learning from verifiable reward. You don't need to pre-bake consulting skills like crafting PowerPoint slides in order to automate research. So the labs' actions hint at a worldview in which these models will continue to fare poorly at generalization and on-the-job learning, making it necessary to build in economically useful skills beforehand.

Another counterargument is efficiency: even if models could learn these skills on the job, it's much more efficient to build them in once during training rather than repeatedly for each user and company. And indeed, there's a clear advantage to baking in common tools like browsers and terminals. But people are really underestimating how many company- and context-specific skills most jobs require, and there isn't currently a robust way for AI systems to pick those skills up.

At a recent dinner with an AI researcher and a biologist, the conversation revealed something interesting. The biologist had long timelines, which prompted questions about why she held them. One part of her work involves looking at slides and deciding whether a given dot on the slide is actually a macrophage or just looks like one. The AI researcher responded that image classification is a textbook deep learning problem, "dead center" for what we could train these models to do.

This exchange illustrated a key crux between those expecting transformative economic impact within five years and those with longer timelines. Human workers are valuable precisely because we don't need to build in specialized training loops for every single small part of their job. It's not net productive to build a custom training pipeline to identify what macrophages look like given the specific way that one lab prepares slides, then another training loop for the next lab-specific microtask, and so on.

What you actually need is an AI that can learn from semantic feedback or self-directed experience and generalize the way a human does. Every day involves a hundred things requiring judgment, situational awareness, and skills learned on the job—tasks that differ not just across different people but even from one day to the next for the same person.

It's not possible to automate even a single job by just baking in a predefined set of skills, let alone all the jobs.

People are really underestimating how big a deal actual AGI will be because they're just imagining more of this current regime. They're not thinking about billions of human-like intelligences on a server that can copy and merge all their learnings. The author expects actual brain-like intelligences within the next decade or two, which is pretty crazy.

Sometimes people say that the reason AIs aren't already more widely deployed across firms, providing value outside of coding, is that technology takes a long time to diffuse. The author thinks this is cope. People are using this line to gloss over the fact that these models simply lack the capabilities necessary for broad economic value. If these models actually were like humans on a server, they'd diffuse incredibly quickly, since they'd be much easier to integrate and onboard than normal human employees. They could read your entire Slack in minutes. They'd immediately distill all the skills that other AI employees have.

The hiring market for humans is very much a lemons market: it's hard to tell beforehand who the good people are, and hiring somebody who turns out to be bad is very costly. This isn't a dynamic you'd have to worry about if you were just spinning up another instance of a vetted AI model.

For these reasons, the author expects it'll be much easier to diffuse AI labor into firms than to hire a person, and companies hire people all the time. If capabilities were actually at AGI level, people would be willing to spend trillions of dollars a year buying the tokens these models produce. Knowledge workers across the world cumulatively earn tens of trillions of dollars a year in wages. The reason labs are orders of magnitude off this figure right now is that the models are nowhere near as capable as human knowledge workers.

Now you might say: how can the standard be that labs have to earn tens of trillions of dollars of revenue a year? Until recently, people were saying these models can't reason, don't have common sense, and are just doing pattern recognition. AI bulls criticize AI bears for repeatedly moving these goalposts, and that criticism is often fair: it's easy to underestimate the progress AI has made over the last decade.

But some amount of goalpost shifting is actually justified. If you showed someone Gemini 3 in 2020, they would have been certain it could automate half of knowledge work. We keep solving what we thought were the remaining bottlenecks to AGI. We have models with general understanding, few-shot learning, and reasoning. And yet we still don't have AGI.

The rational response is to look at this and say: actually, there's much more to intelligence and labor than previously realized. While we're really close, and in many ways have surpassed what would previously have been defined as AGI, the fact that model companies aren't making the trillions of dollars in revenue AGI would imply reveals that previous definitions were too narrow.

The author expects this to keep happening into the future. By 2030, labs will have made significant progress on continual learning and models will be earning hundreds of billions of dollars in revenue a year—but they won't have automated all knowledge work. We'll say: we made a lot of progress, but we haven't hit AGI yet. We also need other capabilities—X, Y, and Z.

Models keep getting more impressive at the rate that short-timeline people predict, but more useful at the rate that long-timeline people predict. It's worth asking what we're scaling with pre-training.

We had an extremely clean and general trend of improving loss across multiple orders of magnitude of compute: a power law, which is as weak as exponential growth is strong, since each further increment of improvement requires a multiplicative increase in compute. But people are trying to launder the prestige that pre-training scaling has, being almost as predictable as a physical law of the universe, to justify bullish predictions about reinforcement learning from verifiable reward, for which we have no publicly known trend.
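As a rough illustration of that shape (the exponent value here is an assumption chosen for the example, not a figure from the post), pre-training scaling results are usually summarized as a power law of compute:

L(C) \approx L_{\infty} + a \, C^{-\alpha}, \qquad \alpha \approx 0.05

With an exponent that small, halving the reducible loss term requires roughly a 2^{1/\alpha} \approx 10^{6} times increase in compute, which is exactly the "weak" behavior being contrasted with exponential growth above.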

When intrepid researchers try to piece together implications from scarce public data points, they get pretty bearish results. For example, Toby Ord has a great post where he cleverly connects the dots between different o-series benchmarks, and this suggested to him that we need something like a million-times scale-up in total RL compute to give a boost similar to a single GPT-level improvement.

People have spent a lot of time talking about the possibility of a software singularity, where AI models write code that generates a smarter successor system, or a hardware-software singularity, where AIs also improve their successors' computing hardware. But all these scenarios neglect what the author thinks will be the main driver of further improvement on top of AGI: continual learning.

Again, think about how humans become more capable at anything: it's mostly from experience in the relevant domain. Over a conversation, Baron Miller made the interesting suggestion that the future might look like continual-learning agents going out and doing different jobs, generating value, then bringing all their learnings back to a hive-mind model that does some kind of distillation over all these agents.
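A minimal Python sketch of what that deploy-work-distill loop could look like, with entirely hypothetical names and placeholder steps (none of this comes from the post itself):

from dataclasses import dataclass, field

@dataclass
class Agent:
    core_version: int  # which snapshot of the shared "cognitive core" it started from
    job: str           # the niche it is deployed into
    experience: list = field(default_factory=list)

    def work(self) -> None:
        # Placeholder for actually doing the job and accumulating on-the-job learnings.
        self.experience.append({"job": self.job, "feedback": "ok"})

class HiveMind:
    def __init__(self) -> None:
        self.version = 0
        self.pooled = []  # everything learned by deployed agents so far

    def spawn(self, job: str) -> Agent:
        # New agents start from the latest core, so past learnings propagate to everyone.
        return Agent(core_version=self.version, job=job)

    def distill(self, agents: list) -> None:
        # Placeholder for the hard part: merging agents' learnings back into the core model.
        for a in agents:
            self.pooled.extend(a.experience)
            a.experience.clear()
        self.version += 1

# One cycle: deploy specialized agents, let them work, merge their learnings.
hive = HiveMind()
workers = [hive.spawn(j) for j in ["pathology lab", "accounting firm"]]
for w in workers:
    w.work()
hive.distill(workers)

The sketch only shows the structure of the loop; the distill step is exactly the unsolved problem the author is pointing at.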

The agents themselves could be quite specialized—containing what Karpathy called the cognitive core plus knowledge and skills relevant to the job they're being deployed to do. Solving continual learning won't be a singular one-and-done achievement. Instead, it will feel like solving in-context learning.

GPT-3 had already demonstrated in 2020 that in-context learning could be very powerful. Its in-context learning capabilities were so remarkable that the title of the GPT-3 paper was "Language Models are Few-Shot Learners." But of course, we didn't solve in-context learning when GPT-3 came out. And indeed, there's still plenty of progress being made on it, from comprehension to context length.

The author expects a similar progression with continual learning. Labs will probably release something next year which they call continual learning and which will in fact count as progress towards continual learning. But human-level on-the-job learning may take another five to ten years to iron out.

This is why the author doesn't expect runaway gains from the first model that cracks continual learning, with it becoming ever more widely deployed and capable. If fully solved continual learning dropped out of nowhere, then sure, it might be game, set, match, as Satya Nadella put it on a podcast when asked about this possibility.

But that's probably not what's going to happen. Instead, some lab is going to get initial traction on this problem, and once people can play around with the feature, it will become clear roughly how it was implemented; other labs will soon replicate the breakthrough and improve on it slightly.

The author just has some prior that competition between all these model companies will stay fierce. This is informed by the observation that previous supposed flywheels, whether user engagement on chat or synthetic data or whatever else, have done very little to blunt the ever-increasing competition between model companies.

Every month or so, the big three model companies will rotate around the podium, and the other competitors are not that far behind. There seems to be some force—and this is potentially talent poaching, potentially the rumor mill of Silicon Valley or just normal reverse engineering—which has so far neutralized any runaway advantage that a single lab might have had.

Bottom Line

The strongest part of this argument is the clear-eyed analysis of what actually differentiates human workers from current AI systems: the ability to learn on the job and generalize across contexts. The biggest vulnerability is the author's own timeline—expecting human-level brain-like intelligences within a decade or two—which even he admits is "pretty crazy." What readers should watch for is whether labs make meaningful progress on continual learning in the next few years, and whether that leads to the kind of diffusion across firms that would actually transform economic value.


Sources

What are we scaling?

by Dwarkesh Patel

I'm confused why some people have super short timelines yet at the same time are bullish on scaling up reinforcement learning atop LLMs. If we're actually close to a humanlike learner, then this whole approach of training on verifiable outcomes is doomed. Now, currently the labs are trying to bake in a bunch of skills into these models through mid-training. There's an entire supply chain of companies that are building RL environments which teach the model how to navigate a web browser or use Excel to build financial models.

Now either these models will soon learn on the job in a self-directed way, which will make all this pre-baking pointless, or they won't, which means that AGI is not imminent. Humans don't have to go through a special training phase where they need to rehearse every single piece of software that they might ever need to use on the job. Baron Millig made an interesting point about this in a recent blog post he wrote. He writes, quote, "When we see frontier models improving at various benchmarks, we should think not just about the increased scale and the clever ML research ideas, but the billions of dollars that are paid to PhDs, MDs, and other experts to write questions and provide example answers and reasoning targeting these precise capabilities."

You can see this tension most vividly in robotics. In some fundamental sense, robotics is an algorithms problem, not a hardware or a data problem. With very little training, a human can learn how to teleoperate current hardware to do useful work. So if you actually had a humanlike learner, robotics would be in large part a solved problem.

But the fact that we don't have such a learner makes it necessary to go out into a thousand different homes and practice a million times on how to pick up dishes or fold laundry. Now, one counterargument I've heard from the people who think we're going to have a takeoff within the next 5 years is that we have to do all this kludgy RL in service of building a superhuman AI researcher. And then the million copies of this automated Ilya can go figure out how to solve robust and efficient learning from experience. This just gives me the vibes of that old joke, we're losing money on every sale, but we'll make it up in ...