
Why I don’t think AGI is right around the corner

I've had a lot of discussions on my podcast where we haggle out our timelines to AGI. Some guests think it's 20 years away; others, 2 years. Here's where my thoughts lie. As of July 2025, people sometimes say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the internet.

I disagree. I think the LLMs of today are magical, but the reason Fortune 500 companies aren't using them to totally transform their workflows isn't that the management there is too stodgy. Rather, I think it's genuinely hard to get normal, humanlike labor out of these LLMs, and this has to do with some fundamental capabilities that these models lack.

Now, I like to think that I'm AI-forward here at the Dwarkesh Podcast. I've probably spent on the order of 100 hours trying to build little LLM tools for my post-production setup, and the experience of trying to get them to be useful has extended my timelines. I'll try to get an LLM to rewrite autogenerated transcripts to optimize for readability, the way a human editor would, or I'll try to get them to identify clips from a transcript that I feed in.

Sometimes I'll try to get them to co-write an essay with me, passage by passage. Now, these are simple, self-contained, short-horizon, language-in/language-out tasks, the kinds of assignments that should be dead center in an LLM's repertoire. And the models are five out of ten at them.

Now, don't get me wrong, that is impressive. But the fundamental problem is that LLMs don't get better over time the way a human would. This lack of continual learning is a huge, huge bottleneck. An LLM's baseline at many tasks might be higher than the average human's, but there's no way to give a model high-level feedback. You're stuck with the abilities you get out of the box.

You can keep messing around with the system prompt, but in practice this just doesn't produce anything close to the kind of learning and improvement that human employees experience. The reason humans are so useful is not mainly their raw intellect. It's their ability to build up context, to interrogate their own failures, and to pick up small improvements and efficiencies as they practice a task. ...


Watch the full video by Dwarkesh Patel on YouTube.