Some thoughts on the Sutton interview
Boy, do you guys have a lot of thoughts about the Sutton interview. I've been thinking about it myself, and I think I have a much better understanding now of Sutton's perspective than I did during the interview itself. So, I wanted to reflect on how I understand his worldview now. And Richard, apologies if there are still any errors or misunderstandings.
It's been very productive to learn from your thoughts. Okay. So, here's my understanding of the steelman of Richard's position. Obviously, he wrote the famous essay, "The Bitter Lesson."
And what is this essay about? Well, it's not saying that you just want to throw as much compute as you possibly can at the problem. The Bitter Lesson says that you want to come up with techniques which most effectively and scalably leverage compute. Most of the compute that's spent on an LLM is used in running it during deployment.
And yet, it's not learning anything during this entire period. It's only learning during the special phase that we call training. And so, this is obviously not an effective use of compute. And what's even worse is that this training period by itself is highly inefficient because these models are usually trained on the equivalent of tens of thousands of years of human experience.
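To put rough numbers on both of those claims, here's a back-of-the-envelope sketch. The model size, corpus size, serving volume, and the standard ~6ND training / ~2N-per-token inference approximations are all my own illustrative assumptions, not figures from the interview:

```python
# Back-of-the-envelope sketch of two claims above. Every number here
# is an illustrative assumption, not a figure from the interview.

N_PARAMS = 1e12          # assumed model size: 1T parameters
TRAIN_TOKENS = 15e12     # assumed pretraining corpus: 15T tokens

# Common approximations: training costs ~6*N*D FLOPs for N params and
# D tokens; inference costs ~2*N FLOPs per generated token.
train_flops = 6 * N_PARAMS * TRAIN_TOKENS

# Assume the deployed model serves 100B tokens/day for two years.
inference_tokens = 100e9 * 365 * 2
inference_flops = 2 * N_PARAMS * inference_tokens

print(f"training FLOPs:   {train_flops:.1e}")
print(f"deployment FLOPs: {inference_flops:.1e}")
print(f"deployment/training ratio: {inference_flops / train_flops:.1f}x")

# "Years of human experience" in the training set, treating tokens and
# words as roughly comparable: assume a person takes in ~150 words per
# minute of language for 8 waking hours a day.
words_per_year = 150 * 60 * 8 * 365
print(f"human-equivalent years of text: {TRAIN_TOKENS / words_per_year:,.0f}")
# With these assumptions it comes out around 10^5 years. The exact
# magnitude shifts with the assumed rates, but under any reasonable
# choice it is vastly more language than a human ever encounters.
```

With these made-up but plausible numbers, deployment compute exceeds training compute, and none of that deployment compute produces any learning.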
And what's more, during this training phase, all of their learning is coming straight from human data. Now, this is an obvious point in the case of pre-training data, but it's even kind of true for the RLVR that we do with these LLMs. These RL environments are human-furnished playgrounds to teach LLMs the specific skills that we have prescribed for them. The agent is in no substantial way learning from organic and self-directed engagement with the world.
Having to learn only from human data, which is an inelastic and hard-to-scale resource, is not a scalable way to use compute. Furthermore, what these LLMs learn from training is not a true world model, which would tell you how the environment changes in response to different actions that you take. Rather, they are building a model of what a human would say next. And this leads them to rely on human-derived concepts.
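One way I find helpful to state the distinction, in my own notation rather than Richard's: the LLM is trained to maximize

$$\mathbb{E}_{x \sim \text{human text}}\Big[\textstyle\sum_t \log p_\theta(x_t \mid x_{<t})\Big],$$

whereas a world model is trained on the agent's own experience to maximize

$$\mathbb{E}_{(s_t,\, a_t,\, s_{t+1}) \sim \text{experience}}\big[\log p_\theta(s_{t+1} \mid s_t, a_t)\big].$$

The first distribution is over what a human would write next; the second is over what the world actually does in response to the agent's actions, and only the second supports the kind of planning Richard cares about.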
One way to think about this: suppose you trained an LLM on all the data up to the year 1900. That LLM probably wouldn't be able to come up with relativity ...