Bubble or No Bubble, AI Keeps Progressing (ft. Relentless Learning + Introspection)
For the first year of this channel, 2023, it was striking to me how few people were sensing how big an impact language models would have on the world. But then in the second year, I felt the idea of an imminent singularity and mass job layoffs had become the dominant narrative, and in several of my videos I tried to show evidence that this was overblown, for now. Now, as you might have noticed, the vibe has reversed again, with talk of an AI bubble in company valuations being conflated, in my view, with the assertion that we are in a plateau of model progress.
So this quick video, like my last one, is again a counternarrative. And no, not one built just on hopes for the forthcoming Gemini 3 from Google DeepMind. Instead, I would ask: what, for you, is missing from language models that stops them being what you imagined AI would be? Personally, I put together some categories a while back, and I'm sure you may have others. Some would say, well, they don't learn on the fly, or there's no real introspection going on, just regurgitation.
Thing is, AI researchers have got to earn their bread somehow, so there's always a paper for whatever deficiency you can imagine. I am going to end the video with some more visual ways that AI is progressing, as yes, it seems Nano Banana 2 from Google may have been spotted in the wild. But first, on continual learning, or the lack of it: that inability of the models you speak to, like ChatGPT, to properly learn about you and your specifications, and to just organically grow into GPT 5.5 rather than have to be pre-trained into becoming GPT 5.5.
If AI were all hype, you might say, well, that's definitely going to take at least a decade to solve. But for others, like these authors at Google, it's a problem for which there is a ready and benchmarked solution. I will, however, caveat that by saying this is a complex paper, and despite what the appendix promises, not all the results have actually been released yet. But here is my attempt at a brief summary.
Alas, there are not many pretty diagrams. But essentially, the paper shows that there are viable approaches for allowing models to continually learn while retaining ...
Watch the full video by AI Explained on YouTube.