Apple’s ‘AI Can’t Reason’ Claim Seen By 13M+, What You Need to Know
Almost no one has the time to investigate headlines like this one, seen by tens of millions of people: that AI models don't actually reason at all, they just memorize patterns, that AGI is mostly hype, and that even the underlying Apple paper quoted calls it an "illusion of thinking". The story was picked up in mainstream outlets like the Guardian, which described it as a pretty devastating Apple paper.
So, what are people supposed to believe when half the headlines are about an imminent AI job apocalypse and the other half say LLMs are all fake? Well, hopefully you'll find that I'm not trying to sell a narrative. I'll just say what I found having read the 30-page paper in full, along with the surrounding analyses. I'll also end with a recommendation on which model you should use and, yes, touch on the brand-new o3 Pro from OpenAI.
Although I would say that the $200-per-month price to access that model is not for the unwashed masses like you guys. Some very quick context on why a post like this one gets tens of millions of views and coverage in the mainstream media. And no, it's not just because of the unnecessarily frantic "BREAKING" at the start. It's also because people hear the claims made by the CEOs of these AI labs, like Sam Altman yesterday posting, "Humanity is close to building digital superintelligence.
We're past the event horizon. The takeoff has started." While the definitions of those terms are deliberately vague, you can understand people paying attention. People can see for themselves how quickly large language models are improving, and they can read the headlines generated by the CEO of Anthropic saying there is a white-collar bloodbath coming.
It's almost every week now that we get headlines like this one in the New York Times. So, it's no wonder people are paying attention. Now, some would say, cynically, that Apple seems to be producing more papers "debunking" AI than actually improving AI. But let's set that cynicism aside.
The paper essentially claimed that large language models don't follow explicit algorithms and struggle with puzzles past a sufficient degree of complexity. Puzzles like the Tower of Hanoi challenge, where you've got to move a tower of discs from one peg to another, but never place a larger disc on top of a smaller one. They ...
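To make the puzzle concrete: Tower of Hanoi has a well-known recursive solution, and the optimal move count doubles with every extra disc, which is why the puzzle serves as a clean complexity dial. Here's a minimal sketch of that standard recursion (an illustration, not code from the Apple paper; peg names are arbitrary):

```python
def hanoi(n, source, target, spare, moves=None):
    """Solve Tower of Hanoi: move n discs from source to target.

    A larger disc is never placed on a smaller one. The optimal
    solution takes 2**n - 1 moves, so difficulty grows exponentially
    with the number of discs.
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 discs out of the way
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 discs on top of it
    return moves

print(len(hanoi(3, "A", "C", "B")))  # → 7, i.e. 2**3 - 1
```

The exponential blow-up is the point: at 10 discs the optimal solution is already 1,023 moves, which is the kind of "sufficient complexity" regime the paper probes.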
Watch the full video by AI Explained on YouTube.