
You Are Being Told Contradictory Things About AI

I hope it might be useful to highlight a few of the myriad contradictory narratives we are being fed about AI, including a handful from just the last couple of days. For me, the best position to be in is to be aware of each perspective and oblivious to none of them. We'll cover talk of a white-collar job apocalypse, scaling-law paradoxes, today's newly accessible Gemini 3 Deep Think, OpenAI's contradictory "code red", Claude's soul, a DeepSeek special, and more. As always, it's never about the headlines; it's about the detail.

So, let's start with that talk of an AI white-collar job apocalypse. A couple of days ago, Jared Kaplan, one of the co-founders of Anthropic, said that AI systems will be capable of doing most white-collar work in two to three years. That's just one person's opinion. But according to CNBC, at least, an MIT study found that AI can already replace almost 12% of the US workforce.

If those are the headlines and one of the narratives you are being fed, what does the study itself actually say? Dig into it and you find that it is not talking about job losses. The 11.7% represents the dollar value of the tasks that the paper thinks current AI models can replicate: not, in other words, displacement outcomes, and not how many jobs could be replaced. The paper takes pains to make clear that actual workforce impacts, in terms of job losses, depend on company strategies, worker adaptation, and policy choices.

While many companies may want to shed workers if they can, if only 12% of their labor can currently be automated, there is the chance of another outcome: above-inflation wage growth.

The next narrative is that we know how to get to artificial general intelligence: just scale up our current architectures with more data, more parameters, and more computing power.

Here's Dario Amodei, CEO and co-founder of Anthropic, speaking yesterday:

>> Um, one quick AGI question. It's a science question, which is: do you think, just the way transformers work today, and just compute power alone from a scalability sense, that that is what will get to AGI? Or do you think there's some other ingredient? And maybe there's a technical question, but I'm trying to keep it very ...


Watch the full video by AI Explained on YouTube.