← Back to Library

You Are Being Told Contradictory Things About AI

{"pitch": "The AI industry is feeding the public a series of contradictory narratives about artificial intelligence that cannot all be true. Some of the most respected researchers in the field disagree fundamentally about whether AGI is imminent or whether current approaches will hit a wall. A new MIT study suggests only 12% of work tasks can currently be automated, while others claim AI systems will handle most white-collar work within three years. This piece examines these competing claims and reveals why the contradictions themselves are the most important story.", "body": {"section1": "The White Collar Job Apocalypse Narrative", "content": "One of the loudest narratives circulating right now is that AI will decimate white collar employment within a few years. Anthropic co-founder Jared Kaplan recently stated that AI systems will be capable of doing most white collar work in two to three years. CNBC cited an MIT study finding that AI can already replace almost 12% of the US workforce.

But digging into the actual data reveals something different. The 11.7% figure represents the dollar value of tasks that current AI models can replicate, not actual job displacement outcomes. The paper makes clear that real workforce impacts depend on company strategies, worker adaptation, and policy choices. Many companies might prefer to keep workers even if only 12% of their labor can be automated, potentially leading to above-inflation wage growth instead of mass layoffs.

Critics might note that Kaplan's three-year timeline is one person's opinion, while the MIT study specifically measures task capability rather than job loss."} {"section2": "The AGI Timeline Debate", "content": "Perhaps no contradiction is more fundamental than the question of whether scaling current architectures will get us to artificial general intelligence. Dario Amodei, Anthropic's co-founder and CEO, recently stated that scaling is going to get us there, with occasional small modifications happening in labs.

But Ilya Sutskever, formerly OpenAI's chief scientist, has said almost exactly the opposite. In recent weeks, Sutskever stated that current approaches "will go some distance and then peter out", continuing to improve but not reaching AGI. On superintelligence, he added: "We are talking about systems that don't exist, that we don't know how to build."

The debate centers on whether models can generalize from existing data to unseen data. Researchers acknowledge we roughly know how well models perform now, but we don't know how they'll perform at larger scales. If models get better at generalizing, they might generate their own synthetic data and solve the problem. If generalization rates stay constant without architectural breakthroughs, we may be in for a long haul."} {"section3": "The Compute Bottleneck Controversy", "content": "A newly published paper from researchers at MIT presents another competing narrative. Their chart shows that between 2022 and 2026, the duration of tasks AI can complete with at least 50% reliability has risen exponentially, coinciding with exponential increases in compute power.

The catch is that OpenAI's projected compute spend grows rapidly through around 2028, but beyond that point the growth in compute availability can no longer be described as exponential. Under formal derivations of the relationship between compute growth and time horizon, this implied slowdown could cause the time-horizon trend to peter out around 2028.

This creates a pick-your-narrative situation: Are we facing an imminent recursive self-improvement loop? Or are we painfully dependent on such loops for additional progress beyond 2028?

A counterargument worth considering: This analysis only covers OpenAI as a leading indicator, and other companies or approaches may differ significantly."} {"section4": "OpenAI's Code Red", "content": "The narrative around OpenAI has taken another contradictory turn. According to reports, ChatGPT usage has dipped slightly in recent weeks, prompting the company to bring forward the release of a new model. The implication is that OpenAI needs more compute for that new model rather than for advertising or other products.

One obvious narrative is that ChatGPT is overrated and OpenAI is dying. But the company plans to ship a new reasoning model next week, said to be ahead of Google's Gemini 3. The model is also said to minimize over-refusals, entertaining scenarios it might previously have declined.

Anthropic has released Claude Opus 4.5, which costs a third as much via the API yet beats the previous version. Testing in coding environments shows it performs better than Gemini 3 Pro for software engineering tasks."} {"section5": "The Usage Plateau Paradox", "content": "Despite obvious gains in AI capabilities, usage of generative AI by Americans is actually plateauing. Stanford University found that in September, 37% of Americans used generative AI at work, down from 46% in June. A Federal Reserve Bank of St. Louis tracker revealed that in August last year, 12.1% of working-age adults used GenAI daily at work. A year later, the figure had barely moved, at 12.6%.

This is difficult to explain, despite the title of this channel: researchers personally use AI far more than in previous years, yet adoption among ordinary Americans has flatlined."} {"section6": "The Model Performance Contradictions", "content": "Gemini 3 Deep Think was released recently and showed clear improvement on questions that Gemini 3 Pro got wrong. The system attempts each question multiple times in parallel and picks the best response, using more tokens to think about each problem.

DeepSeek V3.2 Speciale scored around 53%, impressive for an open model and comparable to GPT 5.1 on high settings. However, Mistral Large 3, released on December 2nd, scored just 20.4%, lower than the version Mistral released 18 months earlier, which scored 22.5%.

This creates the contradictory narrative: Some models are improving dramatically while others are regressing, and it's not clear why."} {"pull_quote": "We are talking about systems that don't exist, that we don't know how to build.", "source": "Ilya Sutskever", "context": "Former Chief Scientist at OpenAI"} {"bottom_line": "The strongest thread running through all these contradictions is the fundamental uncertainty about whether current approaches will get us to AGI or hit a wall. The compute bottleneck thesis suggests we may need recursive self-improvement to keep progress coming, while others say we're building systems that don't yet exist. The biggest vulnerability is that no one knows which narrative is correct, and the industry itself seems to be contradicting every claim it makes. Watch for the data center construction maps from Epoch AI to see if compute slowdowns actually arrive around 2028."}}
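The compute-bottleneck thesis lends itself to a back-of-the-envelope sketch. Everything in the snippet below is an assumption for illustration, not a figure from the paper: the annual compute multipliers, the 30-minute starting horizon, and the elasticity linking compute to time horizon are all invented.

```python
# Toy model of the compute-bottleneck thesis. ALL parameters are assumed
# for illustration; none come from the paper discussed above.
# Premise: task time horizon scales as a power of available compute, so
# when exponential compute growth ends (~2028), the horizon trend flattens.

COMPUTE_GROWTH = {                      # assumed annual multiplier on compute
    2025: 3.0, 2026: 3.0, 2027: 3.0,    # exponential phase
    2028: 1.5, 2029: 1.2, 2030: 1.1,    # hypothesized slowdown
}
ELASTICITY = 1.2                        # assumed: horizon ~ compute ** ELASTICITY
BASE_HORIZON_MIN = 30.0                 # assumed starting horizon, in minutes

compute = 1.0                           # normalized compute at the start
for year in sorted(COMPUTE_GROWTH):
    compute *= COMPUTE_GROWTH[year]
    horizon_minutes = BASE_HORIZON_MIN * compute ** ELASTICITY
    print(year, f"{horizon_minutes / 60:7.1f} hours")
```

Under these made-up numbers, the year-on-year multiplier on the horizon drops from roughly 3.7x during the exponential phase to about 1.1x by 2030, which is the "peter out" scenario in miniature.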

I hope that it might be useful to highlight a few of the myriad contradictory narratives that we are being fed about AI, including a handful from just the last couple of days. For me, the best position to be in is to at least be aware of each perspective and not oblivious to any of them. From talk of a white collar job apocalypse to scaling law paradoxes, today's newly accessible Gemini 3 Deep Think, OpenAI's contradictory code red, Claude soul, a DeepSeek Speciale, and more. As always, it's never about the headlines; it's about the detail.

So, let's start with that talk of an AI white collar job apocalypse. A couple of days ago, one of the co-founders of Anthropic, Jared Kaplan, said that AI systems will be capable of doing most white collar work in 2 to 3 years. That's just one guy's opinion. But according to CNBC, at least, there was an MIT study that found that AI can already replace almost 12% of the US workforce.

If those are the headlines and one of the narratives you are being fed, what's the actual data from the study itself? Well, if you dig into it, you find that they're not talking about job losses. The 11.7% represents the dollar value of the tasks that the paper thinks current AI models can replicate; in other words, not the displacement outcomes, not how many total jobs could be replaced. The paper really tries to make clear that actual workforce impacts in terms of job losses depend on company strategies, worker adaptation, and policy choices.
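To make that distinction concrete, here is a toy calculation. The firm's headcount and average wage are hypothetical, and the "retained" outcome is a stylized assumption; only the roughly 11.7% exposure share echoes the study's figure. A fixed share of the wage bill being automatable says nothing, by itself, about how many jobs go.

```python
# Hypothetical illustration of task-value exposure vs. job displacement.
# Headcount and wage are invented; 0.117 echoes the study's exposure share.

EXPOSURE_SHARE = 0.117  # dollar share of tasks current AI can replicate

def automatable_wage_value(headcount: int, avg_wage: float) -> float:
    """Dollar value of automatable tasks; NOT a count of lost jobs."""
    return headcount * avg_wage * EXPOSURE_SHARE

value = automatable_wage_value(1_000, 80_000)  # hypothetical firm
print(f"Automatable task value: ${value:,.0f}")

# The same exposure admits opposite outcomes, as the paper stresses:
jobs_cut_if_layoffs = round(1_000 * EXPOSURE_SHARE)  # 11.7% of roles go
wage_if_retained = 80_000 * (1 + EXPOSURE_SHARE)     # or pay rises instead
print(jobs_cut_if_layoffs, f"${wage_if_retained:,.0f}")
```

The point of the sketch: identical exposure, two very different labor-market stories, which is exactly the gap between the headline and the paper.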

While many companies may want to get rid of workers if they can, if only 12% of their labor can currently be automated, there is the chance of another outcome: above-inflation wage growth. The next narrative is that we know how to get to artificial general intelligence: just scale up our current architectures. More data, more parameters, more computing power.

Here's Dario Amodei, the co-founder and CEO of Anthropic, speaking yesterday. >> Um, one quick AGI question. It's a science question, which is: do you think just the way transformers work today and just compute power alone, from a scalability sense, that that is what will get to AGI, or do you think there's some other ingredient? And maybe there's a technical question, but I'm trying to keep it very ...