{"content": ["> The key claim is that AI will move from automating individual tasks to automating entire job categories — and this shift could happen in one to two years.
Dario Amodei, co-founder of Anthropic, has been remarkably consistent: transformative AI will arrive within the next year or two, certainly before 2030. That's less than 50 months away. It's a timeline that feels almost surreal when you actually try to wrap your mind around it.
In a nearly 20,000-word essay published in the last 48 hours, Amodei maps out where he sees things going — for better or worse. His previous essay, Machines of Loving Grace, became the preoccupation of Silicon Valley for months. Now he's back with predictions that will dominate conversations through 2026.
From Code Writing to Job Automation
Amodei's first prediction is striking: tools like Claude Code will move from automating individual coding tasks to automating entire job categories, starting with software engineering. He points to law and finance, where today's AI integrations (into Excel, for instance) help you complete specific tasks; the shift he foresees is AI doing the whole job you're doing.
The engine behind this prediction is what he calls the scaling laws. The idea: AI systems get predictably better at every cognitive skill we can measure. More data and more compute yield a smooth, unyielding increase in capabilities. Ignore the headlines about AI hitting a wall or being a bubble, he argues. Some tools are overhyped in the short run and certain companies may go bust — but the underlying curve is strong, consistent, and predictable.
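The scaling-law claim can be made concrete with a toy power-law fit. This is a minimal sketch with invented constants; the real exponents come from empirical fits in the scaling-law literature, not from this code. The point is the shape: every increase in compute buys a smooth, predictable improvement, with no wall but diminishing absolute gains.

```python
# Toy illustration of a compute scaling law: loss(C) = a * C**(-b) + floor.
# The constants below are made up for illustration only.

def loss(compute: float, a: float = 10.0, b: float = 0.05, floor: float = 1.0) -> float:
    """Predicted loss for a given compute budget under a power-law fit."""
    return a * compute ** (-b) + floor

# Each 10x of compute buys a smooth, predictable improvement.
for exp in range(20, 27):  # 1e20 .. 1e26 FLOPs
    print(f"1e{exp} FLOPs -> loss {loss(10.0 ** exp):.3f}")
```

Under this toy fit, loss never hits a hard wall; it glides down toward the irreducible floor, which is the picture Amodei's "strong, consistent, and predictable" curve describes.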
The key claim: moving from automating individual tasks within a job to automating entire jobs.
Amodei has predicted something like this before. In Machines of Loving Grace, published in October 2024, he outlined transformative AI that could be as little as one to two years away, arriving as early as 2026. Now, in this new essay, he restates that one-to-two-year horizon several times without fully acknowledging how his own predictions have shifted.
The evidence he's citing: some of the strongest engineers at Anthropic — and presumably some of the highest-paid — are handing over almost all their coding to AI. Notice that's their coding though, not their entire job.
The difference matters. Use Claude Code daily and you see both sides: its best suggestions are strokes of genius most humans wouldn't have come up with, and its worst would destroy almost any app you create.
The Second Extrapolation
But there's a second extrapolation to contend with. AI, he argues, will move not only from all coding to all of software engineering, but from software engineering to every other white-collar job. In Amodei's words, it cannot possibly be more than a few years before AI is better than humans at essentially everything, as long as the basic exponential continues.
He thinks the exponential could speed up as AI starts to automate the job of doing AI research itself. This would create a feedback loop gathering steam month by month and may only be one to two years away from a point where the current generation of AI autonomously builds the next.
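The feedback-loop argument can be sketched numerically. Here is a toy model with all rates invented purely for illustration: it compares steady monthly progress against progress whose rate itself compounds as AI automates more of the AI research.

```python
# Two toy trajectories over 24 months: constant monthly progress vs. a
# feedback loop in which the rate of progress itself compounds.
# All numbers are invented to illustrate the dynamic, not to predict it.

STEADY_RATE = 0.05  # hypothetical 5% capability gain per month, fixed

def steady(months: int) -> float:
    """Capability multiple with a constant monthly improvement rate."""
    return (1 + STEADY_RATE) ** months

def feedback(months: int) -> float:
    """Capability multiple when automated research compounds the rate itself."""
    capability, rate = 1.0, STEADY_RATE
    for _ in range(months):
        capability *= 1 + rate
        rate *= 1.10  # each generation speeds up work on the next
    return capability

print(f"24 months, steady:   {steady(24):.1f}x")
print(f"24 months, feedback: {feedback(24):.1f}x")
```

Even with these modest made-up numbers, the feedback trajectory pulls far ahead of the steady one within two years, which is the "gathering steam month by month" shape Amodei is pointing at.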
There's a good chance all of this is coming in one to two years, and if not that, a very strong chance it comes in the next few years, before 2030. This comes from a lab leader who has overseen ten-times year-on-year revenue growth. Even in Silicon Valley, that's unprecedented for a company of Anthropic's size.
First caveat: Amodei may be slightly exaggerating the pace of progress in coding. He describes models going, in the last two years, from barely being able to complete a single line of code to writing all or almost all of the code for some people, including engineers at Anthropic. Yet one of the first experiments with ChatGPT in November 2022 was getting it to write the code for a miniature fitness app, which felt amazing and really cool at the time. It could already write a single line of code. There were viral videos of coders saying, "Oh my god, we're all going to be automated," based on the original November 2022 ChatGPT. That's three and a half years ago.
Writing all or almost all of the code? An estimate from an OpenAI engineer recently suggested their model Codex, not far behind Claude Code, was automating about twenty percent of their code. For Anthropic, it's probably around eighty percent; you'd be talking more in the range of eighty to ninety percent rather than one hundred.
Second caveat: the extrapolation from software engineering to jobs in finance, consulting, and law isn't impossible, but the feedback loops are longer. Overlook something in a legal contract and it might come back to bite you in three years, not in the three seconds or three minutes that unit tests take in software engineering. If an AI model misses a nuance while analyzing headcount for a consulting report at McKinsey or Bain, the negative ramifications may not play out until the medium term.
The Underclass Prediction
The second mega prediction: Amodei foresees an unemployed or very low-wage underclass of up to fifty percent of the population. You may have seen plenty of viral posts on Twitter or X about having only a few months to escape the permanent underclass.
Somewhat strangely, he thinks this will fall hardest on those of lower intellectual ability, which is harder to change, than on others. This is potentially quite toxic messaging for eighteen-year-olds and twenty-somethings: it implies they have to scramble to earn whatever wages they can in the next year or two. Forget the long term, drop everything, maybe invest in crypto or start your own AI startup.
This isn't to say a permanent underclass is impossible, but it's the duty of all of us to attach something like a health warning whenever the topic comes up. The smartest move is not to discount the possibility of a rapid takeoff in capabilities: lean into tools like Claude Code and Claude so you can see both how good they are and the mistakes they still make. But don't bet your future on an imminent singularity.
Even if there's a one-third chance of this happening over the next, say, one to four years — what about the two-thirds chance that it doesn't? You're not being smarter than everyone else by seeing the singularity coming when everyone else is oblivious. The smartest thing to do is factor it in as a chance but not bet everything on it.
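That "factor it in, but don't bet everything" logic is just expected value. A toy sketch, with probabilities and payoffs that are entirely hypothetical and chosen only to show the structure of the argument:

```python
# Toy expected-value comparison of two career strategies.
# All probabilities and payoffs are hypothetical.

p_takeoff = 1 / 3  # assumed chance of a rapid AI takeoff in the next few years

# Payoffs (arbitrary units) for each strategy under each scenario:
#   "all-in": drop everything and bet on imminent transformation
#   "hedged": keep building skills while adopting AI tools heavily
payoffs = {
    "all-in": {"takeoff": 10, "no_takeoff": -5},
    "hedged": {"takeoff": 6, "no_takeoff": 4},
}

for strategy, p in payoffs.items():
    ev = p_takeoff * p["takeoff"] + (1 - p_takeoff) * p["no_takeoff"]
    print(f"{strategy}: expected value {ev:.2f}")
```

With these made-up numbers, the hedged strategy wins on expectation precisely because the two-thirds branch is not ignored; that is the whole argument in four lines of arithmetic.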
Third caveat: notice again that he places this displacement of half of all entry-level white-collar jobs within the next one to five years. Yet his prediction from almost nine months ago, reported in Axios, was also one to five years. Nine months on, the window hasn't moved: it's not as though he's now saying zero to four years.
There's one more place in the essay where he does this. And Amodei is not alone: Jared Kaplan, another co-founder of Anthropic, gave a fifty percent chance that two to three years from now even theoretical physicists will be mostly replaced by AI. I'm not sure how that squares with the claim that AI will hit those of lower intellectual ability harder than the smartest, but there we go.
The GDP Growth Claim
There's a linked suggestion a few paragraphs earlier in the essay: that this could lead to a sustained annual GDP growth rate of ten to twenty percent. Look at the language he uses in that sentence; it's remarkable hedging.
He says, "I suggest that this rate second may be third possible. That this rate, ten to twenty percent, may be possible." This is hedging in language to a degree that most thought wasn't possible. Why not just say that a ten to twenty percent growth rate is possible or predict that a ten to twenty percent growth rate might happen?
Look at the last sixty to seventy years of world GDP growth data. Since the 1960s there are spikes up to six percent, more regularly around four percent, and sometimes dips to two or even negative growth. Can you even see the impact of the internet revolution, globalization, the breaking down of trade barriers, software, or smartphones in that curve?
None of this says a ten to twenty percent growth rate is impossible. But for a scientist like Dario Amodei, you'd expect some pretty compelling evidence before even suggesting that ten to twenty percent might be possible.
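The arithmetic shows why the claim is so extraordinary: compound those rates over a single decade and compare them with the historical baseline. This is pure arithmetic, not a forecast.

```python
# Compound growth over a decade at the historical baseline (~3%) vs.
# Amodei's suggested 10-20% range. Just arithmetic, no prediction.

def compound(rate: float, years: int = 10) -> float:
    """Growth multiple after `years` at a constant annual `rate`."""
    return (1 + rate) ** years

for rate in (0.03, 0.10, 0.20):
    print(f"{rate:.0%} annual growth -> {compound(rate):.2f}x GDP in 10 years")
```

A three percent world grows about a third larger in a decade; a twenty percent world grows more than sixfold. That gap between roughly 1.3x and 6x is the size of the claim being hedged.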
Amodei ends this part of the essay by saying the impact on labor will be a short-term shock unprecedented in size.
The Totalitarian Nightmare
The third mega prediction: AI will soon enable totalitarian nightmares. He thinks that may be the default outcome within China — although he gives plenty of hints that there's a risk in the US too.
You don't need to believe in superintelligence to foresee AI-based mass surveillance; a full documentary has already been made on AI surveillance, and it's not just China. Amodei's scenarios go a step further: fully autonomous weapons, and swarms of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI.
This could be an unbeatable army. It could also suppress dissent by following around every citizen. If you thought you were safe on WhatsApp or another encrypted tool, here's news: Pegasus spyware has been deployed in multiple countries. Amodei is right that some safeguards in democracies are gradually eroding. Democracies might say they're developing these capabilities to fight autocracies, but like an immune system, there's some risk of them turning inward and becoming a threat themselves.
One recurring message he hammers again and again: the need to ban selling advanced chips to China. We should absolutely not, he argues, be selling chips, chipmaking tools, or data centers to the CCP (the Chinese Communist Party).
Fourth caveat: while the risks are pretty self-evident, that doesn't mean the conclusion is agreed upon; it deserves a fair bit of caveating. Insiders say that if the US didn't sell advanced chips to China, it would simply accelerate development of Huawei's chips, and China would more rapidly become self-sufficient in AI. Any notion of compute governance, where software might monitor what chips are doing, would then be completely out the window.
This is not to say the current on-and-off ban on China using advanced NVIDIA chips isn't having some effect. Even Chinese AI lab leaders are warning of a widening gap with the US, specifically because of compute. Here's Justin Lynn of Alibaba, responsible for Qwen, arguably the best of the Chinese open-source series of models: "A massive amount of OpenAI's compute is dedicated to next-gen research, whereas we are stretched thin. Just meeting delivery demands consumes most of our resources." He added, "The chances of a Chinese company leapfrogging the likes of OpenAI and Anthropic are less than twenty percent over the next three to five years."
On the other hand, when Amodei says there is no reason to give a giant boost to the Chinese AI industry during this critical period, consider what is arguably the number one blocker to Anthropic continuing to ten-times its revenue each year, and to Amodei himself becoming a trillionaire, as he hints at in the essay. It would be China releasing a model that can do much of what Claude Code, or Claude 5 Opus, or whatever comes next can do, at one-tenth or one-hundredth of the price.
For those who use Claude Code: if there were a model three percent worse but ten times cheaper, would you switch? There's been some mini virality around Kimi K2.5, with a few million views on Twitter, and its coding framework, Kimi Code, is now competitive even according to Anthropic's own benchmarks.
The Counterarguments Worth Considering
Critics might note that Amodei has restated a near-term timeline across essays without acknowledging how his predictions have shifted between them. Some of the strongest voices in AI safety are pushing for extreme caution on these claims, not because they doubt AI capabilities, but because the economic and social implications need far more rigorous analysis before timelines are broadcast to the public.
Google DeepMind CEO Demis Hassabis has said of those same scaling laws: they're going very well. Increased capabilities come from putting in more compute and more data and making models generally larger, so that trend is continuing. But it may not be as fast as it was a couple of years ago; there's some talk of diminishing returns.
There's a big difference, though, between no returns and exponential returns. We're somewhere in the middle: very good returns, well worth pursuing. But getting all the way to artificial general intelligence may require one or two big innovations that are still missing, in addition to scaling up existing ideas.
Bottom Line
Amodei's core predictions are clear: AI will move from automating individual tasks to entire jobs, create massive unemployment, potentially accelerate GDP growth dramatically, and enable forms of surveillance and control that sound like dystopian fiction. The strongest part of his argument is the extrapolation from coding to all white-collar work — a logical progression if capabilities continue their current trajectory. His biggest vulnerability is the timeline: he's been predicting one to two years for transformative AI since 2024 without acknowledging how those predictions have shifted. Watch for whether Chinese competition actually narrows that gap — and whether his timeline holds or collapses under the weight of real-world feedback loops in law, medicine, and finance.