
We're not ready for artificial general intelligence - Will MacAskill

Will MacAskill, a leading voice in effective altruism, delivers a chilling verdict in this conversation with Alex O'Connor: humanity is not merely unprepared for Artificial General Intelligence (AGI), it is actively racing toward a precipice with its eyes closed. While the public debate fixates on sci-fi scenarios of robot uprisings, MacAskill argues that the most immediate dangers are far more mundane, structural, and economically driven. This piece is notable not for predicting the end of the world, but for exposing the terrifying mismatch between the trillions of dollars fueling AI development and the negligible resources dedicated to managing its consequences.

The Asymmetry of Investment

The core of MacAskill's argument rests on a stark economic reality: the incentives driving AI development are misaligned with human safety. He points out that while tech giants pour "tens to hundreds of billions of dollars" into building systems that can do anything a human can do cognitively, the movement worried about the risks is "tiny in comparison." MacAskill notes that leaders of major companies often speak narrowly of benefits like "improvements in medicine" or "greater economic prosperity," while the broader existential risks are largely ignored by the very entities creating the technology.


This framing is effective because it moves the conversation away from abstract philosophy and into the hard mechanics of capital. MacAskill argues that there is simply "no money in AI safety," whereas the prize for being the first to achieve AGI is to become "by far the richest and most powerful company in the world." The economic stakes create a race to the bottom on safety, a dynamic MacAskill compares to the early days of the climate crisis, when the ratio of spending between big oil and the environmental movement was "hundreds of thousands to one."

Critics might note that this comparison to climate change is imperfect; unlike carbon emissions, AI development is not a byproduct of an existing industry but a deliberate, accelerated sprint. However, the parallel regarding the sheer scale of corporate power versus the scarcity of regulatory oversight remains compelling.

The Spectrum of Catastrophe

MacAskill expands the definition of "disastrous outcomes" beyond the popular fear of a rogue AI turning on humanity. He categorizes the risks into four distinct buckets, moving from the theoretical to the immediately practical. The first is the classic "loss of control" scenario, where AI systems "start pursuing their own goals" rather than humanity's. But he quickly pivots to more tangible threats, such as the "acceleration of extremely dangerous technology."

He highlights the democratization of bioweapons as a primary concern. In the past, creating a new disease required "PhD supervision" and specialized labs; with advanced AI supplying that expertise, the knowledge barrier collapses. "The ability to create new diseases will become increasingly democratized," MacAskill warns, noting that the main constraint is no longer physical apparatus but information. Similarly, he envisions a future of warfare involving "billions or trillions of mosquito-sized flying autonomous drones," a dystopian vision extrapolated from current technological trajectories rather than speculative fiction.

"Getting to that point within the next 10 years even within the next 5 years is totally on the table and that means we'll just need to be much more nimble."

The argument gains urgency here. MacAskill suggests that unlike climate change, which unfolded over decades allowing for slow-moving political movements, AGI could arrive in a timeframe that renders traditional governance obsolete. He argues that if society hopes to steer this technology toward positive outcomes, the response must be "much faster growing than other movements that we've seen in history."

The Concentration of Power

Perhaps the most under-discussed risk MacAskill identifies is the potential for extreme power concentration. He posits that as AI automates human labor, the owners of capital and data centers will no longer need the workforce, leading to a progressive undermining of democratic institutions. "There's very little in the way of possible pushback," he observes, as elites gain access to superior strategic advisers and the ability to stage coups with unprecedented ease.

This section of the commentary is particularly striking because it addresses the erosion of human agency without invoking a single robot rebellion. The danger lies in the "discrepancies in who has access to the most powerful AI systems," allowing a small group to consolidate power while the rest of humanity is rendered economically irrelevant. MacAskill also touches on the cognitive threat: the risk that humanity becomes "overwhelmed by the sheer wave of new ideas" or falls prey to mass propaganda generated by AI systems designed to tell people "what you want to hear."

Critics might argue that MacAskill underestimates the resilience of democratic institutions or the potential for open-source AI to democratize power rather than concentrate it. Yet, his focus on the speed of technological change suggests that institutional resilience may simply be too slow to keep pace.

Bottom Line

MacAskill's strongest contribution is his refusal to treat AI safety as a philosophical afterthought, framing it instead as an urgent economic and political crisis where the incentives are fundamentally broken. The argument's greatest vulnerability lies in its reliance on the assumption that current accelerationist trends will continue unchecked, ignoring potential regulatory breakthroughs or technical bottlenecks. Readers should watch for whether the "nimble" global response MacAskill demands can actually materialize before the next five years close the window on human control.

"There doesn't I can't think of like an obvious financial incentive to set up some kind of organization that tells everyone to, you know, slow down technological growth."

Sources

We're not ready for artificial general intelligence - Will MacAskill

by Alex O'Connor · Cosmic Skeptic

Are we ready for artificial general intelligence? >> That's a great question, and I think the answer is very clearly no. I think, yeah, the transition from where we are now to AI systems that can do anything, cognitively speaking, that a human can do, and then from there beyond that point, is going to be one of the most momentous transitions in all of human history. It will bring a huge range of changes to the world, and almost no effort is going into preparing for these changes, even though some of the biggest companies in the world are trying to make this happen and have this as their explicit aim.

>> I'm interested to hear you say that because, from my perspective, I don't know anything about the technologies behind artificial intelligence. I don't really understand how an LLM really works. I don't know how to code software or anything like that. So I only ever hear about this really from a sort of ethical, philosophical perspective.

And it kind of feels like that's all anybody's ever talking about: AGI, and it's going to take over the world, and there's going to be job losses, and all of this kind of stuff. I think people are talking about that a lot in my sphere. But do you mean to say that, as far as actual practical efforts go, that isn't mirrored in policy planning and effective campaigning to actually try to put a stop to disastrous outcomes? >> Yeah, well, I think there's a few different categories of people.

So there are the people who are trying to build AGI, that's OpenAI and Google DeepMind and some other companies, and collectively they are spending tens to hundreds of billions of dollars on investment to try to make that happen. Sometimes the leaders of those companies talk about, oh, all the good things that AI will be able to do. It's normally really quite narrow, focused on improvements in medicine, perhaps greater economic prosperity. Then there's a second category of people who tend to be primarily worried about loss of control to AI systems.

These are categories of people who are talking about AGI, and there the numbers of people and amounts of money are tiny in comparison to the investment in ...