Will MacAskill, a leading voice in effective altruism, delivers a chilling verdict in this conversation with Alex O'Connor: humanity is not merely unprepared for Artificial General Intelligence (AGI), it is actively racing toward a precipice with its eyes closed. While the public debate fixates on sci-fi scenarios of robot uprisings, MacAskill argues that the most immediate dangers are far more mundane, structural, and economically driven. This piece is notable not for predicting the end of the world, but for exposing the terrifying mismatch between the hundreds of billions of dollars fueling AI development and the negligible resources dedicated to managing its consequences.
The Asymmetry of Investment
The core of MacAskill's argument rests on a stark economic reality: the incentives driving AI development are misaligned with human safety. He points out that while tech giants pour "tens to hundreds of billions of dollars" into building systems that can do anything a human can do cognitively, the movement worried about the risks is "tiny in comparison." MacAskill notes that leaders of major companies often speak narrowly of benefits like "improvements in medicine" or "greater economic prosperity," while the broader existential risks are largely ignored by the very entities creating the technology.
This framing is effective because it moves the conversation away from abstract philosophy and into the hard mechanics of capital. MacAskill argues that there is simply "no money in AI safety," whereas the prize for being the first to achieve AGI is to become "by far the richest and most powerful company in the world." The economic stakes create a race to the bottom on safety, a dynamic MacAskill compares to the early days of the climate crisis, when the ratio of spending between big oil and the environmental movement was "hundreds of thousands to one."
Critics might note that this comparison to climate change is imperfect; unlike carbon emissions, AI development is not a byproduct of an existing industry but a deliberate, accelerated sprint. However, the parallel regarding the sheer scale of corporate power versus the scarcity of regulatory oversight remains compelling.
The Spectrum of Catastrophe
MacAskill expands the definition of "disastrous outcomes" beyond the popular fear of a rogue AI turning on humanity. He categorizes the risks into four distinct buckets, moving from the theoretical to the immediately practical. The first is the classic "loss of control" scenario, in which AI systems "start pursuing their own goals" rather than humanity's. But he quickly pivots to more tangible threats, such as the "acceleration of extremely dangerous technology."
He highlights the democratization of bioweapons as a primary concern. In the past, creating a new disease required "PhD supervision" and specialized labs; today the main barrier is no longer physical apparatus but information, and advanced AI puts that information within almost anyone's reach. "The ability to create new diseases will become increasingly democratized," MacAskill warns. Similarly, he envisions a future of warfare involving "billions or trillions of mosquito-sized flying autonomous drones," a dystopian vision extrapolated from current technological trajectories rather than speculative fiction.
"Getting to that point within the next 10 years even within the next 5 years is totally on the table and that means we'll just need to be much more nimble."
The argument gains urgency here. MacAskill suggests that unlike climate change, which unfolded over decades and allowed slow-moving political movements to form, AGI could arrive in a timeframe that renders traditional governance obsolete. He argues that if society hopes to steer this technology toward positive outcomes, the response must be "much faster growing than other movements that we've seen in history."
The Concentration of Power
Perhaps the most under-discussed risk MacAskill identifies is the potential for extreme power concentration. He posits that as AI automates human labor, the owners of capital and data centers will no longer need a workforce, progressively undermining democratic institutions. "There's very little in the way of possible pushback," he observes, as elites gain access to superior strategic advisers and the ability to stage coups with unprecedented ease.
This section of the commentary is particularly striking because it addresses the erosion of human agency without invoking a single robot rebellion. The danger lies in the "discrepancies in who has access to the most powerful AI systems," allowing a small group to consolidate power while the rest of humanity is rendered economically irrelevant. MacAskill also touches on the cognitive threat: the risk that humanity becomes "overwhelmed by the sheer wave of new ideas" or falls prey to mass propaganda from AI systems designed to tell each listener "what you want to hear."
Critics might argue that MacAskill underestimates the resilience of democratic institutions or the potential for open-source AI to democratize power rather than concentrate it. Yet, his focus on the speed of technological change suggests that institutional resilience may simply be too slow to keep pace.
Bottom Line
MacAskill's strongest contribution is his refusal to treat AI safety as a philosophical afterthought, framing it instead as an urgent economic and political crisis where the incentives are fundamentally broken. The argument's greatest vulnerability lies in its assumption that current accelerationist trends will continue unchecked, setting aside potential regulatory breakthroughs or technical bottlenecks. Readers should watch whether the "nimble" global response MacAskill demands can actually materialize before the next five years close the window on human control.
"There doesn't I can't think of like an obvious financial incentive to set up some kind of organization that tells everyone to, you know, slow down technological growth."