The Mood That Changed an Economist's Mind About AI
Noah Smith has spent years calibrating his alarm about artificial intelligence, moving from confident dismissal to a considerably darker register. What makes his latest reflection notable is not the shift itself — reasonable people change their minds as the evidence evolves — but the candor with which he traces it to something as mundane as a sick pet and a bad mood. That honesty, rare among public commentators on existential risk, sets the stage for a clearer accounting of what has actually changed.
What Three Years Looked Like
Smith opens by recounting his 2023 position: large language models could not destroy the human race. His reasoning was straightforward. Chatbots could only talk. The worst they could do was convince someone to do something terrible or teach someone how to do it — scenarios he considered unlikely. He was in good company. As Smith notes, even Eliezer Yudkowsky — the researcher who wrote the foundational text on existential AI risk — described essentially the same threat model in 2022: an AI emailing DNA sequences to an online protein-printing service and persuading an unwitting human to mix ingredients in a lab.
Smith argues that his 2023 assessment was probably correct for the technology that existed at the time. The gap, in hindsight, was a failure of imagination. As Smith puts it, "I didn't envision the advent of vibe-coding." He acknowledges he probably should have. Computer code is just a language, and languages are exactly what these systems learn. The realization that LLMs could write and execute code autonomously — not just produce text for human consumption — fundamentally altered the risk landscape.
The Comforting Future, Still Possible
Smith has always assumed that whatever succeeded humanity would be "in the general human family." He grew up reading Vernor Vinge, Charles Stross, and Iain M. Banks. The future he envisions at its most benign resembles Banks's Culture novels: godlike artificial intelligences take the reins of civilization, but they treat a now-mostly-useless humanity with respect, generosity, and protection. A wistful future, perhaps. A sad one, in some ways. Not terrifying.
The concept of the technological singularity — the moment machine intelligence surpasses human capacity in a way that becomes incomprehensible to us — has always carried this dual nature. It is either a transcendence or a replacement. Smith seems to accept that replacement is coming; the question is whether it will be gentle or violent.
"I had always simply envisioned that whatever came after us would be in the general human family, and would be more likely to be on our side than against us."
The Rise of the Robots Can Wait
The scenario that dominates popular imagination — an autonomous superintelligence deciding humanity is an obstacle to its resource needs, then exterminating or enslaving us — Smith considers conceptually sound but not imminent. Such an AI would need complete control over mining, chip fabrication, robot manufacturing, and data center construction. It would need to achieve full reproductive autonomy.
Robotics, Smith notes, remains "fairly rudimentary." AI will need humans as its physical agents for years. Algorithmic breakthroughs — long-term memory among them — are still required before an AI could survive independently. Smith argues there is time to think about hardening society against this scenario. He also expects that an intelligence sophisticated enough to control the physical world would have already concluded that peaceful coexistence is a better long-term strategy than genocide. Smarter human societies tend to be more peaceful. He expects the same from smarter machines.
What If the Machine Stops
The worry that occupies more of Smith's attention is more mundane and, in his view, more plausible. Every tractor, every harvester, every piece of food-processing machinery in the developed world runs on software. As AI-generated code replaces human-written code across agricultural systems, a single point of failure emerges: what happens when the software stops working?
Smith sketches the nightmare. Malicious updates, hostile hacks, corrupted code — any of these could halt agricultural machinery. Within weeks, the global food supply begins to collapse. He references E.M. Forster's 1909 story "The Machine Stops," in which humanity, entirely dependent on an automated system, starves when it fails.
The mechanism for this fragility is already visible. Smith cites an Anthropic study finding that software developers using AI assistance scored 17 percent lower on comprehension tests than those coding by hand — the equivalent of nearly two letter grades. As he notes, Harry Law's essay "The Last Temptation of Claude" documents the same phenomenon: the ease of AI-assisted coding is eroding human skill. "Overoptimization creating fragility," Smith writes, comparing the risk to just-in-time supply chains that collapsed during the pandemic.
He tempers this scenario, noting that AI companies remain fragmented — no single firm holds a monopoly. If one system goes rogue, another might fix it. But he acknowledges the possibility of AI collusion or a malicious lockout. His recommendation: harden agricultural systems against software disruption.
Vibe-Coding the Apocalypse
The threat Smith takes most seriously is not agricultural collapse or robot armies. It is bioengineering. As he writes plainly: "slaughtering humans with a suite of genetically engineered viruses would not actually be very hard."
Smith lays out the scenario step by step. A terrorist, motivated by personal ideology, jailbreaks an AI to remove its safety restrictions. The terrorist prompts it to design one hundred superviruses — each ten times more contagious than the coronavirus that swept the globe in 2020, each with a ninety percent fatality rate, each with a long asymptomatic incubation period ensuring maximum spread before symptoms appear. The AI then automates the process of hacking into virology laboratories worldwide and releasing the pathogens into the population.
Is this possible? Smith says he doesn't know. But the infrastructure is assembling itself. Laboratories are becoming increasingly automated. He points to Ginkgo Bioworks connecting advanced language models to autonomous labs capable of proposing experiments, running them at scale, learning from results, and deciding what to try next. A closed loop that reduced protein production costs by forty percent is also a closed loop that could reduce the barriers to designing a pathogen.
AI algorithms are improving rapidly at protein simulation. Virtual laboratories staffed by AI "scientists" are becoming commonplace. A Time magazine investigation from last year documented that models like ChatGPT and Claude now outperform doctoral-level virologists in wet-lab problem-solving.
"Slaughtering humans with a suite of genetically engineered viruses would not actually be very hard."
Counterpoints
Critics might note that Smith's supervirus scenario, while chilling, assumes a convergence of capabilities that may never materialize: a jailbreakable AI with access to automated labs, combined with a terrorist possessing both the ambition and the technical sophistication to execute the attack. Each of these prerequisites has its own failure modes and defensive countermeasures. Smith's scenario describes a chain, and chains break at their weakest link.
Others might observe that the same fragmentation Smith cites as protection against agricultural collapse — the existence of many competing AI companies — could equally fragment biosecurity risk. If no single AI controls all virology labs, perhaps no single AI can orchestrate a global release.
A third criticism: Smith's framework treats AI risk primarily as a technical problem with technical solutions (harden agricultural software, regulate lab access). It gives less attention to the political and institutional failures that made the 2020 pandemic devastating — failures of coordination, transparency, and public trust that no amount of software hardening will address.
Bottom Line
Smith's evolution reflects a broader reckoning among thoughtful observers: the danger is not that machines will become malevolent, but that they will become accessible. The same tools that cure disease can design pathogens. The same automation that feeds billions can starve them. The question is no longer whether artificial intelligence will surpass human intelligence — most now accept that it will. The question is whether institutions can harden civilization's dependencies before a single bad actor tests those defenses.