Roundup #80: All AI, all the time

Noah Smith cuts through the hype to reveal a startling disconnect: while experts agree AI will soon write novels and code autonomously, economists remain stubbornly skeptical that this will translate into rapid economic growth. This roundup doesn't just list news; it exposes a fundamental fracture in how we understand the future of work, security, and privacy in an age of accelerating intelligence.

The Growth Paradox

Smith opens by highlighting a new study from the Forecasting Research Institute that surveys economists, AI experts, and superforecasters. The results are counterintuitive. "There is widespread disagreement over the impact that AI will—or won't—have on the U.S. economy," Smith notes, yet the groups converge on the timeline for capability. He observes that "all the groups have about the same forecasts for AI capabilities by 2030," describing a future where AI can handle complex coding and household robotics.

However, the economic predictions diverge sharply. Only the AI experts foresee a major growth acceleration, and even they cap it at a modest 4 or 5 percent. Smith asks the critical question: "Why do economists think that even near-godlike AI wouldn't translate into fast growth?" He summarizes their reasoning: bottlenecks in energy and chip supply, and the multi-decade lag between adoption and productivity gains seen with previous general-purpose technologies such as electrification.

This analysis is compelling because it moves beyond the "AI will solve everything" narrative to the messy reality of implementation. Smith suggests a deeper, perhaps subconscious, reason for this pessimism: "One possibility... is that people suspect that humanity is getting satisfied, at least in the developed countries, and that the amount of new valuable things that even a godlike AI could create for us is limited by our inability to desire more goods and services."

"Basically, none of these groups thinks that any amount of AI capabilities will enable economic take-off."

Critics might argue that this view underestimates the disruptive power of a general-purpose technology to create entirely new categories of demand, much like the internet did. Yet, Smith's invocation of the "productivity paradox"—where massive tech investments yield slow GDP gains for decades—grounds the argument in historical precedent, reminding us that innovation does not automatically equal immediate economic explosion.

The Biosecurity and Cyber Threats

The tone shifts from economic theory to existential risk as Smith tackles the potential for AI to weaponize biology. He admits his own fear: "I'm worried that some nihilistic, depressed teenager could tell a jailbroken version of Claude Code to make him a doomsday virus, and that the AI would actually go and do it for him." While he cites biosecurity expert Abhishaike Mahajan, who argues that creating a functional bioweapon is inherently difficult due to unknown variables, Smith remains unconvinced.

He counters the optimism by pointing out the scale of experimentation AI enables. "Instead of just making one doomsday virus you can make 100 candidates and release them all," Smith writes. "Doomsday itself is the field experiment, and you can run a lot of experiments at once." This reframing of the threat from a singular event to a statistical probability is a crucial distinction for policymakers.

The danger extends to digital infrastructure. Smith highlights research showing that "offensive cyber capability has been doubling every 9.8 months since 2019," a rate that is accelerating. He connects this to recent breakthroughs in quantum computing, noting that "the entire modern world runs on cybersecurity — if there's a general failure in the methods we now use to keep information secure, all of society is in deep trouble."
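A 9.8-month doubling time compounds faster than intuition suggests. As a rough back-of-the-envelope sketch (the doubling figure is the one Smith cites; the five-year horizon and the "capability multiplier" framing are illustrative assumptions, not from the source):

```python
# Illustrative only: compound growth implied by a metric that doubles
# every 9.8 months. The starting level and what "capability" means
# are assumptions for this sketch.

DOUBLING_MONTHS = 9.8

def capability_multiplier(months: float) -> float:
    """Growth factor over `months`, given a 9.8-month doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

# Growth over five years relative to today:
print(round(capability_multiplier(60), 1))  # 60 months ~ 5 years
```

At that rate, offensive capability would grow roughly seventyfold in five years, which is the sense in which the defensive window is "closing faster than we realize."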

"A truly well-engineered doomsday virus will kill us long before we can distribute the cure or give everyone a UV zapper."

The argument here is stark: our defensive measures are reactive, while our offensive capabilities are becoming proactive and automated. Smith's comparison of the quantum computing announcements to the Frisch and Peierls calculation for the atomic bomb underscores the gravity of the situation, suggesting that the window to secure our digital and biological infrastructure is closing faster than we realize.

The End of Anonymity and the Rent-Seeking Economy

Smith then turns to the social fabric of the internet, predicting the "end of pseudonymity." He cites a new paper showing that large language models can "re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone." The implication is that the "practical obscurity protecting pseudonymous users online no longer holds."

This loss of anonymity could have profound societal effects. Smith warns that "less pseudonymity might also close off an important social and psychological safety valve," particularly for cultures where public expression is heavily constrained. The internet may become safer from toxicity but poorer in honest, dissenting discourse.

Finally, Smith explores a less discussed economic risk: AI-driven rent-seeking. He references Charles Stross's Accelerando, where AI quants consume the solar system's resources for financial gaming. "A lot of society's resources — compute, electricity, and so on — will be going to waste," he argues, if AI agents are deployed en masse to beat each other to the punch in zero-sum financial games rather than creating value.

"If all records of personal wealth were erased in a cyberattack, what could banks or the government even do?"

This section highlights a potential tragedy of the commons in the digital age. While the focus is often on AI creating value, Smith forces the reader to consider the scenario where AI is used to extract value or destroy it. The reference to Hirshleifer's 1971 model of wasteful competition adds academic weight to the fear that we might be building a hyper-efficient engine for economic waste.

Bottom Line

Smith's strongest contribution is his refusal to accept the binary of "AI utopia" or "AI apocalypse," instead mapping a complex landscape of bottlenecks, existential risks, and social erosion. The piece's greatest vulnerability is its reliance on expert consensus for growth forecasts, which may prove wrong if AI triggers a paradigm shift that current economic models cannot yet capture. Readers should watch for how the administration and private sector respond to the accelerating pace of cyber and bio-threats, as the window for proactive defense is rapidly shrinking.

Sources

Roundup #80: All AI, all the time

by Noah Smith · Noahpinion

I promise I’ll write something soon about the flaming, crashing disaster that is the Trump administration — and about other topics of interest. But before I do that, here’s a roundup full of short takes and stories about AI.

First, though, an episode of Econ 102! Officially the podcast is over, but we still occasionally do a reprise episode. This one, fittingly, is about AI biosecurity:

Anyway, here are six other interesting AI-related items:

1. Forecasting the effect of AI on growth.

No one really knows what effect AI is going to have on economic growth, but maybe each “expert” knows a tiny, tiny bit. And maybe, if you combine all of those weak signals, you can get some actual information about the economic effects of AI.
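The premise here is the classic wisdom-of-crowds effect: individually noisy forecasts can average out to a usable signal. A minimal simulation, with entirely made-up numbers, shows why pooling many weak estimates helps:

```python
import random

# Toy illustration: each forecaster's estimate is mostly noise plus a
# tiny signal, but averaging many estimates recovers the signal.
# All numbers here are invented for the sketch.

random.seed(0)
TRUE_GROWTH = 0.05   # hypothetical "true" answer
NOISE = 0.50         # each individual forecaster is very noisy

forecasts = [TRUE_GROWTH + random.gauss(0, NOISE) for _ in range(10_000)]
pooled = sum(forecasts) / len(forecasts)

print(f"typical individual error: ~{NOISE:.2f}")
print(f"pooled estimate: {pooled:.3f} (true value {TRUE_GROWTH:.2f})")
```

With independent errors, the pooled estimate's error shrinks roughly with the square root of the number of forecasters, which is the statistical rationale for surveying several distinct groups at once.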

That’s the idea behind a new study by the Forecasting Research Institute. They survey a whole bunch of different people about what they think AI’s capabilities will be in the future, and what that implies for economic growth. Specifically, the groups they survey are:

Economists

AI experts

Superforecasters

The general public

The results are kind of surprising, actually:

For one thing, all the groups have about the same forecasts for AI capabilities by 2030:

This looks like a forecast of modest progress, but it’s not. The “moderate” scenario here would have AI able to write high-quality novels, handle coding tasks that would take humans five days, create semi-autonomous labs, and use robots to perform basic household tasks. So basically, every group of forecasters in this survey thinks stunning AI progress is likely over the next few years.

And yet of all the groups, only the AI experts predict a major growth acceleration in any of these scenarios — and even then, it’s only an acceleration to 4 or 5 percent, not to the 10 or 20 percent scenarios that some people have thrown around:
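The gap between those scenarios is larger than it sounds once growth compounds. A quick arithmetic check (the rates come from the paragraph above; the ten-year horizon is an arbitrary choice for illustration):

```python
# Compound growth over a decade at the rates discussed above.

def compound(rate: float, years: int) -> float:
    """Total growth factor after `years` of constant annual growth `rate`."""
    return (1 + rate) ** years

for rate in (0.04, 0.10, 0.20):
    print(f"{rate:.0%} growth for 10 years -> economy x{compound(rate, 10):.2f}")
```

A 4 percent economy is about 1.5x its starting size after a decade, while a 20 percent economy is over 6x, so the experts' forecast is far closer to business-as-usual than to take-off.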

Why do economists think that even near-godlike AI wouldn’t translate into fast growth? The Forecasting Research Institute lists some of their reasons:

Some economists argued that AI productivity gains would not be evenly distributed across all sectors, particularly where human labor is a bottleneck. Others pointed out that with other general-purpose technologies (electrification, automobiles, personal computers), there were multi-decade lags between widespread implementation and productivity improvements. Part of this delay is attributed to a shift in capital away from labor and toward compute, data centers, APIs, and so on, ...