
AI as normal technology

In a landscape saturated with apocalyptic warnings and utopian hype, Arvind Narayanan & Sayash Kapoor offer a startlingly mundane thesis: artificial intelligence is not a new species, but merely normal technology. This perspective is not a dismissal of AI's power, but a strategic reframing that suggests the most dangerous narratives are the ones that treat AI as an autonomous agent beyond human control. For busy professionals trying to navigate the next decade, this distinction is the difference between panic and preparation.

The Speed Limit of Reality

The authors dismantle the prevailing fear of an imminent "singularity" by introducing a critical triad: invention, innovation, and adoption. They argue that while the invention of new AI methods moves fast, the translation of those methods into real-world applications and their subsequent diffusion through society is glacial. "We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs," they write. This is a bold claim in an era demanding emergency legislation, yet it rests on a solid historical foundation.


Narayanan & Kapoor point out that in safety-critical fields like healthcare and criminal justice, the use of complex, modern machine learning is surprisingly rare. Instead, decades-old statistical techniques dominate because they are interpretable and auditable. They illustrate this with the failure of Epic's sepsis prediction tool, which missed two-thirds of cases because it relied on a feature, whether a doctor had already prescribed antibiotics, that is an effect of clinicians already suspecting sepsis and is therefore unavailable at the moment a prediction would actually be useful. "Epic's sepsis prediction tool failed because of errors that are hard to catch when you have complex models with unconstrained feature sets," they note. This evidence suggests that the "black box" nature of advanced AI creates a natural speed limit on its deployment in high-stakes environments.
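The leakage mechanism behind this kind of failure can be made concrete with a toy simulation. This is a deliberately simplified sketch with invented numbers and feature names, not Epic's actual model: a feature that is an effect of the outcome (antibiotics prescribed because sepsis is already suspected) looks highly predictive in retrospective data, but is absent at the moment a real prediction is needed.

```python
import random

random.seed(0)

# Toy illustration of feature leakage (all numbers and names invented).
# "antibiotics" is an *effect* of clinicians suspecting sepsis, so it
# exists in retrospective records but not at prediction time.

def make_patient():
    sepsis = random.random() < 0.1
    # Prescribed *because* sepsis is suspected, not independently of it.
    antibiotics = sepsis and random.random() < 0.8
    # A genuinely predictive, but weaker, pre-outcome signal.
    abnormal_vitals = random.random() < (0.6 if sepsis else 0.2)
    return sepsis, antibiotics, abnormal_vitals

patients = [make_patient() for _ in range(100_000)]

def recall(predict):
    """Fraction of true sepsis cases the predictor flags."""
    hits = sum(1 for s, a, v in patients if s and predict(a, v))
    total = sum(1 for s, _, _ in patients if s)
    return hits / total

# Retrospective evaluation: the leaky feature inflates performance.
train_recall = recall(lambda a, v: a or v)

# Deployment: antibiotics not yet prescribed, so the feature is
# effectively always False and the model falls back on vitals alone.
deploy_recall = recall(lambda a, v: v)

print(f"apparent recall with leaky feature: {train_recall:.2f}")
print(f"recall at deployment:               {deploy_recall:.2f}")
```

The gap between the two numbers is the point: nothing about the model changed between evaluation and deployment, only the availability of a feature that was never legitimately predictive in the first place.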

The impact of AI is materialized not when methods and capabilities improve, but when those improvements are translated into applications and are diffused through productive sectors of the economy.

This framing is effective because it shifts the blame from the algorithm itself to the messy, slow process of organizational adaptation. Critics might argue that this view underestimates the speed at which generative AI is already reshaping creative and administrative workflows, but the authors counter that usage intensity matters more than headline adoption rates. They cite data showing that while 40% of U.S. adults have tried generative AI, it accounts for less than 1% of total work hours. The bottleneck, they insist, is not the technology's capability, but the human and institutional inertia required to integrate it safely.

The Ladder of Generality and the Myth of Autonomy

Moving beyond adoption speeds, the authors tackle the conceptual error of treating AI as a form of human-like intelligence. They propose a "ladder of generality" where each rung reduces the effort needed to program a computer for a specific task, but they warn against assuming this ladder leads to an all-powerful, autonomous entity. "To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are 'normal' in our conception," they explain. This comparison to electricity is powerful; just as the full economic benefits of electrification took forty years to materialize because factories had to be redesigned from the ground up, AI will likely require similar structural overhauls.

The authors reject the notion of "fast takeoff" scenarios, arguing that the external world imposes hard constraints on innovation. In domains like self-driving cars, progress is not a smooth exponential curve but a feedback loop of real-world testing and data collection that is inherently slow and expensive. "We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future," they assert. This is a crucial pivot: by removing the anthropomorphic lens, we stop fearing a robot rebellion and start addressing the very real, very human risks of inequality, bias, and labor displacement.

Drastic interventions premised on the difficulty of controlling superintelligent AI will, in fact, make things much worse if AI turns out to be normal technology.

Here, the authors' policy prescription becomes clear. They advocate for resilience and uncertainty reduction rather than preemptive bans or existential risk mitigation strategies designed for a superintelligence that may never arrive. A counterargument worth considering is that this "normalization" might lead to complacency, allowing harmful applications to slip through regulatory cracks under the guise of incrementalism. However, the authors maintain that existing regulatory frameworks, like the EU AI Act, are already creating necessary friction that prevents reckless deployment.

Bottom Line

Arvind Narayanan & Sayash Kapoor provide a necessary corrective to the fever dream of AI exceptionalism, grounding the conversation in the slow, grinding reality of technological diffusion. Their strongest argument is that the greatest risks of AI will mirror the historical downsides of previous industrial revolutions—inequality and institutional failure—rather than sci-fi nightmares of machine dominance. The biggest vulnerability in their thesis is the potential for a sudden, unforeseen breakthrough that could shatter the assumption of slow diffusion, but until then, their call for pragmatic, human-centric governance offers the most reliable path forward.

The statement 'AI is normal technology' is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.

Sources

AI as normal technology

by Arvind Narayanan & Sayash Kapoor · AI Snake Oil · Read full article


The paper is also published in HTML and PDF formats on the Knight First Amendment Institute’s website.

