Most technologists see artificial intelligence as the ultimate accelerator for scientific discovery, promising to cure diseases and colonize Mars within a decade. Arvind Narayanan and Sayash Kapoor, however, argue that this rush to automate could actually strangle the very progress we hope to achieve, turning science into a factory of noise rather than a forge of truth.
The Production-Progress Paradox
The authors begin by dismantling a comforting assumption: that more papers and more funding automatically equal faster breakthroughs. They point to a stark disconnect in the data. "The rate of publication of scientific papers has been growing exponentially, increasing 500-fold between 1900 and 2015. But actual progress, by any available measure, has been constant or even slowing." This observation is not merely a statistical curiosity; it is the foundation of their entire warning. While the volume of research output has exploded, the rate of genuine, paradigm-shifting discoveries has flatlined.
Narayanan and Kapoor lean heavily on the emerging field of metascience to prove that we are hitting a wall. They cite findings that "disruptive scientific work represents an ever-smaller fraction of total scientific output," suggesting that the scientific community is increasingly obsessed with incremental tweaks rather than revolutionary ideas. The evidence is compelling because it spans multiple metrics, from the stagnation of new terminology in paper titles to the declining fraction of Nobel Prize-winning work published in the preceding two decades. "Despite the vast increases in funding, published papers, and authors, the most important breakthroughs today are about as impressive as those in the decades past." This is a damning indictment of the current system, suggesting that our inputs are no longer translating into meaningful outputs.
We are adding lanes to a highway when the slowdown is actually caused by a toll booth.
Critics might argue that measuring "progress" is inherently subjective and that some fields, like biology, simply require more time to yield results than others. However, the authors counter that even when adjusting for these variables, the decline in research productivity is evident across diverse sectors, from semiconductors to agriculture. The data suggests that the system itself is the bottleneck.
The Trap of Incentives
Why is this happening? The authors reject the fatalistic idea that we have simply run out of "low-hanging fruit." Instead, they argue that the structure of modern academia is actively hostile to breakthrough thinking. The core problem is a misalignment of incentives. Because "production is easy to measure, and progress is hard to measure," universities and funding bodies judge researchers based on the number of papers they publish rather than the quality of their insights. This creates a feedback loop where scientists are rewarded for playing it safe.
Narayanan and Kapoor illustrate this with a sobering example: "Physics Nobel winner Peter Higgs famously noted that he wouldn't even have been able to get a job in modern academia because he wouldn't be considered productive enough." In a system that demands constant output, the slow, risky, and often solitary work required for a true breakthrough becomes career suicide. The result is a homogenization of research, where scientists focus on "experimenting with known molecules that are already considered important" rather than exploring the unknown.
This is where the introduction of AI becomes particularly dangerous. If the current system already prioritizes quantity over quality, automating the production of papers will only accelerate the deluge. "AI could make individual researchers more creative but decrease the creativity of the collective because of a homogenizing effect." By lowering the barrier to entry for generating papers, AI could drown out the few truly novel ideas in an ocean of mediocre, algorithmically generated text. The authors warn that "the rapid flow of new papers can force scholarly attention to already well-cited papers and limit attention for less-established papers."
Science Is Not Ready for Software
The authors make a crucial distinction that is often missed in the hype cycle: science is not a market, and it does not react to technological shocks the way a business does. While markets might efficiently reallocate resources when a new technology arrives, the scientific enterprise is a complex system with emergent properties that can behave unpredictably. "So far, on balance, AI has been an unhealthy shock to science, stretching many of its processes to the breaking point."
The fear is that AI will prolong the reliance on flawed theories by making it easier to generate data that supports the status quo. "Any serious attempt to forecast the impact of AI on science must confront the production-progress paradox." If we simply use AI to speed up the current broken model, we will not get faster progress; we will get faster failure. The authors argue that we are currently "oblivious to what the actual bottlenecks to scientific progress are," focusing on accelerating production when the real issue is the quality of attention and the courage to take risks.
Human understanding remains essential, and no amount of automation can replace the need for deep, critical engagement with the canon.
Bottom Line
Narayanan and Kapoor deliver a vital corrective to the techno-optimist narrative, making a persuasive case that more data and faster tools do not guarantee wisdom. Their strongest argument lies in exposing how the current incentive structure rewards mediocrity, a flaw that AI threatens to magnify rather than fix. The biggest vulnerability in their case is the difficulty of proving exactly how much AI will worsen the problem before it is fully deployed, but the warning is clear: without reforming the underlying system of scientific evaluation, AI will not cure our stagnation; it will only make the noise louder.