In an era of breathless hype, this piece from Works in Progress offers a necessary reality check: the path to curing disease is not a straight line paved by algorithms, but a jagged terrain where biology, data, and economics collide. While Silicon Valley dreams of a decade-long sprint to eradicate all illness, the editors argue that we are still stumbling through the dark, unable to predict much of anything in biology despite our computational prowess. This is not a dismissal of artificial intelligence, but a sophisticated map of where it actually works and where it hits the hard wall of human complexity.
The Illusion of Solved Problems
The piece opens by confronting the most seductive narrative of our time: that AI is on the verge of solving medicine. It cites Demis Hassabis, CEO of Google DeepMind, who recently claimed, "I think one day maybe we can cure all disease with the help of AI. I think that's within reach, maybe within the next decade or so." Works in Progress does not simply report this optimism; it immediately juxtaposes it with the sobering reality of biological history. The editors note that even in fields where we thought we had won, the victory was often an illusion.
The argument draws a sharp parallel to the early 2000s, when computational protein design seemed to be on the brink of total success. A key insight from the text highlights the disconnect between benchmark wins and real-world application: "I remember in the early 2000s, David Baker was revolutionizing computational protein design with his Rosetta software suite... Yet here we are 20 years later. All of these topics are still active areas of research, and if you have any particular system of interest, you may find that none of the available methods perform that well." This historical context is crucial. It suggests that the current AI boom may be repeating a pattern where we solve the easy, data-rich problems first, only to find the remaining challenges significantly harder.
We haven't really modeled the whole complexity of cells, organs, and organisms as a whole.
The editors explain that the success of tools like AlphaFold has largely been confined to soluble proteins—those floating freely in water. But the human body is a messy, crowded place. As the piece argues, "In nature, proteins might be wiggling around; they might be attached to a membrane, or they might be attached to some drug." This distinction is vital. It moves the conversation from abstract data science to the physical constraints of the human body, where the rules of physics and chemistry are far less forgiving than in a simulation. Critics might note that this view underestimates the exponential scaling of AI models, but the historical precedent of "solved" fields remaining unsolved is a powerful counterweight to blind faith in algorithmic acceleration.
Two Worldviews on Progress
The commentary then pivots to a fascinating clash of cultures: the techno-optimism of Silicon Valley versus the cautious empiricism of practicing scientists. The piece outlines the first worldview, common in places like San Francisco, which posits that if we can build systems that reason better than humans, we can simply outsource the scientific method to them. Jacob Trefethen, a contributor to the piece, describes this vision: "If we're about to invent systems that can reason well, debate with us, and debate with each other, instead of having hundreds of thousands of working scientists alive at any time working on discovering the nature of the universe, we can have hundreds of millions maybe, but they are AI agents."
This perspective assumes that the bottleneck is purely intellectual—that we just need more smart agents to figure out the code of life. However, Works in Progress contrasts this with the second, more grounded worldview: that human biology is fundamentally difficult to measure and even harder to manipulate without causing harm. The editors introduce the concept of Eroom's law (Moore's law spelled backwards) to illustrate that drug development is getting harder and more expensive over time, not easier. "The bottlenecks aren't mostly in discovery where AI might help; they're in off-target effects and toxicity from these drugs," the piece argues. This reframing is essential. It shifts the focus from the excitement of discovery to the grim reality of clinical trials, manufacturing, and health system access.
We have the reverse in drug development. We have what people sometimes call Eroom's law: the reverse. E-R-O-O-M.
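The quantitative claim behind Eroom's law is usually stated as the number of new drugs approved per billion (inflation-adjusted) US dollars of R&D spending halving roughly every nine years since 1950. A minimal sketch of that decline, using rounded illustrative numbers rather than exact historical data (the starting value and halving period here are assumptions for demonstration):

```python
# Illustrative sketch of Eroom's law: drug approvals per $1B of R&D
# halving roughly every nine years. The constants below are rounded
# assumptions for illustration, not exact historical figures.

HALVING_PERIOD_YEARS = 9          # approximate halving period
DRUGS_PER_BILLION_1950 = 40.0     # assumed order-of-magnitude starting point

def drugs_per_billion(year: int) -> float:
    """Approximate new drug approvals per $1B R&D for a given year."""
    elapsed = year - 1950
    return DRUGS_PER_BILLION_1950 * 0.5 ** (elapsed / HALVING_PERIOD_YEARS)

for year in (1950, 1970, 1990, 2010):
    print(f"{year}: ~{drugs_per_billion(year):.1f} drugs per $1B R&D")
```

Run over six decades, the same exponential that makes Moore's law exhilarating makes Eroom's law sobering: output per dollar falls by roughly two orders of magnitude.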
The editors suggest that these two groups—the AI optimists and the biological realists—rarely communicate effectively. This disconnect is a major vulnerability in the current discourse. If we assume AI will solve the discovery phase, we might ignore the fact that the real cost of medicine lies in the years of testing and the regulatory hurdles that follow. The piece implies that without addressing the economic and systemic blocks, even a perfect AI drug discovery engine might not lead to cheaper or faster cures.
The Empirical Roots of Innovation
To illustrate that understanding the mechanism of a disease is not always a prerequisite for curing it, the piece dives into historical case studies that defy the modern "rational drug design" model. It recounts how Edward Jenner developed the smallpox vaccine in 1796 not by understanding immunology, but by observing that dairy maids who had cowpox were protected from smallpox. "He collected data on all of these different individuals who had contracted cowpox at some point and after that had been protected from a smallpox outbreak," the editors note. This was early epidemiological analysis, driven by observation rather than theory.
Similarly, the discovery of artemisinin, a critical malaria drug, is highlighted as a triumph of high-throughput screening over theoretical understanding. In the 1960s, Chinese scientist Tu Youyou sifted through more than 2,000 remedies recorded in ancient medical texts to find a potential treatment, eventually isolating the compound from sweet wormwood. "She then narrowed down all of those thousands to a few hundred, and then tested some dozens of them in the lab and tested them in animals and people," the piece reports. This historical parallel is striking. It suggests that the future of drug discovery might look less like a supercomputer simulating molecular bonds and more like a massive, AI-enhanced version of Tu Youyou's search through ancient texts.
It's so cool to do both — to use the wisdom of the ancients and the tools of modern science.
The editors use these examples to argue that AI's greatest role might be in accelerating this empirical process—screening millions of compounds or analyzing vast datasets of case reports—rather than replacing the need for experimentation. The modern refinement of these ancient discoveries, such as tweaking artemisinin to be more bioavailable, shows that the "black box" approach of finding a working compound first and understanding it later is still a valid, and perhaps necessary, strategy. This challenges the notion that we must fully understand the biology of a disease before we can treat it.
Bottom Line
The strongest part of this argument is its refusal to treat AI as a magic wand, grounding the discussion in the messy, expensive, and historically slow reality of drug development. By contrasting the "Eroom's law" of biology with the exponential growth of computing, the editors provide a necessary check on the hype cycle. However, the piece's biggest vulnerability is its reliance on historical analogies that may not fully account for the unique capabilities of generative AI in predicting complex biological interactions that have never been observed. The reader should watch for how the industry balances the promise of rapid discovery with the unyielding costs of clinical validation and manufacturing.