The mathematician Terence Tao has a radical idea about artificial intelligence: the technology has made idea generation almost free, but that might actually be creating a new problem rather than solving one.
In a wide-ranging interview, Tao argues that AI has dramatically lowered the cost of generating hypotheses — essentially the "eureka" moment where scientists propose new theories. But verification and validation have become the new bottleneck. The irony is striking: we can now produce thousands of theories at virtually no cost, yet we lack the infrastructure to separate signal from noise at scale.
This shift mirrors a historical turning point in science that Tao explores through the story of Johannes Kepler.
The Kepler Parallel
In the early 17th century, astronomer Johannes Kepler cycled through dozens of proposed theories about how planets move. He tried relationships based on geometric shapes — Platonic solids nested between planetary spheres — and various "musical" harmonies of the cosmos. Most failed.
What saved his work was data.
Danish astronomer Tycho Brahe had spent decades collecting precise observations of planetary positions, creating a dataset roughly an order of magnitude more accurate than anything before it. When Kepler finally accessed this data after a contentious struggle with Brahe's heirs, he could test his theories against real evidence.
"He worked out the actual orbits," Tao explains. "And that was an incredibly clever piece of data analysis."
The result was Kepler's three laws of planetary motion: planets move in elliptical orbits with the sun at one focus; a line from the sun to a planet sweeps out equal areas in equal times; and the square of a planet's orbital period is proportional to the cube of its distance from the sun. But Kepler had no explanation for why these laws held. It took Isaac Newton, roughly a century later, to provide the theoretical framework: universal gravitation governed by an inverse-square law.
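Kepler's third law is simple enough to check directly. A minimal sketch, using standard approximate textbook values for the planets (the specific numbers are not from the interview): if the law holds, the ratio T²/a³ should come out nearly identical for every planet.

```python
# Check Kepler's third law: T^2 is proportional to a^3, so the ratio
# T^2 / a^3 should be (nearly) constant across the planets.
# Periods T in Earth years, semi-major axes a in astronomical units
# (approximate textbook values).
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (T, a) in planets.items():
    ratio = T**2 / a**3
    print(f"{name:8s} T^2/a^3 = {ratio:.3f}")  # each ratio is close to 1.0
```

In these units the constant of proportionality is 1, so every ratio lands within a fraction of a percent of 1.0 — the kind of tight agreement Brahe's precision made visible.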
The lesson? Kepler's theories were only as valuable as the data that verified them.
The Data Revolution Reversed
Tao draws an analogy between Kepler's empirical approach and modern AI capabilities.
"Traditionally when we talk about the history of science, idea generation has always been the prestige part," Tao said. "But nowadays I'm not sure hypothesis generation is the bottleneck anymore."
The traditional scientific method involved identifying a problem, collecting data, forming hypotheses, and testing them against observations. This process was slow, expensive, and human-dependent.
Now, AI can generate thousands of potential theories for any given problem almost instantaneously. The cost of idea generation has collapsed to near-zero — similar to how the internet dramatically lowered the cost of communication.
But this creates a new challenge: we now need systems capable of verifying which generated hypotheses are actually correct.
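The asymmetry Tao describes can be made concrete with a toy sketch (my illustration, not anything from the interview): generating candidate laws is trivially cheap, while verification against data is the filter that does all the real work. Here thousands of candidate exponents k for a hypothetical law T² ∝ a^k are "generated", and only those consistent with planetary observations survive.

```python
# Toy illustration of cheap generation vs. verification as the filter.
# Candidate hypotheses: power laws T^2 proportional to a^k.
# Data: (period in years, semi-major axis in AU), approximate values.
data = [(0.241, 0.387), (1.881, 1.524), (11.86, 5.203)]

def verifies(k, tol=0.05):
    # A candidate exponent passes if T^2 / a^k is constant across
    # the planets to within the tolerance.
    ratios = [T**2 / a**k for T, a in data]
    return max(ratios) / min(ratios) - 1 < tol

# Generation: thousands of candidate exponents, produced for free.
candidates = [i / 1000 for i in range(6001)]  # k = 0.000 .. 6.000
# Verification: the expensive, discriminating step.
survivors = [k for k in candidates if verifies(k)]

print(f"{len(survivors)} of {len(candidates)} candidates survive; "
      f"all cluster near k = 3, Kepler's third law")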
"We have to change our structures of science to sort this out," Tao argues. "Traditionally we built peer review and publication systems to filter out low-value ideas. But now we're generating a thousand theories per day — and human reviewers are already overwhelmed."
Tao points out that many journals report being flooded with AI-generated submissions, making traditional filtering insufficient.
What This Means for Scientific Progress
The historical parallel is instructive: Kepler succeeded not because of brilliant idea generation alone, but because he had access to Tycho Brahe's meticulous data. The verification was essential.
"We should also celebrate Brahe," Tao notes. "His data was ten times more precise than previous observations. That extra decimal point of accuracy was actually essential for Kepler's results."
Similarly, modern AI systems can generate hypotheses at unprecedented scale, but without robust verification mechanisms, most of these outputs become what Tao calls "slop" — noise that fails to advance knowledge.
The bottleneck isn't generating ideas anymore. It's separating the few that actually matter from the vast quantity of dead ends.
Counterarguments
Critics might argue that human intuition and creativity still cannot be fully replaced by data-driven approaches — that some breakthroughs require conceptual leaps impossible for AI systems. Others might note that verification infrastructure already exists in formal peer review, though it wasn't designed for the volume AI generates.
Tao acknowledges that with more sophisticated AI, we may eventually have millions of "researchers" hunting for empirical regularities across massive datasets. But without robust verification, these efforts risk producing noise rather than genuine scientific advancement.
Bottom Line
The strongest thread running through Tao's analysis is the historical lesson: Kepler didn't succeed through beautiful theories alone — he succeeded because he had Tycho Brahe's data to test them against. The same principle applies to AI: generation is now cheap, but verification remains the essential step that separates real knowledge from intellectual dead ends. The challenge isn't producing more ideas; it's building systems that can actually tell which ones are worth pursuing.