
Fact checking Moravec's paradox

In an era saturated with apocalyptic predictions about artificial intelligence, Arvind Narayanan & Sayash Kapoor deliver a jarringly grounded reality check: the most famous rule of thumb in the field is likely a myth. Their central claim—that Moravec's paradox has never been empirically tested and serves more as a reflection of research bias than a law of nature—strips away the comforting certainty that experts use to predict which jobs will vanish and which will remain safe. This is not just a technical correction; it is a necessary intervention against the panic and complacency that drive policy today.

The Illusion of Predictive Power

The authors begin by dismantling the foundational belief that tasks difficult for humans are easy for machines, and vice versa. They argue that this "paradox" is not a discovered law of physics but a selection effect created by what researchers choose to study. "Moravec's paradox has never been fact checked," they write, noting that despite its ubiquity in TED talks and academic circles, the evidence is entirely absent. This is a bold assertion, yet it lands with force because it exposes a circular logic: the field only studies problems where the difficulty gap is visible, ignoring the thousands of tasks that are either trivial for both or impossible for both.


Narayanan and Kapoor explain that the apparent correlation disappears when you consider the full spectrum of human and machine capability. "When you're thinking about the space of all possible tasks, if you basically ignore two quadrants of your 2x2 matrix because they are not interesting, then of course it will seem like what you're left with shows a strong negative correlation between the two axes." This reframing is crucial. It suggests that our anxiety about AI replacing white-collar workers, or our false sense of security regarding blue-collar robotics, is based on a dataset that was curated to fit a narrative, not to reflect reality.
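The statistical mechanism behind this point is easy to demonstrate. In the hypothetical sketch below (not from the article), human difficulty and AI difficulty are drawn completely independently, so across all tasks there is no real relationship. But if, like the research community in the authors' telling, we keep only the "interesting" quadrants where one party finds a task hard and the other finds it easy, the surviving sample shows a strong negative correlation; the 0.5 difficulty threshold is an arbitrary assumption for illustration:

```python
import random

random.seed(0)

# Each task gets an independent (human difficulty, AI difficulty) pair,
# so across ALL tasks the two axes are uncorrelated by construction.
tasks = [(random.random(), random.random()) for _ in range(100_000)]

def pearson(pairs):
    """Pearson correlation between the two coordinates of a list of pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

# Keep only the two "interesting" quadrants: easy for humans but hard
# for AI, or hard for humans but easy for AI. The both-easy and
# both-hard quadrants are discarded, mirroring the selection effect.
studied = [(h, a) for h, a in tasks if (h < 0.5) != (a < 0.5)]

print(f"correlation over all tasks:     {pearson(tasks):+.2f}")
print(f"correlation over studied tasks: {pearson(studied):+.2f}")
```

Running this prints a near-zero correlation for the full task space and a strongly negative one for the curated subset, which is exactly the authors' argument: the "paradox" can emerge from which tasks get studied, not from any underlying trade-off.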

Critics might argue that the paradox holds up as a useful heuristic for current engineering constraints, even if it lacks rigorous statistical backing. However, the authors counter that relying on heuristics in a field as volatile as AI is dangerous. The history of the field is littered with failed predictions, from the belief that symbolic logic would solve all reasoning problems to the assumption that computer vision would remain a hard problem for decades. As they note, "AI researchers have a history of making stuff up about human brains without any relevant background in neuroscience or evolutionary biology."

"Moravec's paradox is really a statement about what the AI community finds it worthwhile to work on. It doesn't have any predictive power about which problems are going to be easy or hard for AI."

The Flawed Evolutionary Story

The piece then turns its gaze to the evolutionary justification often cited to support the paradox. Hans Moravec, the robotics pioneer who coined the term, argued that sensorimotor skills are hard for AI because they are the result of a billion years of evolution, while abstract reasoning is a "thin veneer" of recent human development. Narayanan and Kapoor find this explanation "highly dubious," pointing out that modern AI has moved far beyond the symbolic systems of the 1970s that Moravec praised.

The authors highlight a critical disconnect: symbolic reasoning works in closed domains like chess but fails catastrophically in open-ended real-world scenarios. "Reasoning in open-ended settings requires common-sense knowledge," they write, yet common sense is precisely the area where the paradox claims AI should struggle. This contradiction reveals the fragility of the theory. The belief that reasoning is a distinct, easily automatable skill has led to a dangerous overestimation of AI's current capabilities in fields like law and medicine. "Maybe the limits to reasoning are actually things like the lack of verifiers," they speculate, suggesting that the bottleneck isn't biological but structural—the inability of AI to get immediate, accurate feedback in complex human systems.

This section is particularly effective because it challenges the "superintelligence" narrative. If reasoning isn't a separate, easy skill to automate, then the fear that AI will soon outperform humans in running governments or conducting scientific research is premature. The authors warn that this misconception leads to "extreme policies, such as investment in AI science at the expense of human scientists, and warning policymakers to prepare for a white collar bloodbath." While some might argue that the pace of recent breakthroughs invalidates historical caution, the authors remind us that "breakthrough technologies take a long time to be successfully commercialized and deployed."

False Comfort and the Robotics Trap

On the flip side, the authors argue that Moravec's paradox provides "false comfort" regarding robotics. The belief that physical tasks are too hard for AI to master anytime soon has lulled society into a false sense of security about the future of manual labor. "Just as people say we have to worry about job losses and safety implications of breakthroughs in reasoning, they'll say we don't have to worry about job losses and safety risks of breakthroughs in robotics," they observe. This is a dangerous gamble. History shows that "hard" problems can be solved overnight if the right infrastructure appears, as was the case with computer vision in 2012 when deep learning and GPUs converged.

The authors draw a parallel to the electric power revolution, which took forty years to replace steam power in factories. "Even breakthrough technologies may especially take a long time to be deployed, because the supporting infrastructure just isn't there." This is a vital insight for busy leaders who are tempted to make binary decisions based on capability timelines. The delay isn't in the intelligence of the machine, but in the integration of that intelligence into the physical world. By focusing on the wrong metric, difficulty for humans rather than readiness for deployment, policymakers and industry leaders risk being blindsided.

"We don't really have principles that describe which kinds of tasks are easy for AI and which ones are hard. Well, we have one — Moravec's paradox. It refers to the observation that it's easy to train computers to do things that people find hard... But here's the thing — Moravec's paradox has never been fact checked."

Bottom Line

The strongest part of this argument is its exposure of the selection bias that underpins the entire AI industry's roadmap; it forces a reckoning with the fact that our predictions are often just reflections of our own research priorities. The biggest vulnerability, however, is that debunking the paradox doesn't offer a new, easy formula for prediction, leaving leaders with the uncomfortable truth that the future is genuinely uncertain. The takeaway is clear: stop trying to predict the impossible and start preparing for the inevitable diffusion of technology, because the time to adapt is always sooner than we think.

Sources

Fact checking Moravec's paradox

by Arvind Narayanan & Sayash Kapoor · AI Snake Oil

I have launched a YouTube channel in which I analyze AI developments from a normal technology perspective. This essay is based on my most recent video in which I did a deep dive into Moravec’s paradox, the endlessly repeated aphorism that tasks that are hard for humans are easy for AI and vice versa.

Here’s what I found:

Moravec’s paradox has never been empirically tested. (It’s often repeated as a fact by many AI researchers, including pioneers I know and respect, but that doesn’t mean I’ll take their claims at face value!)

It is really a statement about what the AI community finds it worthwhile to work on. It doesn’t have any predictive power about which problems are going to be easy or hard for AI.

It comes with an evolutionary explanation that I find highly dubious. (AI researchers have a history of making stuff up about human brains without any relevant background in neuroscience or evolutionary biology.)

Moravec’s-paradox-style thinking has led to both alarmism (about imminent superintelligent reasoning) and false comfort (in areas like robotics).

To adapt to AI advances, we don’t need to predict capability breakthroughs. Since diffusion of new capabilities takes a long time, that gives us plenty of time to react — time that we often squander, and then panic!

Watch the full argument on my YouTube channel, or read it below.

Every week brings new claims about AI advances. How do we know what’s coming next? Could AI predict crime? Write award-winning novels? Hack into critical infrastructure? Will we finally have robots in our home that will fold our clothes and load our dishwashers?

What will AI advances mean for your job? What will it mean for the social fabric? It’s hard to deal with all this uncertainty. If only we had a way to predict which new AI capabilities will be developed soon and which ones will remain hard for the foreseeable future.

Historically, AI researchers’ predictions about progress in AI abilities have been pretty bad. We don’t really have principles that describe which kinds of tasks are easy for AI and which ones are hard.

Well, we have one — Moravec’s paradox. It refers to the observation that it’s easy to train computers to do things that people find hard, like math and logic, and hard to train them to do things that we find easy, like seeing the world or walking.

It comes from the 1988 ...