This Paradox Splits Smart People 50/50
Derek Muller, Veritasium

A strange problem has been dividing people for decades. It appears at philosophy conferences, in economics departments, and in casual conversations between friends who suddenly become enemies. The setup sounds simple: you walk into a room and find two boxes on a table. One contains $1,000; you're certain about that. The other is a mystery box. Before you entered the room, a supercomputer that has never been wrong predicted whether you'd take just the mystery box or both boxes. If it predicted you'd choose only the mystery box, it placed $1 million inside. If it predicted you'd grab both boxes, it left the mystery box empty.
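To keep the four possible outcomes straight, here they are as a small payoff table, written as a Python sketch (the dollar amounts come from the setup above; the encoding is just one convenient choice):

```python
# Payoffs in Newcomb's problem, indexed by
# (what the computer predicted, what you actually do).
PAYOFFS = {
    ("one-box", "one-box"): 1_000_000,  # mystery box was filled; you take only it
    ("one-box", "two-box"): 1_001_000,  # mystery box filled AND you grab the $1,000
    ("two-box", "one-box"): 0,          # mystery box empty; you skip the $1,000
    ("two-box", "two-box"): 1_000,      # mystery box empty; you take the $1,000
}

for (prediction, choice), payout in PAYOFFS.items():
    print(f"predicted {prediction}, you choose {choice}: ${payout:,}")
```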
This is Newcomb's Paradox, named for the physicist William Newcomb, who devised it, and it's one of the few problems where professional philosophers split almost exactly down the middle.
Your choice reveals something fundamental about how you think about decision-making.
Two camps exist. Those who take only the mystery box argue that since the computer has never been wrong, there's overwhelming evidence it predicted correctly this time too. Taking just the mystery box means almost certainly walking away with $1 million. Those who take both boxes argue that whatever choice you make now doesn't change what happened before you entered the room; the money is already either in there or not. Taking both boxes guarantees at least $1,000 and potentially much more.
The first group uses what's called evidential decision theory. They reason that because the computer has been accurate across thousands of cases, their choice is powerful evidence about whether a million dollars sits waiting in the mystery box, even though it can't cause the money to be there. The expected utility calculation favors one-boxing, meaning taking only the mystery box, whenever the computer's accuracy exceeds about 50.05%: the guaranteed $1,000 barely shifts the break-even point. Given the track record described, that's clearly the case.
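A minimal sketch of that calculation, treating the computer's accuracy p as a free parameter (the specific p values below are illustrative assumptions, not part of the original story):

```python
# Evidential expected utility: your choice is treated as evidence about
# what the computer predicted, so each choice is weighted by the
# probability p that the prediction matches it.

def eu_one_box(p: float) -> float:
    # With probability p the computer foresaw one-boxing and put
    # $1,000,000 in the mystery box; otherwise the box is empty.
    return p * 1_000_000

def eu_two_box(p: float) -> float:
    # With probability p the computer foresaw two-boxing and left the
    # mystery box empty ($1,000 total); otherwise you get $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# Break-even: 1,000,000p = 1,001,000 - 1,000,000p  =>  p = 0.5005
for p in (0.50, 0.51, 0.99):
    better = "one-box" if eu_one_box(p) > eu_two_box(p) else "two-box"
    print(f"p = {p:.2f}: one-box ${eu_one_box(p):,.0f}, "
          f"two-box ${eu_two_box(p):,.0f} -> {better}")
```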
The second group uses causal decision theory. They reason that nothing you do now can retroactively change what was already predicted. The boxes were set up before you learned about the problem, and your current decision doesn't influence the past. Whatever the mystery box holds, taking both boxes gets you exactly $1,000 more than taking one, making two-boxing the dominant strategy regardless of what the computer predicted.
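The dominance argument can be checked by brute force: hold each possible state of the mystery box fixed and compare the two choices (again a sketch, reusing the payoff amounts from the setup):

```python
# Causal/dominance check: the boxes were filled before you chose, so
# compare your two options against each FIXED state of the mystery box.

for mystery_box in (0, 1_000_000):        # already empty, or already holds $1M
    one_box = mystery_box                  # take only the mystery box
    two_box = mystery_box + 1_000          # take both boxes
    print(f"mystery box holds ${mystery_box:,}: "
          f"one-box ${one_box:,}, two-box ${two_box:,} "
          f"(two-box ahead by ${two_box - one_box:,})")
# Two-boxing wins by exactly $1,000 in both states: that is dominance.
```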
Here's where it gets uncomfortable: if a perfect predictor exists, does that mean free will doesn't exist? If the computer's prediction was already fixed before you made any choice, then nothing you do in the moment changes anything. The future was already determined.
Some philosophers argue this reveals that free will is actually an illusion, though we live as if it's real anyway. Others push back, saying the paradox assumes a perfect predictor, which doesn't exist in practice. If such a being really existed with 100% accuracy, the debate would be settled. But that's an unrealistic assumption.
The strongest version of this argument comes from Allan Gibbard and William Harper's 1978 paper. They argue the rational choice is to take both boxes, yet they concede that one-boxers actually fare better financially. The game may simply be rigged against rationality: it rewards those who ignore the dominance argument and take only the mystery box.
Both approaches are mathematically sound on their own terms, which is what makes this so maddening.
The paradox doesn't have a clean answer because it exposes a genuine split in how people understand causation and evidence. Some see their choices as influencing future outcomes. Others see their choices as powerless against predictions already made. Neither group is wrong — they're just applying different decision theories to the same problem.
What Newcomb's Paradox reveals is that rationality isn't a single path. Two perfectly rational people can look at identical information and walk away with opposing answers. The problem isn't that one side is smarter than the other. It's that they fundamentally disagree about whether their decisions can change what's already been predicted.