In a field often paralyzed by the debate between objective truth and subjective belief, Kenny Easwaran offers a surprisingly practical roadmap for navigating uncertainty. Rather than dismissing the subjectivity of human belief as a flaw, he reframes it as a rigorous mathematical tool capable of resolving ancient paradoxes in scientific reasoning. This is not merely a tutorial on probability; it is a defense of how we should update our confidence in the world when the evidence is messy, incomplete, or counterintuitive.
The Mechanics of Belief
Easwaran begins by dismantling the binary view of knowledge—the idea that we either believe something or we don't. Instead, he proposes a spectrum of confidence measured by "degrees of belief." He writes, "Unlike traditional views in epistemology, we don't just believe or disbelieve something, but instead have something that comes in degrees, which we can measure with the mathematics of probability." This shift is crucial because it allows for a more nuanced interaction with new data. As Easwaran explains, "When you gather new evidence, you don't so much change what you believe as you update your probabilities." The strength of this approach lies in its ability to model real-world reasoning, where certainty is rarely achieved, but confidence can be incrementally adjusted.
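The update rule at work here is Bayes' theorem. Easwaran doesn't walk through the arithmetic, but a minimal sketch in Python (with invented numbers for the prior and likelihoods) shows what "updating your probabilities" looks like:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) from P(H), P(E|H), and P(E|not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers: 30% prior confidence in a hypothesis, and evidence
# three times more likely if the hypothesis is true than if it is false.
print(bayes_update(prior=0.30, p_e_given_h=0.60, p_e_given_not_h=0.20))  # 0.5625
```

The belief doesn't flip from "no" to "yes"; it slides from 0.30 to about 0.56, exactly the incremental adjustment Easwaran describes.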
However, the framework is not without its critics. Some argue that anchoring probability in personal confidence makes science too subjective. Easwaran anticipates this, noting that while the starting points (priors) may differ, the mathematical structure of updating drives rational agents toward similar conclusions given enough shared evidence. He clarifies that "Bayesian probability is not about chances in the world or frequencies of repeatable events, but it's rather a subjective and personal thing about how confident someone may or may not be in things they're reasoning about." This distinction is vital: it separates the state of the world from our knowledge of the world, a nuance often lost in public discourse.
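That convergence claim can be checked directly. In the sketch below (the coin-flip scenario and every number are invented here, not Easwaran's), two agents start with priors of 0.90 and 0.10 on the hypothesis that a coin is heads-biased, then condition on the same run of flips:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayesian conditioning: returns P(H|E)."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# H: "the coin lands heads 70% of the time"; not-H: "the coin is fair".
# A hypothetical shared data set: 100 flips, 70 of them heads.
flips = ["H"] * 70 + ["T"] * 30

optimist, skeptic = 0.90, 0.10  # sharply different starting priors
for flip in flips:
    p_biased = 0.70 if flip == "H" else 0.30  # P(flip | biased coin)
    p_fair = 0.50                             # P(flip | fair coin)
    optimist = update(optimist, p_biased, p_fair)
    skeptic = update(skeptic, p_biased, p_fair)

print(f"optimist: {optimist:.5f}, skeptic: {skeptic:.5f}")
# ~0.99997 vs ~0.99760: both near certainty despite starting 0.80 apart.
```

The priors color the early steps, but the shared evidence dominates in the long run.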
The Ambiguity of Confirmation
The commentary then takes a sharp turn into the philosophy of science, tackling how evidence actually supports a hypothesis. Easwaran highlights a historical confusion in the field, pointing out that early thinkers like Rudolf Carnap failed to distinguish between two very different concepts of confirmation. He writes, "The ambiguity is between an absolute notion of confirmation as firmness, the posterior of the hypothesis after you've learned, and an incremental notion of increase in firmness, something like the change from the prior to the posterior." This distinction is the piece's intellectual anchor. To illustrate, Easwaran uses a pregnancy test analogy: a negative result might leave someone with high "firmness" (confidence) that they are not pregnant, but if they were already 99% sure, the test provided almost no "incremental" support.
"Are you paying attention to how much difference a piece of evidence makes or are you paying attention to how confident you are after receiving the evidence?"
This question exposes a common cognitive trap. We often mistake high confidence for strong evidence. Easwaran argues that true scientific progress is measured by the change in belief, not the final state. He notes that "most Bayesian measures of confirmation have aimed at something more like the latter, this incremental notion, this increase of [confidence]." The editorial value here is significant; it provides a metric for evaluating claims in everything from medical diagnostics to policy analysis. If a new study doesn't shift our prior beliefs significantly, its practical utility may be overstated, regardless of how confident the researchers sound.
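The firmness/increase distinction is easy to make concrete. In the sketch below (the test accuracies are invented, and "increase" is the simple difference measure P(H|E) - P(H), one common incremental measure among several), the same negative test is run against a 99% prior and a 50% prior:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem; H = "not pregnant", E = a negative test."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Hypothetical accuracy: the test reads negative 95% of the time when
# H is true, and only 5% of the time when H is false.
p_neg_h, p_neg_not_h = 0.95, 0.05

for prior in (0.99, 0.50):
    post = posterior(prior, p_neg_h, p_neg_not_h)
    print(f"prior={prior:.2f}  firmness={post:.4f}  increase={post - prior:+.4f}")
# prior=0.99  firmness=0.9995  increase=+0.0095  (high confidence, weak evidence)
# prior=0.50  firmness=0.9500  increase=+0.4500  (same test, strong evidence)
```

Firmness is high in both cases; the evidence only did real work in the second.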
The Raven Paradox and the Limits of Logic
The piece culminates in a discussion of the famous "Paradox of the Ravens," a logical puzzle posed by Carl Hempel in the 1940s that has stumped philosophers ever since. The paradox suggests that observing a green apple (a non-black non-raven) should logically confirm the hypothesis that "all ravens are black." Easwaran explains that while this seems absurd, traditional logical theories of confirmation, such as Hempel's, predict it. He writes, "According to his theory, we would also discover that the observation of a non-black non-raven, like a green apple or a white shoe or a red herring, would also confirm the hypothesis." The Bayesian approach, however, offers a resolution by quantifying the degree of confirmation. While a green apple technically confirms the hypothesis, the amount of support is infinitesimally small, effectively rendering the paradox harmless in practice.
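A toy model makes the resolution vivid (the population counts and the 0.5 prior are invented for illustration). Both observations raise the probability of "all ravens are black," but by amounts separated by four orders of magnitude, because the world contains vastly more non-black things than ravens:

```python
def boost(prior, likelihood_ratio):
    """Increase in P(H) from evidence with likelihood ratio P(E|H)/P(E|not-H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds) - prior

# Toy world: 100 ravens, 1,000,000 non-black objects; under not-H,
# assume exactly one raven fails to be black.
ravens, non_black = 100, 1_000_000
prior = 0.5  # illustrative prior on "all ravens are black"

# Sample a random raven and find it black: P(E|H) = 1, P(E|not-H) = 99/100.
lr_raven = ravens / (ravens - 1)
# Sample a random non-black object and find it is not a raven:
# P(E|H) = 1, P(E|not-H) = 1,000,000 / 1,000,001.
lr_apple = (non_black + 1) / non_black

print(f"black raven boost: {boost(prior, lr_raven):.8f}")  # ~0.00251256
print(f"green apple boost: {boost(prior, lr_apple):.8f}")  # ~0.00000025
```

The green apple does confirm the hypothesis, but roughly ten thousand times more weakly than the black raven, which is precisely the Bayesian dissolution of the paradox.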
Critics might note that this solution relies heavily on the choice of prior probabilities, which can still be subjective. If one's priors are skewed, the mathematical machinery might still yield counterintuitive results. Yet, Easwaran's analysis suggests that the paradox arises more from a misunderstanding of what "confirmation" means in a quantitative sense than from a flaw in logic itself. He concludes that "on their account neither of these confirmation relations necessarily holds" in any meaningful way that would disrupt scientific inquiry.
Bottom Line
Easwaran's analysis succeeds in demystifying the mathematics of belief, transforming abstract probability theory into a practical lens for evaluating evidence. The strongest part of the argument is the clear distinction between "firmness" and "increase in firmness," a concept that should be standard in any discussion of scientific data. The biggest vulnerability remains the reliance on subjective priors, which, while mathematically manageable, can still lead to divergent conclusions among experts with different starting assumptions. For the busy professional, the takeaway is clear: don't just ask if you believe something; ask how much your belief should change given the new information.