
Let's read! Kenny Easwaran, 2011, "Bayesianism II: Applications and Criticisms"

In a field often paralyzed by the debate between objective truth and subjective belief, Kenny Easwaran offers a surprisingly practical roadmap for navigating uncertainty. Rather than dismissing the subjectivity of human belief as a flaw, he reframes it as a rigorous mathematical tool capable of resolving ancient paradoxes in scientific reasoning. This is not merely a tutorial on probability; it is a defense of how we should update our confidence in the world when the evidence is messy, incomplete, or counterintuitive.

The Mechanics of Belief

Easwaran begins by dismantling the binary view of knowledge—the idea that we either believe something or we don't. Instead, he proposes a spectrum of confidence measured by "degrees of belief." He writes, "Unlike traditional views in epistemology, we don't just believe or disbelieve something but instead have something that comes in degrees, which we can measure with the mathematics of probability." This shift is crucial because it allows for a more nuanced interaction with new data. As Easwaran explains, "When you gather new evidence, you don't so much change what you believe as you update your probabilities." The strength of this approach lies in its ability to model real-world reasoning, where certainty is rarely achieved, but confidence can be incrementally adjusted.
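Concretely, this updating rule is just Bayes' theorem. Here is a minimal sketch in Python; the function is generic, and the example numbers are illustrative assumptions rather than anything from Easwaran's paper:

```python
def update(prior, likelihood_if_h, likelihood_if_not_h):
    """Return the posterior degree of belief in hypothesis H after evidence E.

    prior:               P(H), confidence before seeing the evidence
    likelihood_if_h:     P(E | H)
    likelihood_if_not_h: P(E | not-H)
    """
    numerator = prior * likelihood_if_h
    total = numerator + (1 - prior) * likelihood_if_not_h
    return numerator / total

# Illustrative: a 50% prior, with evidence four times likelier if H is true.
posterior = update(0.5, 0.8, 0.2)
print(round(posterior, 3))  # 0.8
```

Nothing here ever jumps to 0 or 1; the evidence simply moves the degree of belief up or down, exactly as Easwaran describes.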


However, the framework is not without its critics. Some argue that anchoring probability in personal confidence makes science too subjective. Easwaran anticipates this, noting that while the starting points (priors) may differ, the mathematical structure of updating ensures that rational agents will converge on similar conclusions given enough shared evidence. He clarifies that "Bayesian probability is not about chances in the world or frequencies of repeatable events, but it's rather a subjective and personal thing about how confident someone may or may not be in things they're reasoning about." This distinction is vital: it separates the state of the world from our knowledge of the world, a nuance often lost in public discourse.
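The convergence point can be made vivid with a toy simulation, sketched below under assumptions of my own (a coin that is either fair or biased 70% toward heads, and two agents with nearly opposite priors); it is an illustration, not Easwaran's example:

```python
import random

random.seed(0)  # reproducible stream of shared evidence

def update(prior, heads):
    """Posterior P(coin is 0.7-heads biased) vs. the fair-coin alternative."""
    p_biased = 0.7 if heads else 0.3
    p_fair = 0.5
    numerator = prior * p_biased
    return numerator / (numerator + (1 - prior) * p_fair)

# Two agents who disagree sharply at the start.
skeptic, believer = 0.05, 0.95

for _ in range(1000):
    heads = random.random() < 0.7  # the same flip is observed by both agents
    skeptic = update(skeptic, heads)
    believer = update(believer, heads)

print(abs(skeptic - believer))  # the initial disagreement has washed out
```

After a thousand shared flips, both agents sit near certainty that the coin is biased; their wildly different priors no longer matter.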

The Ambiguity of Confirmation

The commentary takes a sharp turn into the philosophy of science, tackling how evidence actually supports a hypothesis. Easwaran highlights a historical confusion in the field, pointing out that early thinkers like Rudolf Carnap failed to distinguish between two very different concepts of confirmation. He writes, "The ambiguity is between an absolute notion of confirmation as firmness, the posterior of the hypothesis after you've learned, and an incremental notion of increase in firmness, something like the change from the prior to the posterior." This distinction is the piece's intellectual anchor. To illustrate, Easwaran uses a pregnancy test analogy: a negative result might leave one with high "firmness" (confidence) that they are not pregnant, but if they were already 99% sure, the test provided almost no "incremental" support.

"Are you paying attention to how much difference a piece of evidence makes or are you paying attention to how confident you are after receiving the evidence?"

This question exposes a common cognitive trap. We often mistake high confidence for strong evidence. Easwaran argues that true scientific progress is measured by the change in belief, not the final state. He notes that "most Bayesian measures of confirmation have aimed at something more like the latter, this incremental notion, this increase of [confidence]." The editorial value here is significant; it provides a metric for evaluating claims in everything from medical diagnostics to policy analysis. If a new study doesn't shift our prior beliefs significantly, its practical utility may be overstated, regardless of how confident the researchers sound.
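A quick calculation shows how firmness and increment come apart in the pregnancy-test analogy. The 99% prior comes from the text; the test's error rates below are assumptions made for illustration:

```python
# Prior degree of belief that one is not pregnant (from the analogy).
prior = 0.99

# Assumed test behavior: negative 95% of the time when not pregnant,
# and (falsely) negative 5% of the time when pregnant.
p_neg_if_not_pregnant = 0.95
p_neg_if_pregnant = 0.05

numerator = prior * p_neg_if_not_pregnant
firmness = numerator / (numerator + (1 - prior) * p_neg_if_pregnant)
increment = firmness - prior  # how much the negative result actually moved us

print(round(firmness, 4), round(increment, 4))  # 0.9995 0.0095
```

High firmness, almost no increment: the confidence was already there, and the evidence barely mattered.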

The Raven Paradox and the Limits of Logic

The piece culminates in a discussion of the famous "Paradox of the Ravens," a logical puzzle, originally posed by Carl Hempel, that has stumped philosophers for decades. The paradox suggests that observing a green apple (a non-black non-raven) should logically confirm the hypothesis that "all ravens are black." Easwaran explains that while this seems absurd, traditional logical theories of confirmation predict it. He writes, "According to his theory we would also discover that the observation of a non-black non-raven like a green apple or a white shoe or a red herring would also confirm the hypothesis." The Bayesian approach, however, offers a resolution by quantifying the degree of confirmation. While a green apple technically confirms the hypothesis, the amount of support is infinitesimally small, effectively rendering the paradox harmless in practice.
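The quantitative resolution can be sketched with a toy model; every count below (a million objects, 100 ravens, a rival hypothesis on which 10 ravens are non-black, and a 50/50 prior) is an assumption for illustration, not from Easwaran:

```python
# H: "all ravens are black." The rival NOT_H: 10 of the 100 ravens in a
# million-object world are non-black. Non-raven colors are fixed either way.
RAVENS = 100
NONBLACK_RAVENS_IF_NOT_H = 10
NONBLACK_NONRAVENS = 900_000
PRIOR_H = 0.5

def posterior(p_e_if_h, p_e_if_not_h, prior=PRIOR_H):
    numerator = prior * p_e_if_h
    return numerator / (numerator + (1 - prior) * p_e_if_not_h)

# Sample a raven at random and find it is black.
raven_boost = posterior(1.0, (RAVENS - NONBLACK_RAVENS_IF_NOT_H) / RAVENS) - PRIOR_H

# Sample a non-black object at random and find it is not a raven (a green apple).
apple_boost = posterior(
    1.0, NONBLACK_NONRAVENS / (NONBLACK_NONRAVENS + NONBLACK_RAVENS_IF_NOT_H)
) - PRIOR_H

print(raven_boost)  # a real confirmation boost
print(apple_boost)  # technically positive, but vanishingly small
```

Both observations confirm the hypothesis, but the green apple's contribution is on the order of parts per million, which is why the paradox is harmless in practice.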

Critics might note that this solution relies heavily on the choice of prior probabilities, which can still be subjective. If one's priors are skewed, the mathematical machinery might still yield counterintuitive results. Yet, Easwaran's analysis suggests that the paradox arises more from a misunderstanding of what "confirmation" means in a quantitative sense than from a flaw in logic itself. He concludes that "on their account neither of these confirmation relations necessarily holds" in any meaningful way that would disrupt scientific inquiry.

Bottom Line

Easwaran's analysis succeeds in demystifying the mathematics of belief, transforming abstract probability theory into a practical lens for evaluating evidence. The strongest part of the argument is the clear distinction between "firmness" and "increase in firmness," a concept that should be standard in any discussion of scientific data. The biggest vulnerability remains the reliance on subjective priors, which, while mathematically manageable, can still lead to divergent conclusions among experts with different starting assumptions. For the busy professional, the takeaway is clear: don't just ask if you believe something; ask how much your belief should change given the new information.

Sources

Let's read! Kenny Easwaran, 2011, "Bayesianism II: Applications and Criticisms"

by Kenny Easwaran (video)

This is a video on the second part of my paper on Bayesianism in Philosophy Compass; I'll link the video to the first part in the description. I'm Kenny Easwaran, and as a reminder, although the paper says University of Southern California, I'm currently at Texas A&M University. I'll link my website in the description, and that'll have updated information whenever you're watching this. Philosophy Compass is a journal aimed at orienting people in a topic in philosophy, so there's not much original work in these papers, but there are a lot of pointers to things that you can read about in more detail elsewhere.

As a reminder from the first video, the basic idea of Bayesianism is that there's a concept of Bayesian probability, or credence, or degree of belief, that represents how confident someone is in something. Unlike traditional views in epistemology, we don't just believe or disbelieve something but instead have something that comes in degrees, which we can measure with the mathematics of probability. Unlike other views of probability, Bayesian probability is not about chances in the world or frequencies of repeatable events, but it's rather a subjective and personal thing about how confident someone may or may not be in things they're reasoning about with uncertainty. When you gather new evidence, you don't so much change what you believe as you update your probabilities. Most of the time you'll never reach full certainty, one or zero, for the interesting claims; they'll just move up or down.

The first part of the paper, discussed in the previous video, goes more into these basic concepts and the sorts of arguments that philosophers use for why it makes sense for these degrees of belief to be thought of as probabilities. Basically, if we think of these degrees of belief as determining how someone acts in light of the uncertainty in the world, then these acts would somehow be self-defeating if the degrees of belief weren't like probabilities. That is, you'd be able to tell that there's a set of acts where you think of each one as favorable, but you'd be able to see that if you did all of them, then on any possibility you could imagine, the net result would be worse than if you had done none of them. This is all to say that if you're going to act in light of ...