I think in the real world there are many situations where (if we were to put explicit Bayesian probabilities on such beliefs, which we almost never do) beliefs with ex ante ~0 credence quickly get extraordinary updates. My favorite example is sense perception. If I woke up after sleeping on a bus and were to put an explicit Bayesian probability on what I will see the next time I open my eyes, the credence I'd assign to the true outcome (ignoring practical constraints like computation and my near inability to form visual imagery) is ~0. Yet it's easy to get extraordinarily strong Bayesian updates: I just open my eyes. In most cases, that single observation is enough of an update that I simply trust what I see and go on my merry way.
But suppose I open my eyes and instead see people who are approximate lookalikes of dead US presidents sitting around the bus. At that point (even though the ex ante probability of this outcome is not much lower than that of any other specific scene I might have seen), I will correctly be surprised, and have some reason to doubt my sense perception.
Likewise, if instead of saying your name is Mark Xu, you said it was “Lee Kuan Yew”, I at least would be pretty suspicious of whether your actual name really is Lee Kuan Yew.
I think a lot of this confusion in intuitions can be resolved by looking at what MacAskill calls the difference between unlikelihood and fishiness:

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form. But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards.

Being fishy is different than just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled) * P(cards in perfect order | cards not shuffled) >> P(cards shuffled) * P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
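To put rough numbers on the card example, here's a minimal sketch in Python. The prior on the deck being unshuffled and the chance of a perfect-order deal given an unshuffled deck are made-up illustrative values, not anything from MacAskill:

```python
from math import factorial

# Any *specific* 52-card sequence from a genuinely shuffled deck has probability 1/52!
p_specific_sequence = 1 / factorial(52)  # about 1.2e-68

# Made-up illustrative numbers for the "perfect order" deal:
p_shuffled = 0.9999                            # prior: the deck was almost certainly shuffled
p_not_shuffled = 1 - p_shuffled                # e.g. it's secretly a fresh, factory-ordered deck
p_order_given_shuffled = p_specific_sequence   # perfect order is just one specific sequence
p_order_given_not_shuffled = 0.01              # unshuffled decks often come in perfect order

# Posterior odds that the deck was NOT shuffled, given a perfect-order deal
posterior_odds = (p_not_shuffled * p_order_given_not_shuffled) / (
    p_shuffled * p_order_given_shuffled
)
print(f"odds(not shuffled : shuffled | perfect order) = {posterior_odds:.3g}")
# ~8e+61: the tiny prior on "not shuffled" is swamped by the likelihood ratio.
# Dealing a random-seeming sequence gives no such update, because no alternative
# hypothesis makes that particular sequence likely.
```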
Put another way, we can dissolve this by looking explicitly at Bayes’ theorem:

P(Hypothesis | Evidence) = P(Evidence | Hypothesis) * P(Hypothesis) / P(Evidence)

and in turn,

P(Evidence) = P(Evidence | Hypothesis) * P(Hypothesis) + P(Evidence | OtherHypotheses) * P(OtherHypotheses)
P(Evidence | Hypothesis) is high in both the “fishy” and “non-fishy” regimes. However, P(Evidence | OtherHypotheses) is much higher in the fishy regime than in the non-fishy regime, even if the surface-level evidence looks similar!
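As a concrete (if toy) illustration of that asymmetry, here's a sketch using the name example from above; every probability is made up purely for illustration:

```python
def posterior(prior_h, p_e_given_h, p_e_given_other):
    """Bayes' theorem with P(Evidence) expanded over the hypothesis and everything else."""
    p_other = 1 - prior_h
    p_evidence = p_e_given_h * prior_h + p_e_given_other * p_other
    return p_e_given_h * prior_h / p_evidence

# Evidence: a stranger on the bus tells me his name.
# Hypothesis: the stated name is his real name, so P(Evidence | Hypothesis) ~ 1 either way.
prior = 1e-6  # illustrative prior that this particular stranger has the specific name in question

# Non-fishy: "Mark Xu". Almost no alternative hypothesis (joking, lying, mishearing)
# makes a stranger likely to claim this particular unremarkable name.
print(posterior(prior, p_e_given_h=1.0, p_e_given_other=1e-8))   # ~0.99

# Fishy: "Lee Kuan Yew". Alternative hypotheses (he's joking, it's a bit) make this
# particular famous name a fairly likely thing to say.
print(posterior(prior, p_e_given_h=1.0, p_e_given_other=1e-4))   # ~0.01
```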