Great post! I like your ‘1 in a million’ threshold as a heuristic, or perhaps sufficient condition for being non-Pascalian. But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently “objective” or robustly grounded.
Quick argument for this conclusion: just imagine scaling up the voting example. It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is inversely proportional (say within an order of magnitude of 1/N, or better). So long as scale and probability remain approximately inversely proportional, it doesn’t seem to make a difference to the choice-worthiness of voting what the precise value of N is here.
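The scaling argument can be made concrete with a toy calculation (all numbers hypothetical): if your chance of being decisive is roughly 1/N and the benefit scales with the N people affected, the expected value of voting doesn't change as N grows.

```python
# Toy version of the voting argument (all numbers hypothetical).
# If the chance of being decisive is ~1/N and the benefit scales
# with the N people affected, expected value is flat in N.

def expected_value(n_affected, benefit_per_person=1.0):
    p_decisive = 1.0 / n_affected  # assumed ~1/N chance of mattering
    total_benefit = benefit_per_person * n_affected
    return p_decisive * total_benefit

for n in (10**4, 10**6, 10**9, 10**12):
    print(f"N = {n:>13}: EV = {expected_value(n)}")
```

However tiny p_decisive gets, the product stays constant, which is why the precise value of N seems not to matter.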
Crucially, there are well-understood mechanisms and models that ground these probability assignments. We’re not just making numbers up, or offering a purely subjective credence. Asteroid impacts seem similar. We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly. However tiny the probabilities may be, if they are well-grounded, they’re not “Pascalian”.
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
I like the previous paragraph as a quick solution to “Pascal’s mugging”. But even if you don’t think it works, I think this distinction between robust vs non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian vs Pascalian tiny-probability gambles.
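The Bayesian-adjustment point can be sketched with a standard normal-normal update. This is my own illustration, not from Karnofsky's post, and every parameter value below is a made-up assumption: the key behaviour is just that the noisier the estimate, the more it shrinks toward the prior.

```python
# Hedged sketch of a Karnofsky-style Bayesian adjustment, assuming a
# normal prior over the true value and normally-distributed estimate
# error. All parameter values are illustrative assumptions.

def posterior_mean(estimate, prior_mean=0.0, prior_sd=1.0, estimate_sd=1.0):
    # Standard normal-normal shrinkage: the weight on the estimate
    # falls as the estimate's error variance grows.
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)
    return prior_mean + w * (estimate - prior_mean)

# A robustly grounded claim of value 100 (small error) barely shrinks:
print(posterior_mean(100, estimate_sd=0.1))
# The same claim with huge uncertainty is discounted to almost nothing:
print(posterior_mean(100, estimate_sd=100))
```

A non-robust estimate that "could easily be off by many orders of magnitude" corresponds to a huge estimate_sd, and the adjustment wipes out nearly all of its claimed expected value.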
Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.
I agree with this.
People generally find it difficult to judge the size of small probabilities that lack robust epistemic support. That makes them susceptible to conmen telling stories of potential events which, though unlikely (by the listener's own estimate), have substantial expected value due to the huge payoffs were they to occur (akin to Pascal's mugging). It may be that people have developed defence mechanisms against this, and reject claims of large expected value involving non-robust probabilities in order to avoid extortion. I once had plans to study this psychological hypothesis empirically, but abandoned them.
Thanks for this, Richard.
As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) answer "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk, or at least 'this has happened before' case studies (cf. asteroids). Although you can motivate more longtermist issues by similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?"), you typically have less to go on and are reaching further from it.
I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior, or a discount for 'distance from solid epistemic ground') rather than with a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you; but then you should take these 'all things considered' estimates at face value.
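One way to cash out the 'discount, then take at face value' policy is as an explicit orders-of-magnitude haircut on the raw probability estimate (the numbers below are entirely hypothetical):

```python
# Toy quantification of 'discount, then take at face value', with
# entirely made-up numbers: shave a chosen number of orders of
# magnitude off a probability estimate that seems outlandish, then
# run the expected-value calculation on the adjusted figure.

def adjusted_ev(payoff, raw_probability, oom_discount):
    """Expected value after discounting the raw probability estimate
    by oom_discount orders of magnitude."""
    return payoff * raw_probability / 10**oom_discount

# e.g. a 10% raw estimate of averting a payoff-1e12 outcome,
# discounted by 3 orders of magnitude for outlandishness:
print(adjusted_ev(payoff=1e12, raw_probability=0.1, oom_discount=3))
```

The point of the policy is that once the discount is applied, the adjusted figure is the one you act on, rather than applying a further qualitative veto.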