Great post! I like your "1 in a million" threshold as a heuristic, or perhaps sufficient condition for being non-Pascalian. But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently "objective" or robustly grounded.
Quick argument for this conclusion: just imagine scaling up the voting example. It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is roughly inversely proportional to N (say, within an order of magnitude of 1/N, or better). So long as scale and probability remain approximately inversely proportional, it doesn't seem to make a difference to the choice-worthiness of voting what the precise value of N is.
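To make the arithmetic explicit (with illustrative numbers of my own, not from the post): if the value at stake grows roughly as v·N and the chance of being decisive shrinks roughly as k/N, then the expected value of voting is about (k/N) × (v·N) = k·v, which doesn't depend on N at all. A 1-in-10-million chance of swinging an election affecting 10 million people is, on this picture, no more Pascalian than a 1-in-10,000 chance of swinging one affecting 10,000.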
Crucially, there are well-understood mechanisms and models that ground these probability assignments. We're not just making numbers up, or offering a purely subjective credence. Asteroid impacts seem similar. We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly. However tiny the probabilities may be, if they are well-grounded, they're not "Pascalian".
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They're more or less made up, and could easily be "off" by many, many orders of magnitude. Per Holden Karnofsky's argument in "Why we can't take explicit expected value estimates literally", Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
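As a minimal sketch of that kind of adjustment (assuming, purely for illustration, a normal sceptical prior and normally distributed estimate error; the adjusted_ev helper and all the numbers below are hypothetical, not Karnofsky's actual model):

```python
# Toy normal-normal Bayesian adjustment: an explicit expected-value estimate
# is shrunk toward a sceptical prior, and the shrinkage is harsher the less
# robust (i.e. the noisier) the estimate is.

def adjusted_ev(estimate, estimate_sd, prior_mean=0.0, prior_sd=1.0):
    """Posterior mean when the prior and the estimate's error are both normal."""
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight given to the estimate
    return w * estimate + (1 - w) * prior_mean

# A robustly grounded estimate (tight error bars) survives nearly intact...
print(adjusted_ev(estimate=10.0, estimate_sd=0.1))   # ~9.9
# ...while a huge but made-up estimate is discounted almost entirely.
print(adjusted_ev(estimate=1e6, estimate_sd=1e6))    # ~1e-06
```

The qualitative upshot matches the paragraph above: the larger the claimed impact relative to how well-grounded the estimate is, the more of it the adjustment takes back.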
I like the previous paragraph as a quick solution to "Pascal's mugging". But even if you don't think it works, I think this distinction between robustly and non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian from Pascalian tiny-probability gambles.
Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.
I agree with this.
People generally find it difficult to judge the size of these kinds of small probabilities that lack robust epistemic support. That means they could be susceptible to conmen telling them stories of potential events which, though unlikely (according to the listener's estimate), have substantial expected value due to the huge payoffs were they to occur (akin to Pascal's mugging). It may be that people have developed defence mechanisms against this, and reject claims of large expected value involving non-robust probabilities in order to avoid extortion. I once had plans to study this psychological hypothesis empirically, but abandoned them.
Thanks for this, Richard.
As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) answer "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk, or at least "this has happened before" case studies (cf. asteroids). Although you can motivate more longtermist issues via similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?"), you typically have less to go on and are reaching further from it.
I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior, or a discount for "distance from solid epistemic ground") rather than with a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you, but then you should take these "all things considered" estimates at face value.