Thanks for this, Richard.
As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) generate "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk, or at least "this has happened before" case studies (cf. asteroids). Although you can motivate more longtermist issues via similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?"), you typically have less to go on and are reaching further from it.
I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior or a discount for "distance from solid epistemic ground") than with a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you, but then you should take these "all things considered" estimates at face value.