Richard Chappell writes something similar here, better than I could. Thanks Lizka for linking to that post!
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
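To make the discounting mechanism concrete, here is a minimal sketch of a normal-normal Bayesian adjustment in the spirit of Karnofsky's post (this is an illustrative model, not his exact one; the function name `adjusted_ev` and the prior parameters are my own). The point it shows: when an estimate's error bars scale with its claimed impact, as with made-up Pascalian numbers, the posterior collapses back toward the prior.

```python
def adjusted_ev(x, se, mu0=1.0, s0=10.0):
    """Posterior mean of true impact, given:
    - a prior Normal(mu0, s0^2) on impact,
    - an explicit estimate x with error Normal(0, se^2).
    The posterior mean is the precision-weighted average of the two.
    """
    w = s0**2 / (s0**2 + se**2)  # weight placed on the explicit estimate
    return w * x + (1 - w) * mu0

# A reasonably robust estimate keeps much of its face value:
print(adjusted_ev(x=100, se=5))      # ~80: modest discount

# A non-robust estimate, whose error is as large as its claimed
# impact, is discounted almost entirely back to the prior:
print(adjusted_ev(x=1e9, se=1e9))    # ~1: the huge claim washes out
```

Note the asymptotics: with `se` proportional to `x`, the weight `w` shrinks like `1/x²`, so the posterior contribution of the estimate shrinks like `1/x`, which is the sense in which the discount grows "roughly in proportion" to the claimed impact.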
Maybe I should have titled this post differently, for example “Beware of non-robust probability estimates multiplied by large numbers”.