I especially appreciated this quote, which helped me verbalize one of my own intuitions:
It would be naive to say something like this: “Even if the probability of extinction is one in a billion and the probability of preventing it is one in a trillion, we should still prioritize x-risk reduction over everything else…” The reason it would be naive is that tiny probabilities of vast utilities pose a serious unsolved problem, and there are various potential solutions that would undermine any project with probabilities so low.
On another thread, I laid out my own approach to thinking about Pascal’s Mugging: Essentially, I start to penalize or even ignore probabilities of averting extinction once they become so unlikely that humanity can’t “buy” enough of them to have a reasonable chance of survival (definitions of “reasonable” may vary).
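To make the heuristic concrete, here is a minimal sketch of the kind of thresholding rule described above. Everything here is illustrative: the function name, the `budget_interventions` parameter, and the 50% "reasonable chance" cutoff are all assumptions I've made up for the example, not anything from the original discussion.

```python
# Hypothetical sketch of the "can humanity buy enough chances?" heuristic.
# All names and numbers below are illustrative assumptions.

def adjusted_value(p_avert, utility, budget_interventions, min_survival_odds=0.5):
    """Expected value, but zeroed out when the probability is too small to 'buy'.

    If funding `budget_interventions` independent chances, each succeeding with
    probability `p_avert`, still leaves the combined odds of averting extinction
    below `min_survival_odds`, treat the opportunity as worthless.
    """
    # Chance that at least one of n independent tries succeeds: 1 - (1 - p)^n
    combined = 1 - (1 - p_avert) ** budget_interventions
    if combined < min_survival_odds:
        return 0.0  # too unlikely for humanity to buy a reasonable chance
    return p_avert * utility

# A 1-in-1,000 chance clears the bar if we can afford 1,000 independent tries
# (combined odds ~63%), so it keeps its ordinary expected value...
print(adjusted_value(1e-3, 1e6, budget_interventions=1000))
# ...but a 1-in-a-trillion chance is ignored outright, however vast the payoff.
print(adjusted_value(1e-12, 1e18, budget_interventions=1000))
```

The interesting design choice is that the filter depends on the whole portfolio (how many chances we can buy), not on the single probability in isolation, which is what separates this from a bare "ignore anything below 10^-k" rule.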
Another way to think about this: I don’t think we’re very good at determining the difference between “one in a trillion” and “one in a hundred trillion” when those kinds of determinations require speculative models (rather than using well-known physical laws). And the smaller the odds, the worse our predictions become. If a mugger tells me I’ve got a 1/10^10^10 chance of saving his universe, I can’t think of any way to verify whether that number is accurate; it might as well be 1/10^10^10^10.
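As a side note on just how far beyond our grasp these numbers are: even double-precision floating point, which comfortably spans the probabilities of ordinary physical events, cannot represent 1/10^10^10 at all; it underflows to exactly zero, as does every smaller probability. This is only an illustration of scale, not an argument.

```python
# Doubles can distinguish tiny probabilities only down to roughly 1e-323.
p1 = 10.0 ** -300   # well within double-precision range
p2 = 10.0 ** -320   # subnormal, but still nonzero
print(p1 > p2)      # True: these two are still distinguishable
print(10.0 ** -400) # 0.0: anything this small underflows to exactly zero
```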
This is something like “reject open-mindedness”, but it’s not so much a rejection as an avoidance, fueled by confusion about very small numbers and by doubt that humans will be able to distinguish probabilities at that scale before the advent of an AGI powerful enough to make our philosophy seem meaningless. (Fortunately for me, I think that my best giving opportunities are likely enough to prevent extinction that I don’t need to do mugging math.)
It’s plausible that I’ve just repeated something from the original Less Wrong discussion about this, and I’d love to hear better ideas from anyone on how to stop avoiding open-mindedness.