0.15% by 2100 seems pretty scary (and would probably suggest spending more resources on it than we currently do).
Sure, even a 0.15% probability by itself seems scary, though it might be low enough that you start wondering about trade-offs with delaying technological progress.
Apart from that, I would be interested in how people with a much higher P(doom) than that reconcile their beliefs with these numbers. Are there good reasons to believe that these numbers are not representative of the actual beliefs of superforecasters? Or that superforecasters are somehow systematically wrong or untrustworthy on this issue?