I agree with the general point that because you predictably expect to update downwards with more information, the risks with the least information will tend to have larger estimates. But:
For a number of risks, when you first hear and think a bit about them, it’s reasonable to have the reaction “Oh, hm, maybe that could be a huge threat to human survival” and initially assign something on the order of a 10% credence to the hypothesis that it will by default lead to existentially bad outcomes.
Really? I feel like there are so many things that could provoke that reaction from me, and I’d expect that for ~99% of them I’d later update to “no, it doesn’t really seem plausible that this is a huge threat to human survival”. If we conservatively say that I’d update to 1% on those risks, and that for the other ~1% where I updated upwards I’d update all the way to “definitely going to kill humanity”, then my current probability should be upper-bounded by 0.99 × 0.01 + 0.01 × 1.00 ≈ 0.02, or roughly 2%.
It does feel like you need quite a bit more than “hmm, maybe” to get to 10%. (Though note that “a lot of people who have thought about it a bunch are still worried” could easily get you there.)
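For concreteness, here’s a minimal sketch of that arithmetic in code (the 99/1 split and the 1%/100% endpoints are just the assumed numbers from above; the function name is illustrative):

```python
# Current credence equals expected future credence (conservation of expected
# evidence). If ~99% of "hmm, maybe" risks are expected to end near 1% and
# the remaining ~1% end at most at 100%, today's credence is bounded by the
# weighted average of those endpoints.

def expected_credence_bound(frac_down, low_end, frac_up, high_end):
    """Upper bound on current credence from the anticipated updates."""
    return frac_down * low_end + frac_up * high_end

bound = expected_credence_bound(frac_down=0.99, low_end=0.01,
                                frac_up=0.01, high_end=1.00)
print(f"upper bound on current credence: {bound:.1%}")  # ~2.0%
```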
This seems a little ungenerous to the OP.
Minor and plausible parameter changes here get us back to their beliefs.
Maybe we can accept that they encountered fewer candidate x-risks, or have a different bar for what counts as one. So maybe it’s 10 risks they consider, and 1 of those is AI risk at 100%.
And maybe for many of the other risks their valuation is 5–25% (because they have a different value system for what counts as bad or what leads to lock-in).
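A rough sketch of those parameter changes, reusing the same bound as above (the 10-risk split and the 15% midpoint are illustrative assumptions, not anything the OP committed to):

```python
# Same expected-credence bound as before, but with only 10 candidate risks,
# one of which (AI) is expected to end at 100%.

def expected_credence_bound(frac_down, low_end, frac_up, high_end):
    return frac_down * low_end + frac_up * high_end

# 10 risks considered, 1 ends at certainty, the other 9 still end at 1%:
print(f"{expected_credence_bound(0.9, 0.01, 0.1, 1.00):.1%}")  # ~10.9%

# Same split, but the other 9 risks end in the 5-25% range (say 15%):
print(f"{expected_credence_bound(0.9, 0.15, 0.1, 1.00):.1%}")  # ~23.5%
```

Either variant puts the implied current credence at or above the ~10% the OP started with.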