An issue with this, of course, is that if you think a disaster is likely to be existential (or even just likely to kill you), you have no incentive to predict it. This approach seems helpful for the sort of dynamic you laid out above, where many different issues can each cause catastrophe but no single one is existential on its own; if a threat is sufficiently catastrophic, though, it might not work.
Precisely! So you get people to predict smaller catastrophes and proxies for increasing risk level instead. Formulating the right questions and putting the answers together to estimate x-risk are the challenges.