Sounds interesting. A few questions/thoughts:

Are you just focused on any and all researchers, or researchers in some particular set of fields? Perhaps the fields most relevant to existential risk (e.g., AI, biotech, international relations)?
"This 'evidence and support' might take the form of risk estimates from Ord (2020)"
I think there's some value in updating one's beliefs based on experts' existential risk estimates, such as those from Ord. But I'd also worry about the possibility of anchoring and/or information cascades if other researchers, who might have been able to come up with their own reasonable estimates if they tried, are actively encouraged to adjust their beliefs based on existing estimates. I'd also be wary of heavily relying on any particular estimate or set of estimates without checking what other experts said about the same topic.
So it might be useful for you to draw on the existential risk estimates in this database I made, and also to just keep in mind the risks of anchoring and information cascades and try to find ways to mitigate those issues. (I'll be discussing these topics more in a lightning talk at EAGx, and maybe in an Unconference session too.)
"In eliciting beliefs, some or all participants could be incentivised to answer with estimates as close to expert opinion as possible. (This is standard procedure in economics, although usually one is trying to elicit a participant's best guess of something closer to 'ground truth'.)"
At first, I thought you meant telling the researchers what experts thought, and then incentivising the researchers to say the same. I felt unsure what the point of that would be. But now I'm guessing you mean something like telling them they'll get some incentive if the estimate they come up with is close to experts' estimates, to encourage them to think hard? If so, what's the goal of that? I could imagine this leading to researchers giving relatively high estimates because they expect x-risk experts would do so, rather than it leading to researchers thinking hard about what they themselves should believe.
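To make that contrast concrete, here's a rough sketch (purely my own illustration, with made-up function names and numbers, not anything from your proposal) of the difference between paying for closeness to an expert benchmark and using a proper scoring rule like the Brier score (the latter only works for questions whose outcomes can eventually be observed):

```python
def proximity_to_expert_payout(estimate: float, expert_estimate: float,
                               max_payout: float = 10.0) -> float:
    """Hypothetical scheme: pay more the closer the estimate is to the expert benchmark."""
    return max_payout * (1.0 - abs(estimate - expert_estimate))


def brier_score(estimate: float, outcome: int) -> float:
    """Proper scoring rule (lower is better); minimised in expectation by reporting one's true belief."""
    return (estimate - outcome) ** 2


# A participant who privately believes the probability is 0.02 but knows the expert
# benchmark is 0.10 is pulled toward reporting 0.10 under the proximity scheme,
# whereas the Brier score gives them no reason to move away from 0.02.
print(proximity_to_expert_payout(0.10, expert_estimate=0.10))  # 10.0
print(proximity_to_expert_payout(0.02, expert_estimate=0.10))  # 9.2
```

That's the worry in code form: the proximity scheme rewards anchoring on the experts rather than careful independent thought.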
Finally, it seems possible that the 'crucial questions for longtermists' project I'm working on might be relevant in some way for your idea. For example, perhaps it could provide some inspiration regarding things to ask the researchers about, and regarding what may underlie the differences in strategic views and choices between those researchers and the existential risk community.