Thanks for posting this. I don’t think there are any other sources you’re missing—at least, if you’re missing them, I’m missing them too (and I work at FHI). My overall feeling is that these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and Russia between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.
One question might be how important further value of information (VoI) is for particular questions. I guess the overall ‘x-risk chance’ may have surprisingly little action relevance. The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the total risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.
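To make that insensitivity concrete, here is a toy calculation in Python; the future population size, the fraction of risk removed, and the near-term benchmark are all hypothetical placeholders, not estimates anyone has defended:

```python
# Toy sketch (all numbers hypothetical): how the case for x-risk
# reduction varies with the assumed total risk. Assume 10^16 potential
# future lives, an intervention that removes a relative 0.1% of
# whatever total risk exists, and a strong near-term benchmark
# intervention saving ~10^6 lives.
FUTURE_LIVES = 1e16
RELATIVE_RISK_CUT = 1e-3   # intervention removes 0.1% of the total risk
BENCHMARK_LIVES = 1e6      # stand-in for a top near-term intervention

for p_risk in (1e-1, 1e-3, 1e-5):
    expected_lives = p_risk * RELATIVE_RISK_CUT * FUTURE_LIVES
    ratio = expected_lives / BENCHMARK_LIVES
    print(f"total risk {p_risk:.0e}: ~{expected_lives:.0e} expected "
          f"future lives saved ({ratio:.0e}x the benchmark)")
```

Across four orders of magnitude of total risk the intervention still dominates the benchmark, so the ranking of causes is unchanged; the absolute estimate only starts to matter at the Pascalian extremes.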
Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can generally be made in relative terms, without having to cash out the absolute values.
> The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the total risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, population ethics, etc.
I think differences over that range matter a lot, both within a long-termist perspective and across a pluralist distribution of perspectives.
At the high end of that range, the low-hanging fruit of x-risk reduction will also be very effective at saving the lives of already existing humans, making the case for those interventions much less dependent on concern for future generations (the sketch at the end of this comment illustrates the contrast).
At the low end, trajectory changes other than existential-risk reduction look more important within a long-termist frame, as does capacity building for later challenges.
The magnitude of risk also feeds importantly into processes for allocating effort under moral uncertainty and moral pluralism.
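A companion toy calculation (again with hypothetical numbers: a current world population of ~8×10^9 and an intervention that removes 1% of whatever total risk exists) shows why the high end of the range competes on present lives alone:

```python
# Toy sketch (hypothetical numbers) for the high-end vs low-end
# contrast: if an intervention removes a relative 1% of the total
# risk, how many already existing people does it save in expectation?
WORLD_POPULATION = 8e9

for p_risk in (1e-1, 1e-5):
    absolute_cut = 0.01 * p_risk   # remove 1% of the total risk
    present_lives = absolute_cut * WORLD_POPULATION
    print(f"total risk {p_risk:.0e}: ~{present_lives:.0e} expected "
          f"present lives saved")
```

At 10^-1 total risk the intervention saves millions of present lives in expectation and is competitive with top near-term interventions on its own terms; at 10^-5 it saves hundreds, and the long-termist or capacity-building framing has to do the work.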