Note that no one should quote the above map out of context and call it “The likely future” or something like that, because some of the scenarios I listed may be highly improbable and because the whole map is drawn with a focus on things that could go wrong. If we wanted a map that also tracked outcomes with astronomical amounts of happiness, it would also need many nodes for things like “happy subroutines,” “mindcrime-opposite,” “superhappiness-enabling technologies,” or “unaligned AI trades with aligned AI and does good things after all.” There can be futures in which several s-risk scenarios come to pass at the same time, as well as futures that contain s-risk scenarios but also a lot of happiness (this seems pretty likely).
I like this map! Do you know of anything that attempts to assign probabilities (even very vague/ballpark) to these different outcomes?
As someone who is not particularly “downside-focused,” one thing I find difficult in evaluating the importance of prioritising s-risks vs extinction risks (and then the different interventions that could be used to address them) is that I can’t get my head around which sorts of outcomes seem most likely. Given my lack of knowledge about the different risk factors, I mostly just treat each of the different possible outcomes on your map and the hypothetical “map that also tracked outcomes with astronomical amounts of happiness” as being roughly equal in probability.
Sorry for the delayed answer; I had this open but forgot.
> I like this map! Do you know of anything that attempts to assign probabilities (even very vague/ballpark) to these different outcomes?
Not in any principled way, no. I think the action thresholds (“How large or small would the probability have to be to make an actionable difference for me?”) are quite low if you’re particularly suffering-focused, and quite high if you have a symmetrical/upside-focused view. (This distinction is crude, and nowadays I’d caveat that some plausible moral views might not fit on the spectrum.) So in practice, I’d imagine that cruxes are rarely about the probabilities of these scenarios. Still, I think it could be interesting to think about their plausibility and likelihood in a systematic fashion.
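To make the threshold point slightly more concrete, here is a minimal sketch in Python (entirely my own illustration: the function, the stakes, and the weights are hypothetical numbers, not anything derived from the map). It shows how the scenario probability at which s-risk work starts to beat x-risk work in expected value shrinks as the moral weight placed on suffering grows:

```python
# Toy model of the "action threshold" above (hypothetical numbers only):
# at what scenario probability p does averting an s-risk match the
# expected value of an x-risk intervention?

def break_even_probability(upside: float, downside: float,
                           suffering_weight: float) -> float:
    """Smallest p such that p * suffering_weight * downside >= upside,
    i.e. the probability at which the s-risk becomes action-relevant."""
    return upside / (suffering_weight * downside)

# Hold the stakes fixed (one unit each way) and vary only the moral view:
for view, weight in [
    ("symmetric/upside-focused", 1.0),
    ("mildly suffering-focused", 10.0),
    ("strongly suffering-focused", 1000.0),
]:
    p = break_even_probability(upside=1.0, downside=1.0, suffering_weight=weight)
    print(f"{view:26} -> actionable once p > {p:g}")
```

On these made-up numbers, a symmetric view needs the s-risk scenario to be near-certain before it dominates, while a strongly suffering-focused view already acts on p > 0.001, which is why the cruxes tend to be the moral weights rather than the scenario probabilities.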
> Given my lack of knowledge about the different risk factors, I mostly just treat each of the different possible outcomes on your map and the hypothetical “map that also tracked outcomes with astronomical amounts of happiness” as being roughly equal in probability.
At the extremes (very good outcomes vs. very bad ones), the good outcomes seem a lot more likely, because future civilization would want to intentionally bring them about. For the very bad outcomes, things not only have to go wrong, they have to go wrong in very specific ways.
For the less extreme cases (moderately good vs. moderately bad), I think a range of probability assignments is defensible, and treating them as similarly likely certainly seems reasonable.