Sorry for the delayed answer; I had this open but forgot.
I like this map! Do you know of anything that attempts to assign probabilities (even very vague/ballpark) to these different outcomes?
Not in any principled way, no. I think the action threshold (“How large/small would the probability have to be in order to make a for-me-actionable difference?”) is quite low if you’re particularly suffering-focused, and quite high if you have a symmetrical/upside-focused view. (This distinction is crude, and nowadays I’d caveat that some plausible moral views might not fit on the spectrum.) So in practice, I’d imagine that cruxes are rarely about the probabilities of these scenarios. Still, I think it could be interesting to think about their plausibility and likelihood in a systematic fashion.
Given my lack of knowledge about the different risk factors, I mostly just treat each of the different possible outcomes on your map and the hypothetical “map that also tracked outcomes with astronomical amounts of happiness” as being roughly equal in probability.
At the extremes (very good outcomes vs. very bad ones), the good outcomes seem a lot more likely, because future civilization would want to intentionally bring them about. For the very bad outcomes, things don’t just have to go wrong; they have to go wrong in very specific ways.
For the less extreme cases (moderately good vs. moderately bad), I think most options are defensible and treating them as similarly likely certainly seems reasonable.