To what degree are the differences between longtermists who prioritize s-risks and longtermists who prioritize x-risks driven by moral disagreements about the relative importance of suffering versus happiness, rather than by factual disagreements about the relative magnitude of s-risks versus x-risks?
Most suffering-focused EAs I know agree about the facts: there’s a small chance that AI-powered space colonization will create flourishing futures highly optimized for happiness and other forms of moral value, and this small chance of a vast payoff dominates the expected value of the future on many moral views. I think people generally agree that the typical/median future scenario will be much better than the present (for reasons like this one, though there’s much more to say about that), though in absolute terms probably not nearly as good as it could be.
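To make the expected-value point concrete, here is a toy calculation with made-up numbers (purely illustrative, not anyone's actual estimates): even a small probability of an astronomically good outcome can dominate the expected value of the future.

```python
# Purely illustrative numbers: why a small chance of a vast payoff can dominate.
p_flourishing = 0.01    # small chance of a future highly optimized for value
v_flourishing = 1e30    # made-up stand-in for an astronomically large payoff
p_median = 0.99         # the "typical/median" scenario
v_median = 1e10         # made-up stand-in for "much better than the present"

expected_value = p_flourishing * v_flourishing + p_median * v_median
print(expected_value)                                  # ~1e28
print(p_flourishing * v_flourishing / expected_value)  # ~1.0: the tail outcome dominates
```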
So in my perception, most of the disagreement comes from moral views, not from perceptions of the likelihood or severity of s-risks.
Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach of distinguishing between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs regarding the ratio of future happiness and suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).
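As a rough illustration of how the two ratios interact (this is a toy formalisation with made-up numbers, not Althaus's exact model): if the N-ratio is the number of units of happiness you require to outweigh one unit of suffering, and the E-ratio is the expected units of future happiness per expected unit of future suffering, then on this simple model the future has positive expected value for you only if E exceeds N.

```python
# Toy sketch of the N-ratio / E-ratio idea (illustrative numbers only).
def future_seems_net_positive(n_ratio: float, e_ratio: float) -> bool:
    """On this simple model, the future has positive expected value iff E > N."""
    # n_ratio: units of happiness needed to morally outweigh one unit of suffering
    # e_ratio: expected units of future happiness per unit of future suffering
    return e_ratio > n_ratio

print(future_seems_net_positive(n_ratio=100, e_ratio=10))  # False: s-risks loom large
print(future_seems_net_positive(n_ratio=1, e_ratio=10))    # True: extinction risk looms larger
```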
That framework suggests that moral and factual disagreements are comparably important. But if I had to pick one, I'd guess that moral disagreements are the bigger factor, because there is perhaps more convergence (though not necessarily a high degree in absolute terms) on empirical matters. In my experience, many who prioritise suffering reduction still agree, to some extent, with some of the arguments for optimism about the future (although not with extreme versions, such as claiming that the ratio of future happiness to suffering is "1000000 to 1", or that the future will automatically be amazing if we avoid extinction). For instance, if you combined my factual beliefs with the values of, say, Will MacAskill, I think the result would probably not rank s-risks as a top priority (though still worthy of some concern).
In addition, I increasingly think that "x-risk vs. s-risk" is something of a false dichotomy, and thinking in those terms may not always be helpful (despite having written a great deal on s-risks myself). There are far more ways to improve the long-term future than this framing suggests, and we should look for interventions that steer the future in robustly positive directions.
I’m also interested in answers to this question. I’d add the following nit-picky points:
X-risks and s-risks are substantially overlapping categories (in particular, many unrecoverable-dystopia scenarios also involve astronomical suffering), so a more fruitful framing might be prioritisation of s-risks vs. other x-risks, of s-risks in particular vs. x-risks as a whole, or of s-risks vs. extinction risks.
There could also be other moral or factual disagreements that help explain differences in the extent to which different longtermists prioritise s-risks relative to other x-risks.
In particular, I tentatively suspect that there’s a weak/moderate correlation between level of prioritisation of s-risks and level of moral concern for nonhuman animals.
If this correlation exists, I expect it'd partly reflect a factual disagreement about sentience (and thus, arguably, a factual disagreement about the relative magnitudes of s-risks and x-risks), but also partly a moral disagreement about how much moral weight/moral status animals warrant.
And I'd expect the correlation to be partly "mere correlation" and partly a reflection of moral concern for animals actually influencing these prioritisation decisions.
Strong-upvoted this question. Follow-up question: what kind of research could resolve any factual disagreements?