What grand futures do suffering-focused altruists tend to imagine? Or, in other words, what do plausible win conditions look like?
One is the Hedonistic Imperative, or suffering abolition: using biotechnology to modify sentient life so that it does not suffer (or at least does not suffer significantly), and perhaps also ensuring that artificial sentience does not suffer. David Pearce, a negative utilitarian, is the founding figure for this.
It might interest some that Pearce is (or was) skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to “What does David Pearce think about S-risks (suffering risks)?” on Quora, where he also mentions the moral hazard of “understanding the biological basis of unpleasant experience in order to make suffering physically impossible”.
I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn’t necessarily need to be entirely suffering-focused), as well as stable mechanisms for positive-sum cooperation or compromise. The latter ensures that moral concerns are taken into account and that possible gains from trade can be realized. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)
However, I think suffering reducers in particular should perhaps not focus on imagining best-case outcomes. It is plausible (though not obvious) that we should prioritize preventing worst-case outcomes over aiming for utopian ones, as the difference in expected suffering between a worst-case outcome and the median outcome may be much greater than the difference between the median outcome and the best possible future.
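To make that asymmetry concrete with purely illustrative, made-up numbers: if a worst-case future contains on the order of 10^6 units of expected suffering, the median future 10^3, and the best-case future close to zero, then shifting probability mass from the worst case toward the median averts roughly a thousand times more expected suffering than shifting the same mass from the median toward the best case. Nothing hangs on these particular figures; the point is only that heavy-tailed downside risk can dominate the calculus.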