I think it’s worth clarifying that you mean worse-than-extinction futures according to asymmetric views. S-risks can still happen in a better-than-extinction future according to classical utilitarianism, say, and could still be worth reducing.
There might be other interventions to increase wellbeing according to some person-affecting views, by increasing positive wellbeing without requiring additional people, but do any involve attractor states? Maybe genetically engineering humans to be happier, or otherwise optimizing our descendants (possibly non-biological) for happiness? Maybe it’s better to do this before space colonization, but I think intelligent moral agents would still be motivated to improve their own wellbeing after colonization, so it might not be so pressing for them, although it could be for moral patients with too little agency if we send them out on their own.
Yeah, this is true. On this, I’ve previously written that:

Two mistakes people sometimes make are discussing s-risks as if they’re entirely distinct from existential risks, or discussing s-risks as if they’re a subset of existential risks. In reality:
There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
[...]
But there could also be suffering catastrophes that aren’t existential catastrophes, because they don’t involve the destruction of (the vast majority of) humanity’s long-term potential.
This depends on one’s moral theory or values (or the “correct” moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity’s long-term potential.
For example, the Center on Long-Term Risk notes: “Depending on how you understand the [idea of loss of “potential” in definitions] of [existential risks], there actually may be s-risks which aren’t [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness.”
In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.
Your second paragraph makes sense to me, and is an interesting point I don’t think I’d thought of.