Two mistakes people sometimes make are discussing s-risks as if they're entirely distinct from existential risks, or discussing s-risks as if they're a subset of existential risks. In reality:
There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
[...]
But there could also be suffering catastrophes that aren't existential catastrophes, because they don't involve the destruction of (the vast majority of) humanity's long-term potential.
This depends on one's moral theory or values (or the "correct" moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity's long-term potential.
For example, the Center on Long-Term Risk notes: "Depending on how you understand the [idea of loss of 'potential' in definitions] of [existential risks], there actually may be s-risks which aren't [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness."
In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.
Your second paragraph makes sense to me, and is an interesting point I don't think I'd thought of.
Yeah, this is true. On this, I've previously written that: