This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, because s-risk discourse thus often conflates two very different things. On one side are hellish futures (which are existential risks, and especially bad ones) and possibilities of suffering on a scale significant relative to the future's potential for suffering (or relative to what we might expect). On the other are bad events many orders of magnitude smaller, or futures that are utopian by common-sense standards, whether compared to our world or to the downside potential.
This is a fair enough critique. But I think that from the perspective of suffering-focused and many other non-total-symmetric-utilitarian value systems, the definition of x-risk is just as frustrating in its breadth. To such value systems, there is a massive moral difference between the badness of human extinction and a locked-in dystopian future, so they are not necessarily in “the same ballpark of importance.” The former is only critical to the upside potential of the future if one has a non-obvious symmetric utilitarian conception of (moral) upside potential, or certain deontological premises that are also non-obvious.