How I learned to stop worrying and love X-risk
If the future is bad, existential risk (x-risk) is good.
A crux of the argument for reducing x-risk, as characterised by 80,000 Hours, is that:
There has been significant moral progress over time (medical advances and so on).
Therefore, we're optimistic this will continue.
Or: people in the future will be better placed to decide whether it's desirable for civilisation to expand, stay the same size, or shrink.
However, there's another premise that contradicts the idea of leaving any final decisions to the wisdom of future generations.
The very reason many of us prioritise x-risk is that we see humanity increasingly discovering technologies with more destructive power than we have the ability to use wisely: nuclear weapons, bioweapons, and artificial intelligence.
If more recent generations are increasingly creating catastrophically risky situations, could it not then be argued that moral progress has gone backwards?
We now face s-risks (suffering risks) associated with factory farming, digital sentience, and advanced torture techniques that our ancestors did not.
If future generations morally degenerate, x-risk may in fact not be so bad. It may instead avert s-risks, such as the proliferation of wild animal suffering throughout a universe colonised from Earth.
I don't believe the future will necessarily be bad, but given the long-run trend of increasing x-risk and s-risk, I don't assume it will be good just because of medical advances, poverty reduction, and so on.
It gives me enough pause not to prioritise x-risk reduction, and to focus instead on what I see as more important causes.
FWIW standard conceptions of existential risk would categorize suffering risks as a type of existential risk. For example, Nick Bostrom has defined it as “threats that could cause our extinction or destroy the potential of Earth-originating intelligent life.” (emphasis mine)