S-risks seem like they could very well be a big part of the overall strategic picture (even when not given normative priority, just weighed as one consideration among others), and they aren’t captured by the short-term x-risk view.
An existential risk is a risk that threatens the destruction of humanity’s long-term potential. But s-risks are worrisome not only because of the potential they threaten to destroy, but also because of what they threaten to replace this potential with (astronomical amounts of suffering).
I think the “short-term x-risk view” is meant to refer to everyone dying, and ignoring the lost long-term potential. Maybe s-risks could be similarly harmful in the short term, too.
Spreading wild animals to space isn’t bad for any currently existing humans or animals, so under thoughtful short-termism it isn’t counted, or is heavily discounted. The same goes for a variety of S-risks (e.g. an eventual stable totalitarian regime 100+ years out, slow space colonization, a slow build-up of Matrioshka brains with suffering simulations/sub-routines, etc.)
Oops, thanks for the correction. To be honest, I’m not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/
Why not?