It seems odd to class this as an ‘existential’ risk: there are many ways in which we can imagine positive or negative changes to expected future quality of life (see, for example, Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful, both because it introduces definitional ambiguity over how much net welfare must be lost for an outcome to qualify, and because questions about expected future quality of life are quite distinct from questions about future quantity of life, and so seem better asked separately.
But I feel like it’d be more confusing at this point to start using “existential risk” to mean “extinction risk”, given the body of literature that’s gone in for the former?
I certainly don’t think we should keep using the old terms with different meanings. I suggest using some new, cleaner terms that are more conducive to probabilistic thinking. In practice, I’m sure people will still talk about existential risk for the reason you give, but perhaps less so, or perhaps specifically when talking about less probabilistic concepts, such as those in population ethics discussions.
Thanks for this post!
I strongly agree with this:
> But I feel like it’d be more confusing at this point to start using “existential risk” to mean “extinction risk”, given the body of literature that’s gone in for the former?

> I certainly don’t think we should keep using the old terms with different meanings. I suggest using some new, cleaner terms that are more conducive to probabilistic thinking. In practice, I’m sure people will still talk about existential risk for the reason you give, but perhaps less so, or perhaps specifically when talking about less probabilistic concepts, such as those in population ethics discussions.