Nice points, Isaac!
I would personally go a little further. I think the concept of existential risk is sufficiently vague that it is better to mostly focus on clearer metrics (e.g. a suffering-free collapse of all value would be maximally good for negative utilitarians, but would be an existential risk for most people). For example, extinction risk, the probability of a given drop in global population / GDP / democracy index, or the probability of global population / GDP / democracy index remaining smaller than the previous maximum for a certain time.
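To make the last metric concrete, here is a minimal illustrative sketch (my own hypothetical example, not anything from the original comment) of how one might check whether a yearly series stays below its previous maximum for a given number of consecutive years:

```python
import numpy as np

def below_previous_max_for(series, years):
    """Check whether a yearly series stays below its running previous maximum
    for at least `years` consecutive years at any point.

    `series` is a 1-D array of yearly values (e.g. global population, GDP,
    or a democracy index).
    """
    running_max = np.maximum.accumulate(series)
    # A year counts as "depressed" if its value sits strictly below the
    # maximum reached in any earlier year.
    prior_max = np.concatenate(([-np.inf], running_max[:-1]))
    depressed = series < prior_max
    # Length of the longest consecutive run of depressed years.
    longest, current = 0, 0
    for d in depressed:
        current = current + 1 if d else 0
        longest = max(longest, current)
    return longest >= years

# Toy example: a collapse followed by a slow recovery.
gdp = np.array([100, 105, 110, 60, 62, 70, 85, 100, 111])
print(below_previous_max_for(gdp, years=3))  # True: 5 years below the prior peak
```

Estimating the probability of such an outcome would then just mean evaluating this condition across many forecast scenarios and taking the fraction in which it holds.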