If an event occurs that permanently locks us into an astronomically good future that is <X% as valuable as the optimal future, has an existential catastrophe occurred? I’d like to use the language such that the answer is “no” for any value of X that still allows the future to intuitively seem “astronomically good.” If a future seems astronomically good, saying an existential catastrophe occurred in that future before all the good stuff happened feels wrong.
If it feels wrong to you, that’s probably a reason against being a risk-neutral, expected-value-maximizing total utilitarian. It’s possible the difference between extinction and an astronomically good default is far smaller than the difference between that default and how good the future could realistically be (say, by creating far more flourishing sentient beings), and the point of “existential risk”/“existential catastrophe” is to capture those stakes.
Also, moral uncertainty may push against going for the optimum according to any single view, since that optimum may sacrifice a lot on other views. However, as long as the individuals lead robustly positive existences across views, and you’re committed to totalism in general, then adding more individuals with similar existences is linearly better.
I said more on this thought here.