If it feels wrong to you, that’s probably a reason against being a risk-neutral, expected-value-maximizing total utilitarian. It’s possible the difference between extinction and an astronomically good default future is far smaller than the difference between that default and how good the future could realistically be (say, by creating far more flourishing sentient beings), and the point of the terms “existential risk”/“existential catastrophe” is to capture those stakes.
Also, moral uncertainty may push against going for the optimum according to a single view, since that optimum may sacrifice a lot on other views. However, as long as the individuals lead robustly positive existences across views, and you’re committed to totalism in general, adding more individuals with similar existences is linearly better.