If it feels wrong to you, that's probably a reason against being a risk-neutral, expected value-maximizing total utilitarian. It's possible the difference between extinction and an astronomically good default is far smaller than the difference between that default and how good the future could realistically be (say, by creating far more flourishing sentient beings), and the point of "existential risk"/"existential catastrophe" is to capture the stakes.
Also, moral uncertainty may push against going for the optimum according to any one view, since that optimum may sacrifice a lot on other views. However, as long as the individuals lead robustly positive existences across views, and you're committed to totalism in general, more individuals with similar existences is linearly better.