I agree it would be hard to avoid something like (2) on views that respect stochastic dominance with respect to the total welfare of outcomes, including background value (not difference-making). That includes maximizing the EV of a bounded increasing function of total welfare, as well as REU and WLU applied to total welfare, again over outcomes that include background value rather than difference-making. Tarsney, 2020 makes avoiding (2) hard, and following that argument, x-risk reduction might come out best across those views (Tarsney, 2023, footnote 43, although he says it could depend on the probabilities). See the following footnote for another possible exception based on outcome risk aversion, relevant to extinction risk reduction.[1]
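To make the mechanism concrete, here's a rough numerical sketch of the kind of comparison Tarsney's argument is about. All of the numbers, the Cauchy background distribution, and both options are toy assumptions of mine, not anything from the papers: with a sufficiently diffuse, heavy-tailed background value distribution, a near-50-50 gamble with slightly higher expected total welfare can first-order stochastically dominate an option that is much more likely to make a positive difference.

```python
# Rough numerical sketch with made-up numbers (not from Tarsney's papers).
# Total welfare = background value + the intervention's effect. The background
# is modelled as a very diffuse, heavy-tailed Cauchy distribution.
import numpy as np
from scipy.stats import cauchy

SCALE = 1e12   # scale of the background value distribution (huge and heavy-tailed)
A = 1e9        # stakes of the near-50-50 gamble
P = 0.51       # probability the gamble goes well
B = 1e7        # sure gain from the "safe" option; note B < (2P - 1) * A = 2e7

# Grid of total-welfare thresholds spanning many background scales.
t = np.linspace(-1e13, 1e13, 200_001)

# CDFs of total welfare under each option.
F_gamble = P * cauchy.cdf(t - A, scale=SCALE) + (1 - P) * cauchy.cdf(t + A, scale=SCALE)
F_safe = cauchy.cdf(t - B, scale=SCALE)

# First-order stochastic dominance of the gamble over the safe option requires
# F_gamble(t) <= F_safe(t) everywhere; check it on the grid.
gap = np.max(F_gamble - F_safe)
print(f"max CDF gap on grid: {gap:.2e} (<= 0 means the gamble dominates here)")
```

For these particular numbers the gap should come out (slightly) negative, i.e. the higher-EV gamble dominates on the grid. Swapping in a thin-tailed background (e.g. a normal with the same huge spread) breaks the dominance in the far tails, which is one way of seeing why the amount and shape of the background uncertainty matter.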
If you change the underlying order on outcomes away from total welfare, you can also prevent nearly 50-50 actions from dominating options that are more likely to make a positive difference. A steep enough geometric discounting of future welfare[2] or a low enough future cutoff for consideration (a kind of view RP considered here), plus excluding invertebrates, might work.
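To spell out the kind of discounting I have in mind (a standard geometric form, just to fix ideas; the symbols are mine, not from RP's piece):

$$V_\delta \;=\; \sum_{t=0}^{\infty} \delta^{t} w_t, \qquad 0 < \delta < 1,$$

where $w_t$ is total welfare realized in period $t$. With $\delta$ small enough, the enormous far-future stakes that drive nearly 50-50 gambles (and most of the uncertain background value) get weighted down geometrically, so nearer-term, higher-probability differences can determine the ranking again.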
I also think difference-making views, as you suggest, would avoid (2).
Basically, I'm not confident this type of modification should matter much for us. The axiom choices matter for which theory to put the most weight on, but I'm unsure the distinction buys you much practically if, say, after making those choices you still end up with a set of theoretical options that look in practice like pure EV vs. EV with rounding down vs. something like WLU vs. something like REU.
[1] Tarsney, 2020 requires a lot of very uncertain background value that's statistically independent of the effects of the intervention. But too little background value may actually be statistically independent, because a lot of things are jointly determined or correlated across the universe, e.g. sentience, moral weights, and, perhaps most importantly, (the sign of) the average welfare across the universe.
Conditional on generally horrible welfare across aliens (non-Earth-originating moral patients), we should worry more that our descendants (or Earth-originating moral patients generally) will also have horrible welfare if we don't go extinct.
Then you just need to be sufficiently risk-averse, and something slightly better than 50-50 that could make things far worse could look bad overall (see the toy sketch below).
I don't know if this actually works in practice, though. It'll depend on the particulars, and I've ignored our descendants' possible effects on aliens (and on far away moral patients, if you accept acausal influence).
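To illustrate the shape of that argument, here's a toy calculation. Everything in it is invented: a two-point background value standing in for the sign of average welfare across aliens, a near-50-50 intervention whose downside is more likely exactly when the background is bad, and risk-weighted expected utility (REU) with the convex risk function r(p) = p² as the risk-averse evaluation.

```python
# Toy illustration with invented numbers: a risk-averse evaluation (REU with
# risk function r(p) = p^2) can disprefer a gamble that is slightly better than
# 50-50 when its downside is correlated with an already-bad background.

def reu(lottery, r=lambda p: p ** 2):
    """Risk-weighted expected utility of a lottery given as (total_welfare, probability) pairs."""
    outs = sorted(lottery)                         # worst to best
    value = outs[0][0]                             # start from the worst outcome
    for i in range(1, len(outs)):
        p_at_least = sum(p for _, p in outs[i:])   # decumulative probability of doing at least this well
        value += r(p_at_least) * (outs[i][0] - outs[i - 1][0])
    return value

ev = lambda lottery: sum(w * p for w, p in lottery)

# Two-point background value standing in for the (sign of the) average welfare
# across aliens: -100 or +100, equally likely. "Do nothing" leaves just the background.
safe = [(-100, 0.5), (100, 0.5)]

# Near-50-50 intervention (e.g. preventing extinction): adds +10 with overall
# probability 0.51 and -10 with probability 0.49, but the -10 case is more likely
# conditional on the bad background (0.60 vs 0.38), reflecting the correlation above.
gamble = [
    (100 + 10, 0.5 * 0.62),   # good background, intervention goes well
    (100 - 10, 0.5 * 0.38),   # good background, intervention goes badly
    (-100 + 10, 0.5 * 0.40),  # bad background, intervention goes well
    (-100 - 10, 0.5 * 0.60),  # bad background, intervention goes badly
]

print("EV:  gamble", round(ev(gamble), 2), "vs do nothing", round(ev(safe), 2))    # 0.2 vs 0.0
print("REU: gamble", round(reu(gamble), 2), "vs do nothing", round(reu(safe), 2))  # about -53.3 vs -50.0
```

The gamble has slightly higher expected total welfare (0.2 vs 0), but because its bad case is concentrated in worlds that are already bad, the risk-averse evaluation prefers doing nothing (about -53.3 vs -50). Whether anything like this survives more realistic numbers is exactly the "depends on the particulars" caveat above.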