I’d guess that we don’t have to think much about which world we’re saving. My reasoning is that the expected value of the world in the long run is mostly predictable from macrostrategic, even philosophical, considerations. For example, agents more often seek things that make them happier. The overall level of preference fulfilment that is physically possible might be very large. There’s not much reason to think that pain is easier to create than pleasure (see [1] for an exploration of the question), and we’d expect the total amount of recreation and positive incentivization to exceed the amount of disincentivization (e.g. retribution or torture).
I think a proper analysis of this would vindicate the existential risk view as a simplification of “maximize utility” (modulo problems with infinite ethics (!)). But I agree that all of this needs to be argued for.
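To gesture at the shape of that simplification (a rough sketch, under the assumptions that the value of an extinct world is roughly zero and that the value conditional on survival doesn’t depend much on which interventions we choose):

\[
\mathbb{E}[U] = P(\text{survival}) \cdot \mathbb{E}[V \mid \text{survival}] + \bigl(1 - P(\text{survival})\bigr) \cdot \mathbb{E}[V \mid \text{extinction}]
\]

If \(\mathbb{E}[V \mid \text{extinction}] \approx 0\) and \(\mathbb{E}[V \mid \text{survival}]\) is positive and roughly fixed, then maximizing \(\mathbb{E}[U]\) reduces to maximizing \(P(\text{survival})\), i.e. to minimizing existential risk. The disagreement below is about whether \(\mathbb{E}[V \mid \text{survival}]\) really is fixed enough, and known well enough, for that reduction to go through.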
1. http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html
I agree that it’s totally plausible that, once all the considerations are properly analyzed, we’ll wind up vindicating the existential risk view as a simplification of “maximize utility”. But in the meantime, unless one is very confident or thinks doom is very near, “properly analyze the considerations” strikes me as a better simplification of “maximize utility”.
Even if you do think possible doom is near, you might want an intermediate simplification like “some people think about consequentialist philosophy while most mitigate catastrophes that would put this thinking process at risk”.
Agreed.