Then I think for practical decision-making purposes we should apply a heavy discount to world A) — in that world, what everyone else would eventually want isn’t all that close to what I would eventually want. Moreover, what me-of-tomorrow would eventually want probably isn’t all that close to what me-of-today would eventually want. So it’s much, much less likely that the world we end up with, even if we save it, is close to the ideal one by my lights. Moreover, even though these worlds possibly differ significantly, I don’t feel that from my present position I have much reason to be opinionated between them; it’s unclear that I’d greatly prefer the imperfect worlds according to the extrapolated volition of some future-me over the imperfect worlds according to the extrapolated volition of someone else I think is pretty reasonable.
1. You seem to be assuming that people’s extrapolated views in world A will be completely uncorrelated with their current views/culture/background, which seems a strange assumption to make.
2. People’s extrapolated views could be (in part) selfish or partial, which is an additional reason that the extrapolated views of you at different times may be closer to each other than to those of strangers.
3. People’s extrapolated views not converging doesn’t directly imply “it’s much, much less likely that the world we end up with, even if we save it, is close to the ideal one by my lights”, because everyone could still get close to what they want through trade/compromise, or you (and/or others with extrapolated views similar to yours) could end up controlling most of the future by winning the relevant competitions.
4. It’s not clear that applying a heavy discount to world A makes sense, regardless of the above, because we’re dealing with “logical risk”, which seems tricky in terms of decision theory.
4 is a great point, thanks.
On 1–3, I definitely agree that I may prudentially prefer some possibilities to others. I’ve been assuming that from a consequentialist moral perspective the distribution of future outcomes still looks like the one I give in this post, but I guess it should actually look quite different. (I think what’s going on is that in some sense I don’t really believe in world A, so I haven’t explored the ramifications properly.)