I agree with the general point that:
E[~optimal future] - E[good future] >> E[good future] - E[meh/no future]
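To make the shape of that claim concrete with purely hypothetical numbers (my own illustration, not from the comment or the paper): suppose

E[~optimal future] = 100, E[good future] = 1, E[meh/no future] = 0

Then E[~optimal future] - E[good future] = 99, while E[good future] - E[meh/no future] = 1, so the first gap dwarfs the second.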
It’s not totally clear to me how much can be done to optimize the chances of a ~optimal future (whereas there is probably a lot more that can be done to decrease X-risk), but I do have an intuition that some good work can be done on the issue. This does seem like an under-explored area, and I would personally like to see more research in it.
I’d also like to signal-boost this relevant paper by Bostrom and Shulman: https://nickbostrom.com/papers/digital-minds.pdf. It proposes that an ~optimal compromise (along multiple axes) between human interests and totalist moral stances could be achieved by, for instance, filling 99.99% of the universe with hedonium and leaving the rest to (post-)humans.
Thanks for sharing this paper; I had not heard of it before, and it sounds really interesting.