I just came across this post and wanted to emphasize one point related to:
We never saturate the universe with maximally flourishing beings
The term “disappointing future” is a bit of a misnomer in the sense that there are many possible “disappointing futures” that are astronomically good—and therefore do not seem disappointing at first glance.
E.g. If the intrinsic value of the year-long conscious experiences of the billion happiest people on Earth in 2020 is equal to 10^9 utilons, we could say that an “astronomically good future” is a future worth >10^30 utilons.
While many astronomically good futures are near in value to an optimal future, many are <1% as valuable as an optimal future (and even <<1% as valuable), despite how they might seem like unimaginably good Bostrom utopias to us considering them today.
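To make the magnitudes concrete, here is a small sketch of the arithmetic, using the 10^9 baseline and 10^30 threshold from the example above. The value assigned to the optimal future (10^40) is an arbitrary stand-in, not an estimate:

```python
# Illustrative (hypothetical) utilon figures, not real estimates.
BASELINE_2020 = 10**9            # value of the billion happiest year-long lives in 2020
ASTRONOMICAL_THRESHOLD = 10**30  # "astronomically good" cutoff from the example above

optimal_future = 10**40                        # assumed value of an optimal future (made up)
locked_in_future = optimal_future * 0.001      # a future <<1% as valuable as optimal

# A future can clear the astronomical bar while capturing a tiny share of the optimum.
assert locked_in_future > ASTRONOMICAL_THRESHOLD
print(f"share of optimum: {locked_in_future / optimal_future:.1%}")
```

The point is just that "astronomically good" and "close to optimal" can differ by many orders of magnitude, so both descriptions can be true of the same future.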
Such futures may involve life saturating the universe for billions of years with beings that flourish beyond our imagination today because their minds allow for much more valuable experiences than ours do, and yet they may still fall far short of what is optimal and therefore be “disappointing” and even existentially catastrophic (due to the sub-optimality getting locked in, even though the locked-in outcome is not horrible and is in fact extremely good).
The central examples of “futures with existential catastrophes” and “disappointing futures” that come to mind are not “astronomically good futures,” but thinking about these fantastic futures that still fall far short of our potential may be relevant if we’re trying to figure out how to maximize total utility.
If an event occurs that permanently locks us into an astronomically good future that is <X% as valuable as the optimal future, has an existential catastrophe occurred? I’d like to use the language such that the answer is “no” for any value of X that still allows for the future to intuitively seem “astronomically good.” If a future seems astronomically good, saying an existential catastrophe occurred in that future before all the good stuff happened feels wrong.
If it feels wrong to you, that’s probably a reason against being a risk-neutral expected value-maximizing total utilitarian. It’s possible the difference between extinction and an astronomically good default is far smaller than the difference between that default and how good it could realistically be (say by creating far more flourishing sentient beings), and the point of “existential risk”/”existential catastrophe” is to capture the stakes.
Also, moral uncertainty may push away from going for the optimum according to one view, since the optimum according to one view may sacrifice a lot on other views. However, as long as the individuals lead robustly positive existences across views, and you’re committed to totalism in general, then more individuals with similar existences is linearly better.
I said more on this thought here.