I just came across this post and wanted to emphasize one point related to:
We never saturate the universe with maximally flourishing beings
The term "disappointing future" is a bit of a misnomer in the sense that there are many possible "disappointing futures" that are astronomically good, and therefore do not seem disappointing at first glance.
E.g., if the intrinsic value of the year-long conscious experiences of the billion happiest people on Earth in 2020 is equal to 10^9 utilons, we could say that an "astronomically good future" is a future worth >10^30 utilons.
While many astronomically good futures are near in value to an optimal future, many are <1% as valuable as an optimal future (and even <<1% as valuable), even though they might seem like unimaginably good Bostrom utopias to us considering them today.
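To make the scale concrete with made-up illustrative numbers (not estimates): suppose an optimal future is worth 10^35 utilons and a locked-in future is worth 10^32 utilons. Then

$$\frac{V_{\text{locked-in}}}{V_{\text{optimal}}} = \frac{10^{32}}{10^{35}} = 0.1\%, \qquad \frac{V_{\text{locked-in}}}{V_{2020}} = \frac{10^{32}}{10^{9}} = 10^{23},$$

i.e., a future that realizes only 0.1% of the optimum is still ~10^23 times as valuable as the combined year-long experiences of the billion happiest people in 2020, and so still counts as astronomically good by the definition above.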
Such futures may involve life saturating the universe for billions of years with beings that flourish beyond our imagination today, because their minds allow for much more valuable experiences than ours do, and yet these futures may still fall far short of what is optimal and therefore be "disappointing" and even existentially catastrophic (due to this sub-optimality, which is not horrible and is in fact extremely good, getting locked in).
The central examples of "futures with existential catastrophes" and "disappointing futures" that come to mind are not "astronomically good futures," but thinking about these fantastic futures that still fall far short of our potential may be relevant if we're trying to figure out how to maximize total utility.
If an event occurs that permanently locks us into an astronomically good future that is <X% as valuable as the optimal future, has an existential catastrophe occurred? I'd like to use the language such that the answer is "no" for any value of X that still allows the future to intuitively seem "astronomically good." If a future seems astronomically good, saying that an existential catastrophe occurred in that future before all the good stuff happened feels wrong.
If it feels wrong to you, that's probably a reason against being a risk-neutral, expected-value-maximizing total utilitarian. It's possible the difference between extinction and an astronomically good default is far smaller than the difference between that default and how good it could realistically be (say, by creating far more flourishing sentient beings), and the point of "existential risk"/"existential catastrophe" is to capture the stakes.
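Using the same made-up numbers as above (extinction = 0 utilons, astronomically good default = 10^32, realistic optimum = 10^35), the two differences compare as

$$\underbrace{10^{32}-0}_{\text{default vs. extinction}} \;\ll\; \underbrace{10^{35}-10^{32}\approx 10^{35}}_{\text{realistic optimum vs. default}},$$

so on these purely illustrative numbers, preventing extinction captures less than 0.1% of the value at stake, which is the sense in which "existential catastrophe" language is meant to capture the stakes.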
Also, moral uncertainty may push away from going for the optimum according to one view, since the optimum according to one view may sacrifice a lot on other views. However, as long as the individuals lead robustly positive existences across views, and you're committed to totalism in general, then adding more individuals with similar existences is linearly better.
I said more on this thought here.