Failed continuation is where humanity doesn't go extinct, but (in Ord's phrase) "the destruction of humanity's longterm potential" still occurs in some other way (and thus there's still an existential catastrophe).
And "destruction of humanity's longterm potential" in turn essentially means "preventing the possibility of humanity ever bringing into existence something close to the best possible future". (Thus, existential risks are not just about humanity.)
It's conceivable that vast nonhuman suffering could be a feature of even the best possible future, partly because both "vast" and "suffering" are vague terms. But I mean something like astronomical amounts of suffering among moral patients. (I hadn't really noticed that the phrase I used in the diagram didn't actually make that clear.) And it seems to me quite likely that a future containing that is not close to the best possible future.
Thus, it seems to me likely that locking in such a feature of the future is tantamount to preventing us from ever achieving something close to the best possible future.
Does that address your question? (Which is a fair question, in part because it turns out my language wasn't especially precise.)
ETA: I'm also imagining that this scenario does not involve (premature) human extinction, which is another thing I hadn't made explicit.