Why is “people decide to lock in vast nonhuman suffering” an example of failed continuation in the last diagram?
Failed continuation is where humanity doesn’t go extinct, but (in Ord’s phrase) “the destruction of humanity’s longterm potential” still occurs in some other way (and thus there’s still an existential catastrophe).
And “destruction of humanity’s longterm potential” in turn essentially means “preventing the possibility of humanity ever bringing into existence something close to the best possible future”. (Thus, existential risks are not just about what happens to humans; how good the best possible future is plausibly also depends on the welfare of nonhuman moral patients.)
It’s conceivable that vast nonhuman suffering could be a feature of even the best possible future, partly because both “vast” and “suffering” are vague terms. But by “vast nonhuman suffering” I mean something like astronomical amounts of suffering among nonhuman moral patients. (I hadn’t really noticed that the phrase I used in the diagram didn’t actually make that clear.) And it seems to me quite likely that a future containing that much suffering is not close to the best possible future.
Thus, it seems to me likely that locking in such a feature of the future is tantamount to preventing us from ever achieving something close to the best possible future.
Does that address your question? (It’s a fair question, in part because it turns out my language wasn’t especially precise.)
ETA: I’m also imagining that this scenario does not involve (premature) human extinction, which is another thing I hadn’t made explicit.