Something that seems worth noting is that an existential catastrophe (or "human existential catastrophe") need not involve human extinction, nor even "killing humans to the extent that human civilisation never achieves a grand future".
It could involve something like locking in a future that's better for all humans than the current world, with no "extra" human death involved (i.e., maybe people still die of old age but not in a sudden catastrophe), but with us now being blocked from ever creating anywhere near as much value as we could've. This might be a "desired dystopia", in Ord's terms. For example, we might forever limit ourselves to the Earth but somehow maintain it far beyond its "natural lifespan", or we might create vast suffering among non-humans.
I mention this here for two reasons:
First, such an existential catastrophe could involve humans still being around, and therefore still affecting the probability and nature of any future evolution of moral agents on Earth, or the evolution and development of intelligent life elsewhere. This might be similar to the scenario you note in which a misaligned AGI could prevent future evolution of intelligent life, but it's another pathway to that sort of outcome.
Second, in this scenario, humanity might even increase the chances of such evolution, for example if we intentionally tried to bring it about out of curiosity.
I don't really know how this sort of thing would affect analyses, but it seems relevant.
You write "For probabilities of human extinction from each event I use the probabilities given by Toby Ord in The Precipice." But Ord was estimating existential risk, not extinction risk. (Using Ord's estimates in this way just for illustration purposes seems fine, but it seems worth noting that that's not what he was estimating the odds of.)
Hi Michael, thanks for this comment!
This is a really good point, and something I was briefly aware of when writing but did not take the time to consider fully. I've definitely conflated extinction risk with existential risk. I hope that, if everything I said is restricted just to extinction risk, the conclusion still holds.
A scenario where humanity establishes its own dystopia definitely seems comparable to the misaligned AGI scenario. Any "locked-in" totalitarian regime would probably prevent the evolution of other intelligent life. This could lead us to judge the risk posed by such dystopian scenarios as higher and to weight these risks more heavily.
I think the core points in your article work in relation to both extinction risk and existential risk. This is partly because extinction is one of the main types of existential catastrophe, and partly because some other existential catastrophes still theoretically allow for future evolution of intelligent life (just as some extinction scenarios would). So this doesn't undercut your post; I just wanted to raise the distinction, as I think it's valuable to have in mind.
This seems plausible. But it also seems plausible that there could be future evolution of other intelligent life in a scenario where humanity sticks around. One reason is that these non-extinction lock-ins don't have to look like jack-booted, horrible, power-hungry totalitarianism. The outcome could be idyllic in many senses, or at least be perceived that way by the humans involved, and yet irreversibly prevent us from achieving anything close to the best future possible.
For a random, very speculative example, I wouldn't be insanely shocked if humanity ends up deciding that allowing nature to run its course is extremely valuable, so we lock in some sort of arrangement where we act as caretakers and cause minimal disruption, preventing us from ever expanding through the stars but allowing for whatever evolution might happen on Earth. This could perhaps be a "desired dystopia" (if we could otherwise have done something far better), even if all the humans involved are happy and stay around for a very, very long time.
Thanks for the elaboration. I haven't given much thought to "desired dystopias" before, and they are really interesting to consider.
Another dystopian scenario to consider could be one in which humanity "strands" itself on Earth through resource depletion. This could also prevent future life from achieving a grand future.
I think that'd indeed probably prevent the evolution of other intelligent life on Earth, or prevent it from achieving a grand future. But at first glance, this looks to me more like a "premature extinction" scenario than a clear-cut "dystopia", because humanity would still be wiped out (when the Earth becomes uninhabitable) earlier than the point at which extinction is inevitable no matter what we do (perhaps that point would be the heat death of the universe).
But I'd also see it as fair enough if someone wanted to call that scenario more a "dystopia" than a standard "extinction event". And I don't think much turns on which label we choose, as long as we all know what we mean.
(By the way, I take the term "desired dystopia" from The Precipice.)