I think the core points in your article work in relation to both extinction risk and existential risk. This is partly because extinction is one of the main types of existential catastrophe, and partly because some other existential catastrophes still theoretically allow for future evolution of intelligent life (just as some extinction scenarios would). So this doesn’t undercut your post—I just wanted to raise the distinction as I think it’s valuable to have in mind.
A scenario where humanity establishes its own dystopia definitely seems comparable to the misaligned AGI scenario. Any “locked-in” totalitarian regime would probably prevent the evolution of other intelligent life. This could lead us to raise our estimate of the risk posed by such dystopian scenarios and to weigh those risks more heavily.
This seems plausible. But it also seems plausible that other intelligent life could still evolve in a scenario where humanity sticks around. One reason is that these non-extinction lock-ins don’t have to look like jack-booted, power-hungry totalitarianism. A lock-in could be idyllic in many senses, or at least perceived that way by the humans involved, and yet irreversibly prevent us from achieving anything close to the best future possible.
For a random, very speculative example: I wouldn’t be hugely shocked if humanity ends up deciding that allowing nature to run its course is extremely valuable, so we lock in some arrangement in which we act as caretakers and cause minimal disruption. That would prevent us from ever expanding through the stars, but would allow for whatever evolution might happen on Earth. This could perhaps be a “desired dystopia” (if we could otherwise have done something far better), even if all the humans involved are happy and stick around for a very, very long time.
Thanks for the elaboration. I hadn’t given much thought to “desired dystopias” before, and they’re really interesting to consider.
Another dystopian scenario to consider could be one in which humanity “strands” itself on Earth through resource depletion. This could also prevent future life from achieving a grand future.
I think that would indeed probably prevent the evolution of other intelligent life on Earth, or prevent it from achieving a grand future. But at first glance, this looks to me like a “premature extinction” scenario rather than a clear-cut “dystopia”. This is because humanity would still be wiped out (when the Earth becomes uninhabitable) earlier than the point at which extinction is inevitable no matter what we do (perhaps that point would be the heat death of the universe).
But I’d also see it as fair enough if someone wanted to call that scenario more a “dystopia” than a standard “extinction event”. And I don’t think much turns on which label we choose, as long as we all know what we mean.
(By the way, I take the term “desired dystopia” from The Precipice.)