Great piece, Zach!

Notably, extinction is narrower than “existential risk,” as I understand it (because it does not include permanent disempowerment or population decreases), while takeover is broader (because it could theoretically increase human potential).
There are many concepts of existential risk, but I like to think of it as the risk of a major reduction in the expected value of the future. Under this definition and an impartial welfarist view[1], human extinction would not necessarily be an existential catastrophe. Humans, arguably the species with the most control over the future, have contributed to the extinction of many species without causing any meaningful existential risk in the process. So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess the existential risk involved would be negligible on priors?
Some more related thoughts:
I feel like greater intelligence/rationality/power is correlated (far from perfectly!) with greater ability to contribute to a better world. Humans are often considered the most intelligent/rational/powerful species, and the one with the greatest ability to contribute to a better world. This suggests advanced AI could have an even greater ability to do so.
If humans were to deliberately cause the extinction of another species of great apes (humans themselves being great apes):
I assume this would increase the welfare of humans or other species.
I suppose there would be an effort to minimise suffering, for example by using contraceptive methods or by killing the individuals as painlessly as possible.
If advanced AI were to deliberately cause human extinction, I think humans would be pretty likely to suffer very little in the process.
For instance, humans could be persuaded to have ever fewer children while being provided with material abundance from a booming AI economy. There would be no need to go against human will, given the availability of superhuman persuasion techniques.
If AIs were to kill all humans, I assume they would rely on a method not involving mass suffering, because I expect advanced AIs to be more altruistic than humans.
[1] I strongly endorse expected total hedonistic utilitarianism.