I think this is an interesting post. I don’t agree with the conclusion, but I think it’s a discussion worth having. In fact, I suspect this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts, and I could have added a lot more nuance.
Edit: I just noticed that your title includes the word “sentient”. Hence, my second perspective is no longer as applicable. My own take, which I offer at the end, seems to hold up nonetheless.
If we develop an ASI that exterminates humans, it will likely also exterminate all other species that might exist in the universe.
Even if one subscribes to utilitarianism, it is not at all clear that an ASI would be able to experience joy or happiness, or that it would be able to create them. Sure, it can accomplish objectives, but there is a strong argument that accomplishing those objectives won’t accomplish any utilitarian goals. Where is the positive utility here? And, even more importantly, how should we frame positive utility in this context?
I think a big reason not to buy your argument is that humans are a lot more predictable than an ASI. We know how to work together (at least a bit), and we have managed to improve the world considerably over the last few centuries. Many people dedicate their lives to helping others (such as this lovely community), especially the higher they sit on Maslow’s hierarchy. Sure, we humans have many flaws, but it seems a lot more plausible to me that we will be able to accomplish full-scale cosmic colonisation that actually maximises positive utility, provided we don’t go extinct in the process. On the other hand, we don’t even know whether an ASI could create positive utility, let alone experience it.
According to the “settlement” version of “Dissolving the Fermi Paradox”, it seems fairly likely that the expected number of other civilizations in the universe is less than one.
Thus exterminating other alien civilizations seems like an equally worthwhile price to pay.