Assuming the world accepts that the reason for the pause is that the default outcome of AGI is extinction, this wouldn’t be necessary. A strong enough taboo would emerge around AGI development. How many human clones have ever been born in our current (non-police-state) world?
This won’t address all the arguments in your comment, but I have a few things to say in response to this point.
I agree it’s possible that we could just get a very long taboo on AI and halt its development for many decades without a world government to enforce the ban. That doesn’t seem out of the question.
However, it also doesn’t seem probable to me. Here are my reasons:
AGI is something that several well-funded companies are already trying hard to build. I don’t think that was ever true of human cloning (though I could be wrong).
I looked it up and my impression is that it might cost tens of millions of dollars to clone a single human, whereas in the post I argued that AGI will eventually be possible to train for only about $1 million. More importantly, after that you don’t need to train the AI again: you can just copy the AGI to other hardware. Therefore, it seems you might really only need one rich person to do it once to get the benefits (see the cost sketch after this list). That seems like a much lower threshold than human cloning, although I don’t know all the details.
The payoff for building (aligned) AGI is probably much greater than human cloning, and it also comes much sooner.
The underlying tech that allows you to build AGI is shared by other things that don’t seem to have any taboos at all. For example, GPUs are needed for video games. The taboo would need to be strong enough to justify also banning a ton of other things that people currently think are fine.
AGI is just software, and seems harder to build a taboo around compared to human cloning. I don’t think many people have a disgust reaction to GPT-4, for example.
Finally, I doubt there will ever be a complete global consensus that AI is existentially unsafe, since the arguments are speculative, and even unaligned AI will appear “aligned” in the short term if only to trick us. The idea that unaligned AIs might fool us is widely conceded among AI safety researchers, and so I suspect you agree too.
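To make the cost comparison in the second point concrete, here is a minimal sketch of the amortization arithmetic. Every figure is a rough estimate from the discussion above (and the copy cost is a purely hypothetical placeholder), not measured data:

```python
# Illustrative cost comparison: cloning a human vs. training-then-copying AGI.
# Every figure here is a rough assumption from the discussion above, not data.

CLONE_COST = 30e6   # assumed: tens of millions of dollars per cloned human
TRAIN_COST = 1e6    # assumed: one-time AGI training cost argued for in the post
COPY_COST = 1e3     # hypothetical: marginal cost of copying weights to new hardware

def avg_cost_per_instance(n: int) -> tuple[float, float]:
    """Average cost per instance for n clones vs. n AGI copies."""
    cloning = CLONE_COST                            # each clone pays full price
    agi = (TRAIN_COST + (n - 1) * COPY_COST) / n    # training cost amortizes away
    return cloning, agi

for n in (1, 10, 1000):
    cloning, agi = avg_cost_per_instance(n)
    print(f"n={n:>4}: cloning ${cloning:,.0f}/instance, AGI ${agi:,.0f}/instance")
```

The point of the sketch is just that cloning costs scale linearly with the number of instances, while the AGI training cost is paid once and amortizes toward the copy cost.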
AGI is something that several well-funded companies are already trying hard to build. I don’t think that was ever true of human cloning (though I could be wrong).
Eugenics was quite popular in polite society, at least until the Nazis came along.
The underlying tech that allows you to build AGI is shared by other things that don’t seem to have any taboos at all. For example, GPUs are needed for video games. The taboo would need to be strong enough to justify also banning a ton of other things that people currently think are fine.
You only need to ban huge concentrations of GPUs, at least initially. By the time algorithmic improvements force training-run FLOP limits low enough to affect individuals, we will probably have arrested further hardware development to compensate. So individual consumers would not be impacted for a long time (plenty of time for a taboo to settle into acceptance of a reduced personal compute allowance).
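As a rough illustration of how such a concentration limit could be checked, here is a sketch that estimates a training run’s total FLOP and compares it to a cap. The per-GPU throughput, utilization factor, and the cap itself are illustrative assumptions, not proposed policy numbers:

```python
# Sketch: estimating whether a training run exceeds a hypothetical compute cap.
# Hardware throughput, utilization, and the cap are illustrative assumptions.

GPU_PEAK_FLOPS = 1e15   # assumed: ~1 PFLOP/s peak per accelerator at low precision
UTILIZATION = 0.4       # assumed: fraction of peak actually sustained in training
FLOP_CAP = 1e26         # hypothetical cap on total training compute

def training_flop(num_gpus: int, days: float) -> float:
    """Total FLOP ~= GPUs x sustained FLOP/s x wall-clock seconds."""
    return num_gpus * GPU_PEAK_FLOPS * UTILIZATION * days * 86_400

run = training_flop(num_gpus=10_000, days=90)
print(f"Estimated run: {run:.2e} FLOP; exceeds cap: {run > FLOP_CAP}")
```

The design point is that a cap like this only bites on large clusters running for months, which is why individual consumers would be untouched at first.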
AGI is just software, and seems harder to build a taboo around compared to human cloning. I don’t think many people have a disgust reaction to GPT-4, for example.
They might, once multimodal foundation models are controlling robots that can do their jobs (in a year or two’s time?).
Finally, I doubt there will ever be a complete global consensus that AI is existentially unsafe, since the arguments are speculative, and even unaligned AI will appear “aligned” in the short term if only to trick us.
Yes, this is a massive problem. It’s like asking for a global lockdown to prevent Covid spread in December 2019, before the bodies started piling up. Let’s hope it doesn’t come to needing a “warning shot” (global catastrophe with many casualties) before we get the necessary regulation of AI. Especially since we may well not get one and instead face unstoppable extinction.