AGI is something that several well-funded companies are already trying hard to build. I don’t think that was ever true of human cloning (though I could be wrong).
Eugenics was quite popular in polite society, at least until the Nazis came along.
The underlying tech that allows you to build AGI is shared by other things that don’t seem to have any taboos at all. For example, GPUs are needed for video games. The taboo would need to be strong enough that we’d need to also ban a ton of other things that people currently think are fine.
You only need to ban huge concentrations of GPUs, at least initially. By the time training-run FLOP limits have to be lowered far enough (to keep pace with algorithmic improvements) to affect individuals, we will probably have arrested further hardware development as a countermeasure. So individual consumers would not be impacted for a long time (plenty of time for a taboo to settle into acceptance of a reduced personal compute allowance).
AGI is just software, and seems harder to build a taboo around compared to human cloning. I don’t think many people have a disgust reaction to GPT-4, for example.
They might, once multimodal foundation models are controlling robots that can do their jobs (in a year or two’s time?).
Finally, I doubt there will ever be a complete global consensus that AI is existentially unsafe, since the arguments are speculative, and even unaligned AI will appear “aligned” in the short term if only to trick us.
Yes, this is a massive problem. It’s like asking for a global lockdown to prevent Covid spread in December 2019, before the bodies started piling up. Let’s hope it doesn’t come to needing a “warning shot” (global catastrophe with many casualties) before we get the necessary regulation of AI. Especially since we may well not get one and instead face unstoppable extinction.