There are a few things to consider:

1. One of the best ways to prevent the creation of a misaligned, “unfriendly” AGI (or to limit its power if it is created) is to build an aligned, “friendly” AGI first.

2. Similarly, an engineered biological superintelligence could prevent the creation of a misaligned AGI, or provide protection from one.

3. The alignment problem might turn out to be much easier than the biggest pessimists currently believe. It isn’t self-evident that alignment is extremely hard, and many of the arguments that it is are highly theoretical rather than grounded in empirical evidence. GPT-4, for example, seems to be aligned and “friendly”.

4. “Friendly” AGI could mitigate all sorts of other global catastrophic risks, like asteroids and pandemics. It could also help end factory farming (quite arguably a global catastrophe in its own right) by accelerating the kind of research New Harvest funds. On top of that, it could help end global poverty, another global catastrophe, by accelerating global economic growth.

5. Pausing or stopping AI development globally might be impossible or nearly impossible. It certainly seems extremely hard.

6. Even if a global ban on AI development could be achieved and enforced, it would create a situation where the least conscientious and most dangerous actors (those willing to violate international law) would be the most likely to create AGI. This would perversely increase existential risk.
You have certainly given me some wonderful food for thought!
To me, (5) and (6) seem like the most relevant points here. If AI development can’t realistically be stopped (or if the chance of stopping it is so low that it isn’t worth the effort), then you’re right that, paradoxically, bans on AI development can increase x-risk by “driving it underground.”
(2) is also intriguing to me. A biological, engineered superintelligence (especially one with an organic substrate) is a very interesting concept, but it seems so far off technologically that it may as well be sci-fi. It also raises a lot of ethical questions for me, since its development would probably involve great harm to animal subjects, who have an interest in not suffering.
Further away and more sci-fi than AGI?