You have certainly given me some wonderful food for thought!
To me, (5) and (6) seem like the most relevant points here. If AI development can't realistically be stopped (or if the chance of stopping it is so low that it isn't worth the effort), then you're right that, paradoxically, bans on AI development can increase x-risk by "driving it underground."
(2) is also intriguing to me. A biological, engineered superintelligence (especially one with an organic substrate) is a very interesting concept, but it seems so technologically distant that it may as well be sci-fi. It also raises a lot of ethical questions for me, since its development would probably involve great harm to animal subjects, who have an interest in not suffering.
Further away and more sci-fi than AGI?