You know, as far as seeing ourselves on a path to doom goes, I don’t see why development of a superintelligent rogue AI isn’t treated like development of a superweapon.
Because distinguishing it from benign, non-superintelligent AI is really hard.
Say you are the FBI, and you find a big computer running some code. You can’t tell whether it’s a rogue superintelligence or the next DALL-E just by looking at the outputs; a rogue superintelligence will trick you until it’s too late. And once it has run at all on a computer that isn’t in a sandboxed bunker, it’s probably already too late. So you would have to notice people writing code and read that code before it’s ever run. But there are many smart people writing code all the time, and that code is often illegible spaghetti. Maybe the person writing it will know, or at least suspect, that it might become a rogue superintelligence. Maybe not.
Meanwhile, lots of computer scientists are in practice rushing to develop self-driving cars, the next GPT, and all sorts of other AI services. The economic incentive is strong.