I have the impression that one of the reasons for the focus on technical AI safety is that once you succeed in aligning an AI, you expect it to perform a pivotal act, e.g. burning out all the GPUs on Earth. To achieve such a pivotal act, it seems that going through AI governance is not really necessary?
But yes, it does seem to be a bit of a stretch.
You're right that those scenarios aren't impossible, but governance doesn't have to be an end goal; it can be a process. Even helping to govern current AI efforts will shape the field, much as regulation has shaped the nuclear field.