I agree that alignment research would suffer during a pause, but I’ve been wondering recently how much of an issue that is. The key point is that capabilities research would also be paused, so it’s not like AI capabilities would be racing ahead of our knowledge on how to control ever more powerful systems. You’d simply be delaying both capabilities and alignment progress.
You might then ask—what’s the point of a pause if alignment research stops? Isn’t the whole point of a pause to figure out alignment?
I’m not sure that’s the whole point of a pause. A pause can also give us time to figure out optimal governance structures, whether that be standards, regulations, etc. These structures can be very important in reducing x-risk. Even if the U.S. were the only country to pause, that would still buy us more time, because the U.S. is currently in the lead.
I realise you make other points against a pause (which I think might be valid), but I would welcome thoughts on the ‘having more time for governance’ point specifically.