FWIW your claim doesn’t contradict the main point here, which is that AI governance is a better option to prioritize. The OP says it’s because alignment is hard; you say it’s because alignment is the default; but both point to the same conclusion in this specific case.
While it does not contradict the main point in the post, I claim it does affect what type of governance work should be pursued. If AI alignment is very difficult, then it is probably most important to do governance work that helps ensure alignment actually gets solved: for example, ensuring that we have adequate mechanisms for delaying AI if we cannot be reasonably confident in the alignment of AI systems.
On the other hand, if AI alignment is very easy, then it is probably more important to do governance work that operates under that assumption. This could look like making sure that AIs are not misused by rogue actors, or making sure that AIs are not used in a way that makes a catastrophic war more likely.
Makes sense!