I agree it’s important to think about the perceived opportunity cost as well, and that’s a large part of why I’m uncertain. I probably should have said that in the post.
I’d still guess that overall the increased clarity on risks will be the bigger factor—it seems to me that risk aversion is a much larger driver of policy than worries about economic opportunity cost (see e.g. COVID lockdowns). I would be more worried about powerful AI systems being seen as integral to national security; my understanding is that national security concerns drive a lot of policy. (But this could potentially be overcome with international agreements.)