> Perhaps AGI safety will become associated with one side of the political aisle and the other side will adopt a stance of skepticism toward the risks of AGI. This is what happened with climate change and to some extent with the COVID-19 pandemic, so it could play out here as well.
This indeed seems a plausible risk, which warrants some attention. However, you also write:
> Similar to how pandemics were rarely discussed in partisan terms before COVID-19, the current non-partisan discussion of AI alignment seems unlikely to last.
In combination with its context, I interpret this sentence as claiming: “Climate change and COVID-19 became partisan issues; therefore, AI alignment is likely to also become a partisan issue.” That seems to me a strange claim, because we could also point to a huge number of risks that haven’t become partisan. For example, I’m not aware of risks from earthquakes, asteroids, tsunamis, floods, bushfires, or cyclones becoming partisan issues.
Perhaps there’s a reason that AI alignment is most analogous to a certain class of issues that have tended to become partisan, and less analogous to issues that remained non-partisan. One reason might be the involvement of large companies in AI. But I think we’d need to flesh that or a similar argument out more, and canvass more examples, to conclude that it’s likely that AI alignment will become a partisan issue—and I think pointing to climate change and COVID-19 alone is insufficient.
(To be clear, I’m not saying I’m confident no such argument could be made, and I’d be interested to see someone attempt to make it. Also, I haven’t actually read the Seth Baum papers to which this post links.)