> Perhaps AGI safety will become associated with one side of the political aisle and the other side will adopt a stance of skepticism toward the risks of AGI. This is what happened with climate change and to some extent with the COVID-19 pandemic, so it could play out here as well.
This indeed seems a plausible risk, which warrants some attention. However, you also write:
> Similar to how pandemics were rarely discussed in partisan terms before COVID-19, the current non-partisan discussion of AI alignment seems unlikely to last.
In combination with its context, I interpret this sentence as claiming: "Climate change and COVID-19 became partisan issues; therefore, AI alignment is likely to also become a partisan issue." That seems to me a strange claim. We could also point to a huge number of issues that haven't become partisan. For example, I'm not aware of risks from earthquakes, asteroids, tsunamis, floods, bushfires, or cyclones becoming partisan issues.
Perhaps there's a reason that AI alignment is most analogous to a certain class of issues that have tended to become partisan, and less analogous to issues that remained non-partisan. One reason might be the involvement of large companies in AI. But I think we'd need to flesh out that argument (or a similar one), and canvass more examples, before concluding that AI alignment is likely to become a partisan issue; pointing to climate change and COVID-19 alone seems insufficient.
(To be clear, I'm not saying I'm confident no such argument could be made, and I'd be interested to see someone attempt to make it. Also, I haven't actually read the Seth Baum papers which this post links to.)