Because their leaders are openly enthusiastic about AI regulation, saying things like “better that the standard is set by American companies that can work with our government to shape these models on important issues” or “we need a referee”, rather than arguing that their tech is too far from AGI to need any regulation, or that the risks of AI are greatly exaggerated, as you might expect if they saw AI safety lobbying as a threat rather than an opportunity.
Sure, but there are many alternative explanations:
There is internal and external pressure to avoid downplaying AI safety.
Regulation is inevitable, so it is better to ensure that you can at least influence it somewhat. Fighting regulation outright might go poorly for you.
The leaders care at least somewhat about AI safety, whether out of altruism or self-interest. (Or at least they aren’t so relentlessly manipulative that they choose every word to maximize their power.)
I don’t disagree that these are also factors. But if tech leaders are openly stating that they want regulation to happen and that they want to guide the regulators, I think it’s accurate to say they’re currently more motivated to achieve regulatory capture (for whatever reason) than to ensure that x-risk concerns don’t become a powerful political argument, as the OP suggested. That was the fairly modest claim I made.
(Obviously, far more explicit and cynical claims about, say, Sam Altman’s intentions in founding OpenAI exist, but my point doesn’t rest on them.)