Executive summary: Some current AI governance advocacy strategies may be counterproductive for preventing AI existential risk. Advocates should ensure their arguments directly address AI safety concerns rather than relying on indirect tactics.
Key points:
Advocating for AI regulation without clearly explaining x-risk concerns can lead to ineffective policies that don’t prevent catastrophe.
Portraying AI capabilities as threats could incentivize governments to invest in dangerous AI races.
Overstating AI threats without expertise can undermine an advocate’s credibility on addressing real risks.
Advocates should directly explain the x-risk problem and propose solutions tailored to it.
Slowing AI progress is an insufficient goal; the aim should be preventing existential catastrophe.
Arguments for regulation should be honest, not tactical, and consider potential pitfalls via premortems.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Back in January, Michael Cohen talked at the House of Commons about the possibility of AI killing everyone. At this point, when policymakers want to understand the problem and turn to you, downplaying x-risk doesn’t make them listen to you more; it makes them less worried and more dismissive. I think a lot of AI governance people/think tanks haven’t updated on this.