Executive summary: A study of 1,200 Americans found that communicating AI existential risk with a fearful tone raises risk perceptions the most, while a more hopeful appeal creates greater support for AI regulation.
Key points:
Fear appeals significantly increased perceived risk levels of AI existential and catastrophic risk compared to control.
Hope and mixed emotional appeals also raised risk perception, but less so than pure fear appeals.
All emotional appeals increased support for AI regulation over control, with hope having the greatest effect by also boosting perceived regulation efficacy.
Fear had the clearest positive impact on support for pausing AI development for 6 months.
Both fear and hope appeals made people more likely to seek more info on AI risks and discuss them compared to control.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.