This is a great post; I’ll try to change the way I talk about AI risk in the future to follow these tips.
I am reminded of blogger Dynomight’s interesting story about how he initially got a bunch of really hostile reactions to a post about ultrasonic humidifiers & air quality, but when he lightly reframed things in a more conventional tone the hostility disappeared, even though the message and the vast majority of the content were the same:
> Previously my approach was to sort of tackle the reader and scream “HUMIDIFIERS → PARTICLES! [citation] [citation] [citation] [citation]” and “PARTICLES → DEATH! [citation] [citation] [citation]”. I changed it to start by conceding that ultrasonic humidifiers don’t always make particles and it’s not certain those particular particles cause harm, et cetera, but PEER-REVIEWED RESEARCH PAPERS say these things are possible, so it’s worth thinking about.
>
> After making those changes, no one had the same reaction anymore.
In his case, the solution was to add some friendly caveats—personally I think we do this plenty, at least in the semi-formal writing style of most EA Forum posts! But the logic of building “up” from real-world details and extrapolation, rather than building “down” from visions of AI apocalypse (which probably sounds to most people like attempting to justify an arbitrary sci-fi scenario), might be an equally powerful tool for talking about AI risk.