Could realistic depictions of catastrophic AI risks effectively reduce said risks?

Several pieces of fiction, for example the Terminator films, have popularised the idea that advances in AI could pose a serious threat to human civilisation. However, many people who are seriously concerned about issues such as the alignment problem complain that these depictions have been counterproductive: by giving people unrealistic images of the potential threats, they have made similar risks in real life seem implausible and caused the field of AI safety to be neglected. One way to try to solve this problem might be to create fiction presenting some of the biggest threats posed by advances in AI (e.g. unaligned superintelligence, a perpetual dystopia maintained using AI, etc.) in a very “realistic” manner. If stories could be written which reached a large audience and left a significant fraction of that audience believing that the threats presented were serious issues in the real world, then perhaps this could increase support for technical AI safety research or AI policy research.

Could such works successfully capture the imaginations of a large audience? This appears to me to be the biggest hurdle facing a project of this kind. On the one hand, there seems to be significant public appetite for fiction based on existential risks. At the extreme, Terminator 2 grossed about $520 million at the box office, which presumably means that tens of millions of people have seen the film and many more have been influenced by it. Similarly, Don’t Look Up recently gained a lot of traction by satirising indifference towards an impending catastrophe, widely interpreted as an allegory for climate change. On the other hand, the most famous examples are of course those which were the most commercially successful, and for every success there are presumably many attempts that few have heard of. Moreover, insisting that the plot be realistic may reduce the likelihood that the work reaches a large audience: it might be much harder to write a compelling and entertaining story with this restriction in place.

Even if fiction of this kind were consumed by a lot of people, would many of them come to take AI risks more seriously? It has been argued, for example, that the Terminator films do not in fact portray AI risks unrealistically, and that if viewers were left thinking it silly to be concerned about AI, this wasn’t because the films depicted the threats inaccurately. Even if that is true, it might be possible to carefully design a story which makes the issues clearer to the audience than the Terminator films and other mass media have managed to. This would, however, constrain the fiction even further, potentially making it less likely to reach a large audience.

Finally, even if someone created fiction which made a large number of people take AI risk more seriously, would this help much to reduce AI risk? I’m fairly confident the answer is yes: it would broaden the pool of talented people who might consider working to mitigate these threats, and it could otherwise increase support for relevant research. Whether this would make enough of a difference to justify the resources such a project would require is, of course, another question.

Overall, I think it is very unlikely that an endeavour to produce fiction realistically depicting AI risks would justify the associated opportunity cost: it would require a lot of time and money and, for the reasons above, would be unlikely to have much of an effect. However, I think there is a small chance that such a project could be very successful. I am writing this on the off chance that others have good ideas about whether and how this could actually be made to work, so I would welcome any comments.