re: “Short stories about AI futures”—you might be heartened to know that the Future of Life Institute has commissioned some additional examples of this genre via their “AI Worldbuilding Competition”! Contestants had to submit two 1000-word short stories describing an “aspirational” future where AI development goes well, plus artwork and short answers to questions about the detailed impacts of AI on different parts of society from 2022 to 2045. Here’s my summary of some of the top entries, and some extra detail on the motivation behind my own 2nd-place-winning submission (which is heavily themed around how prediction markets and other improvements to institutional decision-making could help mitigate AI risk). The Future of Life Institute plans to release podcast interviews about some of the winning worldbuilding scenarios, and I think they’re also trying to promote the stories in a few other ways.
But putting aside FLI’s project (and its focus on optimistic/utopian scenarios), I totally agree that a documentary-style movie trying to tell a realistic story of AI catastrophe could be very informative and very influential, following in the footsteps of the nuclear-war “documentaries” you mentioned. Take the framework of something like gwern’s “Clippy” story, add in some side-plots about how the relevant companies, government agencies, etc. behave in the lead-up to the disaster, and weave in a little bit of human drama. Done right, it could be an incredible tool for spreading important AI safety concepts in a responsible, high-bandwidth way.