Why I think the EA Community should write more fiction

What if The Precipice (Toby Ord’s book on existential risks) were a science fiction anthology?

This is just an idea that occurred to me recently, and I’d be interested to hear what people think. I’m not going to claim that writing EA-aligned fiction is the highest-impact thing you can do, but it does seem like a great way to build the movement and to explore and communicate long-termist ideas.

I’m sure a lot of existing work could already be described as EA fiction, but I’d rather keep that discussion to the comments.

A previous post, https://forum.effectivealtruism.org/posts/Bc8J5P938BmzBuL9Y/when-can-writing-fiction-change-the-world, explores the impact of fiction and is worth a read.

Why write fiction?

  • It’ll be enjoyable! I’ll be honest: I want this to exist so I can read it, and everything beyond this point in the post is just me trying to justify that opinion. The EA community doesn’t exist for my benefit, so keep my pro-fun bias in mind as you read on.

  • Any X-risk is a Science Fiction Premise: If you worry that the average person won’t be interested in nuclear war or the challenges of AI alignment, the success of the Terminator franchise demonstrates otherwise. The same goes for asteroid impact (Armageddon, Deep Impact), biorisk (Resident Evil, 28 Days Later) and catastrophic climate change (The Day After Tomorrow). You have to try really hard to make the end of the world seem boring, and I sometimes feel that most EA writing on the subject is actively trying to do that. Pivoting into science fiction seems pretty easy to me.

  • Communicating EA ideas to a wider audience: I can hear my audience screaming that the examples I’ve listed are scientifically inaccurate. Granted, I too am sceptical that the highest-impact intervention against asteroid impact is to train oil drillers as astronauts, or that zombies are the main biorisk we should concern ourselves with. Unfortunately, this is apparently where the average person is right now. I doubt we’ll change that with facts alone, but countering bad stories with good stories seems more likely to spread beyond the EA community than thoughtful articles on 80,000 Hours. I’m not expecting an EA blockbuster any time soon, but I’m optimistic that more realistic premises could still make for engaging stories.

  • Communicating ideas within the community: A lot of people say it’s hard for them to imagine a scenario in which an AI is given enough power to do serious damage without a human intervening, which is just asking for a speculative fiction treatment. I think the same is true of many other long-termist ideas, and a collection of short stories could get more people thinking about long-termist causes.

  • Doesn’t have to be long or high budget: I’m a big fan of the short story, because it gives enough room to fully explore an interesting idea in an accessible way, without either the writer or the reader having to commit to a series of novels or any high-budget special effects.

  • Captive Audience: If you already write, incorporating EA themes seems like a great way to get lots of people here to read your work, which is great for getting the external validation and/or scathing criticism you need.

  • Exploration of Morality: Everyone kind of thinks utilitarians are evil. The average person seems to think they’re all one bad day away from pushing people in front of trolleys and harvesting their organs. If you want a villain to do something terrible, utilitarian logic can supply a plausible-sounding motivation: Thanos from Infinity War and Ozymandias from Watchmen come to mind. One way to counter this would be to write more utilitarian protagonists, and I’d love to read a story in which the hero and villain resolve their dispute through a cost-benefit analysis rather than a fistfight. Fiction is a great way to explore different moral frameworks, especially when they conflict with one another.

Reasons not to write fiction

  • You don’t have the time: I’m sure a lot of you are very busy and have better things to do. Fair enough. I just hope that writing is a higher-impact use of free time than watching Netflix or feeling powerless dread in the face of political turmoil, which definitely occupies too much of my life.

  • You don’t want to: If the idea of writing EA themed fiction doesn’t seem interesting to you, it’s probably not a great personal fit.

  • You’re not good at it: If you’ve tried before and your writing is just really bad, maybe you can decide this isn’t your comparative advantage (that may be me). Then again, nobody’s great the first time; maybe all you need is practice and feedback.

  • Low fidelity communication of information: Getting EA ideas out there isn’t very useful if they’re wildly distorted, and fiction has a tendency to distort. I’m thinking of this more as a way to get people interested in, and engaged with, rather abstract cause areas, supplementing rather than replacing actual information and facts. Spreading inaccurate information about EA is a possible concern, although it may be less of a risk with an explicitly fictional work.

  • Nobody will take us seriously: We definitely shouldn’t become a weird creative writing club; fiction should never be a core part of a movement that’s actually trying to do something productive. However, I doubt that a few bad science fiction stories will really damage the movement: we’re already in the realm of science fiction, and people already think we’re weird. While sci-fi used to be trashy pulp for children, I think the genre is now taken seriously enough by the mainstream that we need not fear guilt by association.

  • What about information hazards? I’m not convinced by this one, but you might worry that fiction will inspire reality, and that somebody will be inspired to create a bioweapon by your story about one. While I can understand taking reasonable precautions (e.g. don’t provide exact details of molecular cloning), this seems implausible to me. If we want governments to do more about biorisk, not telling anyone about it probably isn’t going to work (more here: https://forum.effectivealtruism.org/posts/ixeo9swGQTbYtLhji/bioinfohazards-1). If anything, this seems like a great excuse to embrace scientific inaccuracy and speculation and focus on broad ideas, rather than sticking closely to what’s possible with current technology.