Cool initiative! I’m worried about a failure mode where these just stay in the EA blogosphere and don’t reach the target audiences who we’d most like to engage with these ideas (either because they’re written with language and style that isn’t well-received elsewhere, or no active effort is made to share them to people who may be receptive).
Do you share this concern, and if so, do you have a sense of how to mitigate it?
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of the Future of Life Institute’s AI Worldbuilding contest (here), I agree that the default outcome is that even the winning stories don’t get a huge amount of circulation. The real impact would come from writing the one story that actually does go viral beyond the EA community. But that seems pretty hard to do; it might be better to pick something that has already gone viral (an existing story like one of the Yudkowsky essays, or a very popular tweet expanded into a story) and improve its presentation: polishing the prose, adding illustrations, or porting it to other mediums like video or audio.
That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA. (Suggestions welcome for things we might consider adapting!)
Could also be a cool mini-project for somebody: go through the archive of existing rationalist/EA stories and try to spruce them up with Midjourney-style AI artwork. You might even be able to create some passable, relatively low-effort YouTube videos just by doing a dramatic reading of a story and matching it up with panning imagery of Midjourney or stock art.
On the other hand, writing stories is fun, and a $3000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story! I guess my concrete advice would be to put more emphasis on starting from a seed of something that’s already shown some viral potential (like a popular tweet making a point about AI safety, or a fanfic-style spinoff of a well-known story tweaked to contain an AI-relevant lesson, etc.).
Absolutely. Part of the hope is that, if we can gather a good collection of stories, we can find ways to promote and publish some or all of them, whether through traditional publishing, audiobooks, or YouTube animated stories.
I would hope that, given it’s a “fable” writing contest, almost by default these stories would be completely accessible to most of the general public, like the Yudkowsky classic “Sorting Pebbles Into Correct Heaps”, or likely even less nerdy than that. But OP can clarify!
I hadn’t read this one before, but honestly I’m not sure it counts as “completely accessible to the general public”. I’d expect an accessible fable to be less rife with assumed concepts about the key topics than this one is (e.g., don’t say “utility maximizer”; instead present a situation that naturally makes clear what a utility maximizer is).
Good point!