Your epistemic maps seem like a useful idea, since they would make it easier to visualize which cause areas are most important to push on. Alexey Turchin has created a number of roadmaps related to existential risks and AI safety, which seem similar to what you're talking about creating. You should consider making an epistemic map of S-risks, or risks of astronomical suffering. Tobias Baumann and Brian Tomasik have written a number of articles on S-risks, which might help you get started. I also found this LessWrong article on worse-than-death scenarios, which breaks down some of their possible sources and possible ways to prevent them. S-risks are a highly neglected cause area, since longtermist/AI safety research generally focuses on reducing extinction risks and preserving human values rather than averting worse-than-death scenarios. The Center on Long-Term Risk and the Center for Reducing Suffering have done significant research on S-risk prevention, which might be useful if you want to know the most promising research areas for reducing S-risks.
Thanks for the suggestion and links, I’ll be looking further into those! Is there some kind of specific question within the S-risk literature that you think would be good to focus on?