Thanks! Yeah, I think you’re right; that + Sean’s specific reading suggestions seem like reasonably intuitive introductions to s-risks. Do you think there are similarly approachable introductions to specific s-risks, for when people ask “OK, I’m into this broad idea—what specific things could I work on?” (Or maybe this isn’t critical—maybe people are oddly receptive to weird ideas if they’ve had good first impressions.)
Well, I think moral circle expansion is a good example. You could introduce s-risks as a general class of things, and then talk about moral circle expansion as a specific example. If you don’t have much time, you can keep it general and talk about future sentient beings; if animals have already been discussed, mention the idea that if factory farming or something similar were spread to astronomical scales, that could be very bad. If you’ve already talked about risks from AI, I think you could reasonably discuss some content about artificial sentience without that seeming like too much of a stretch. My current guess is that focusing on detailed simulations as an example strikes a nice balance between (1) being intuitive and easy to imagine and (2) covering the sorts of beings we’re most concerned about. But I’m not confident in that, and Sentience Institute is planning a survey for October that will give a little insight into which sorts of future scenarios and entities people are most concerned about.

If by “introductions” you’re looking for specific resource recommendations, there are short videos, podcasts, and academic articles, depending on the desired length, format, etc.
Some of the specifics might be technical, confusing, or esoteric, but if you’ve already discussed AI safety, you could quite easily introduce the concept of focusing on worst-case / “fail-safe” AI safety measures as a promising area. It’s also appealing because (as far as I can tell) it overlaps more with extinction risk reduction work than other s-risk interventions do, and it seems like a more tractable goal than preventing extinction via AI or achieving highly aligned transformative AI.
A second example (after MCE) that benefits from being quite close to things that many people already care about is reducing risks from political polarisation. I guess that explaining the link to s-risks might not be that quick, though. Here’s a short writeup on this topic, and I know that Magnus Vinding of the Center for Reducing Suffering is publishing a book soon called Reasoned Politics, which I imagine includes some content on this. It’s all at a fairly early stage though, so I probably wouldn’t pick this one at the moment.