I agree with 2. Not sure about 3 as I haven’t reviewed the Introductory fellowship in depth myself.
But on 1, I want to briefly make the case that s-risks don’t have to be/seem much weirder than extinction risk work. I’ve sometimes framed it as: the future is vast and it could be very good or very bad. So we probably want both to try to preserve it for the good stuff and to improve its quality. (Although perhaps CLR et al. don’t actually agree with the preserving bit and just don’t vocally object to it, for coordination reasons etc.)
There are also ways it can seem less weird. E.g. you don’t have to make complex arguments about ensuring that a thing which hasn’t happened yet continues to happen, or about missed potential; you can just say: “here’s a potential bad thing. We should stop that!!” See https://forum.effectivealtruism.org/posts/seoWmmoaiXTJCiX5h/the-psychology-of-population-ethics for evidence that people, on average, weigh (future/possible) suffering more than happiness.
Also consider that one way of looking at moral circle expansion (one method of reducing s-risks) is that it’s basically just what many social justicey types are focusing on anyway—increasing protection and consideration of marginalised groups. It just takes it further.
Thanks! Yeah, I think you’re right; that + Sean’s specific reading suggestions seem like reasonably intuitive introductions to s-risks. Do you think there are similarly approachable introductions to specific s-risks, for when people ask “OK, I’m into this broad idea—what specific things could I work on?” (Or maybe this isn’t critical—maybe people are oddly receptive to weird ideas if they’ve had good first impressions.)
Well, I think moral circle expansion is a good example. You could introduce s-risks as a general class of things, and then talk about moral circle expansion as a specific example. If you don’t have much time, you can keep it general and talk about future sentient beings; if animals have already been discussed, mention the idea that if factory farming or something similar were spread to astronomical scales, that could be very bad. If you’ve already talked about risks from AI, I think you could reasonably discuss some content about artificial sentience without that seeming like too much of a stretch. My current guess is that focusing on detailed simulations as an example is a nice balance between (1) intuitive / easy to imagine and (2) the sorts of beings we’re most concerned about. But I’m not confident in that, and Sentience Institute is planning a survey for October that will give a little insight into which sorts of future scenarios and entities people are most concerned about. If by “introductions” you’re looking for specific resource recommendations, there are short videos, podcasts, and academic articles, depending on the desired length, format, etc.
Some of the specifics might be technical, confusing, or esoteric, but if you’ve already discussed AI safety, you could quite easily discuss the concept of focusing on worst-case / “fail-safe” AI safety measures as a promising area. It’s also nice because it overlaps more with extinction risk reduction work (as far as I can tell) and seems like a more tractable goal than preventing extinction via AI or achieving highly aligned transformative AI.
A second example (after MCE) that benefits from being quite close to things that many people already care about is the area of reducing risks from political polarisation. I guess that explaining the link to s-risks might not be that quick, though. Here’s a short writeup on this topic, and I know that Magnus Vinding of the Center for Reducing Suffering is publishing a book soon called Reasoned Politics, which I imagine includes some content on this. It’s all a bit early-stage though, so I probably wouldn’t pick this one at the moment.