Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:
I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future could involve a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to serve as a gentler introduction to s-risks. The goal is to prevent people from identifying “longtermism” with just extinction risk reduction.
Yeah, this is definitely true, but completely omitting such a distinctively EA concept as s-risks seems to suggest that something needs to be changed.
I think the reading I listed entitled “Common Ground for Longtermists” should address this worry, but perhaps we could add more. I tend to think that the potential for antagonism is outweighed by the value of broader thinking, but your worry is worth addressing.
Ah sorry, I hadn’t seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:
The dichotomy between x-risk reduction and s-risk reduction seems off to me. As I understand them, prominent definitions of x-risks [1][2][3] (especially the more thorough/careful discussion in [3]) are all broad enough for s-risks to count as x-risks (especially if we’re talking about permanent / locked-in s-risks, which I assume we are, given the context of longtermism).
One worry is that the proposed list might be overcorrecting: with half of its content coming from CFR, it seems to suggest that about half of longtermists endorse prioritizing s-risk reduction, which is a large overestimate.
As you say, we want to discourage uncritical acceptance of views presented in the syllabus, so it seems good for such a list to include criticisms of both approaches to improving the long-term future, at least in recommended readings. (Yup, the current syllabus is also light on those, although week 7 does include criticisms of longtermism.)
“completely omitting such a distinctively EA concept as s-risks seems to suggest that something needs to be changed.”
I’m not sure about that. The intro program omits plenty of distinctively EA concepts due to time/attention constraints—here are some other prominent ideas in EA that (if I remember correctly) are currently omitted from the core/required readings of the introductory program: consequentialism, cause X, patient longtermism, wild animal suffering, EA movement-building, improving institutional decision making, decision theory, the unilateralist’s curse, moral uncertainty & cooperation, Bayesian reasoning, forecasting, history of well-being, cluelessness, mental welfare, and global priorities research.
A bunch of these (and s-risk reduction) are covered in depth in (some versions of) the in-depth EA program.
(Like I mentioned earlier, I’m pretty open to there being some discussion of s-risks in the intro syllabus. Mostly wondering about the degree to which it should be covered.)
Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.
I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead just referring to extinction risks. It’s probably too nuanced for an intro syllabus, but MichaelA’s post (https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering) could help people to better understand the space of possible problems.