Some criticism of the EA Virtual Programs introductory fellowship syllabus:
I was recently looking through the EA Virtual Programs introductory fellowship syllabus. I was disappointed to see zero mention of s-risks or the possible relevance of animal advocacy to longtermism in the sections on longtermism and existential risk.
I understand that mainstream EA is largely classical utilitarian in practice (even if it recognizes moral uncertainty in principle), but it seems irresponsible not to expose people to these ideas even by the lights of classical utilitarianism.
What explains this omission? A few possibilities:
Possibility 1: The people who created the fellowship syllabus aren’t very familiar with s-risks or the possible relevance of animal advocacy to longtermism.
This seems plausible to me. I think founder effects heavily influence EA, and the big figures in mainstream EA don’t seem to discuss these ideas very much.
Possibility 2: These topics seem too weird for an introductory fellowship.
It’s true that a lot of s-risk scenarios are weird. But there is always a trade-off between mainstream palatability and potential for impact, and the inclusion of x-risks shows that we are willing to make this trade-off when the ideas being discussed are important enough. To justify excluding s-risks, their weirdness-to-impact ratio would have to be much higher than that of x-risks. This might be true of particular s-risk scenarios, but even so, a general discussion of future suffering need not reference these weirder scenarios. It could also make sense to include s-risks as optional reading (so as to avoid turning off people who are less open-minded).
The possible relevance of animal advocacy to longtermism does not strike me as any weirder than the discussion of factory farming, and the omission of this material makes longtermism seem very anthropocentric. (I think we could also improve on this by referring to the long-term future using terms like “The Future of Life” rather than “The Future of Humanity.”)
More generally, I think that the EA community could do a much better job of communicating the core premise of longtermism without committing itself too strongly to particular ethical views (e.g., classical utilitarianism) or empirical views (e.g., that animals won’t exist in large numbers in the future and thus are irrelevant to longtermism). I see many of my peers simply deferring to the values supported by organizations like 80,000 Hours without reflecting much on their own positions, which strikes me as quite problematic. The failure to include a broader range of ideas and topics in introductory fellowships only exacerbates this problem of groupthink.
[Note: it’s quite possible that the syllabus is not completely finished at this point, so perhaps these issues will be addressed. But I think these complaints apply more generally, so I felt like posting this.]
This seems fixable by sending an email to whoever is organizing the syllabus, possibly after writing a short syllabus on s-risks yourself, or by finding one that has already been written.
Yeah, I have been in touch with them. Thanks!