I agree that Effective Altruism and the existential risk prevention movement are not the same thing. Let me use this as an opportunity to trot out my Venn diagrams again. The point is that these communities and ideas overlap but don't necessarily imply each other: you don't have to accept all of them just because you accept one, and there are good people doing good work in all the segments.
Cool diagram! I would suggest rephrasing the Longtermism description to say "We should focus directly on future generations." As it stands, it implies that people work on animal welfare and global poverty only because of moral positions, rather than because of concerns about tractability, etc.
"there are good people doing good work in all the segments."
Who do you think is in the ‘Longterm+EA’ and ‘Xrisk+EA’ buckets? As far as I know, even though they may have produced some pieces about those intersections, both Carl and Holden are in the center, and I doubt Will denies that humanity could go extinct or lose its potential either.
Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk, construed broadly, is a bit murky and potentially includes these (depending on how much of the reduction in quality threatens "humanity's potential").
Yep, totally fair point: my examples were about pieces. However, note that the quote you pulled out referred to 'good work in the segments' (though this is quite a squirmy, lawyerly point for me to make). Also, interestingly, 2019-era Will was a bit more skeptical of xrisk, or at least wrote a piece exploring that view.
I'm a bit wary of naming specific people whose views I know personally but who haven't expressed them publicly, so I'll just give some orgs that mostly work in those two segments, if you don't mind:
'Long-term + EA': the APPG for Future Generations does a lot of work here, and I'd add Tyler John's work (here & here) and plausibly Beckstead's thesis.
'Xrisk + EA': my impression is that some of the more normie groups OpenPhil has funded are here, working with the EA community on xrisk topics but not necessarily buying longtermism.