there are good people doing good work in all the segments.
Who do you think is in the ‘Longterm+EA’ and ‘Xrisk+EA’ buckets? As far as I know, even though they may have produced some pieces about those intersections, both Carl and Holden are in the center, and I doubt Will denies that humanity could go extinct or lose its potential either.
Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk construed broadly is a bit murky and potentially includes these (depending on how much the reduction in quality threatens “humanity’s potential”).
Yep, totally fair point; my examples were about pieces. However, note that the quote you pulled out referred to ‘good work in the segments’ (though this is quite a squirmy, lawyerly point for me to make). Also, interestingly, 2019-era Will was a bit more skeptical of xrisk, or at least wrote a piece exploring that view.
I’m a bit wary of naming specific people whose views I know personally but who haven’t expressed them publicly, so I’ll just give some orgs that mostly work in those two segments, if you don’t mind:
‘Long-term + EA’: the APPG for Future Generations does a lot of work here, and I’d add Tyler John’s work (here & here), and plausibly Beckstead’s thesis.
‘Xrisk + EA’: my impression is that some of the more normie groups Open Phil has funded are here, working with the EA community on xrisk topics but not necessarily buying longtermism.