This is a fair criticism! My short answer is that, as I perceive it, most people writing new EA pitches, designing fellowship curricula, giving EA career advice, etc., are longtermists and give pitches optimised for producing more people working on important longtermist stuff. This post was a reaction to what I perceive as a failure in such pitches: focusing on moral philosophy. I'm not really trying to engage with the broader question of whether this is a problem in the EA movement. Now that OpenPhil is planning to fund neartermist EA movement building, maybe this'll change?
Personally, I'm not really a longtermist, but I think it's way more important to get people working on AI/bio stuff, even from a neartermist lens, so I'm pretty OK with optimising my outreach for producing more AI and bio people. Though I'd be fine with low-cost ways to also mention 'and by the way, global health and animal welfare are also things some EAs care about, here's how to find the relevant people and communities'.
To the extent you're trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I'm unsure which side of the fence I'm on).
But I don’t want people casually equivocating between x-risk reduction and EA, relegating the rest of the community to a footnote.
I think it's a misleading depiction of the in-practice composition of the community.
I think it's unfair to the people who aren't convinced by x-risk arguments.
And I think it could actually just make us worse at finding the right answers to cause prioritization questions.