I don’t believe the EA → existential risk pipeline is the best pipeline for bringing in people to work on existential risks. I actually think it’s a very suboptimal one, and that absent how EA history played out, no one would ever have answered the question “What’s the best way to get people to work on existential risks?” with anything resembling “Let’s start them with the ideas of Peter Singer, then convince them to include future people in their circle of concern and do the math.” Obviously this argument has worked well for longtermist EAs, but it’s hard for me to believe it’s more effective than appealing to people’s basic intuitions about why the world ending would be bad.
That said, I also think closing this pipeline entirely would be quite bad. Sam Bankman-Fried, after all, seems to have come through it. But the EA <-> rationality pipeline is quite strong despite the two being different movements, and I think the same would hold between EA and a separate existential risk prevention movement.
The claim isn’t that framing all these cause areas as effective altruism makes no sense, but that it’s confusing and suboptimal. According to Matt Yglesias, there are already “relevant people” who agree strongly enough with this that they’re trying to drop to just using the acronym EA—but I think that’s a poor solution, and I hadn’t seen these concerns explained in full anywhere.
As multiple recent posts have said, EAs today try to sell the obviously important idea of preventing existential risk using counterintuitive ideas about caring about the far future, which most people won’t buy. This is an example of how viewing these cause areas solely through the lens of altruism can damage them.
It also damages the global poverty and animal welfare cause areas, because many people who might be drawn to EA’s ideas about doing good better in those areas get turned off by EA’s intense focus on longtermism.