I don’t believe the EA → existential risk pipeline is the best way to bring people in to work on existential risks. I actually think it’s a very suboptimal one, and that absent how EA history played out, no one would ever have answered the question “What’s the best way to get people to work on existential risks?” with anything resembling “Let’s start them with the ideas of Peter Singer and then convince them that they should include future people in their circle of concern and do the math.” Obviously this argument has worked well for longtermist EAs, but it’s hard for me to believe it’s more effective than appealing to people’s basic intuitions about why the world ending would be bad.
That said, I also think closing this pipeline entirely would be quite bad. Sam Bankman-Fried, after all, seems to have come through that pipeline. But the EA ↔ rationality pipeline is quite strong despite the two being different movements, and I think the same would be true of a pipeline between EA and a separate existential risk prevention movement.
I don’t know if it’s the best pipeline, but a lot of people who were initially skeptical of existential risks have come through it. So empirically, it seems to be more effective than people might think. One advantage is that people only need to resonate with one of the main cause areas to get involved initially, and they can shift cause areas over time. I think it’s really important to have a pipeline like this.