The longtermist approach to philanthropy is different from mainstream, traditional philanthropy. When trying to describe a concept like Effective Altruism, the thing that most differentiates it often stands out, and consequently becomes its defining feature.
I am trying not to be snarky and dismissive, but among a large number of things I think this post gets wrong, this sticks out as a ridiculous and obviously wrong claim.
First, non-effective altruists have been giving to global threat reduction for most of a century, starting with nuclear nonproliferation and continuing through, most obviously, the Gates Foundation's pandemic prevention funding.
Second, EA's other big differences, including the impartial welfarist viewpoint, the embrace of animal welfare, the focus on measurement and impact maximization, and cause neutrality, all strike me as hugely controversial claims, in many ways much more controversial than valuing future human lives. Effective altruism was being attacked as bad and confused from the start, well before longtermism was a focus.
Lastly, you don't need to be a longtermist to worry about AI risk. There is debate about how far off AGI is likely to be, but I plan to live at least another four decades, which at this point is a pretty conservative estimate. So even if I didn't have kids I'd like to see grow up, longtermism really isn't needed to consider AI risks a priority.
I agree with your specific claims, but FWIW I thought the post, despite some gaps, was good overall, and unusually well written in terms of being engaging and accessible.
The reason I still like this post overall is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism', both within and outside the EA community, as reflected in prominent public criticisms of EA that mostly center on opposition to longtermism, and (ii) some mostly correct facts that explain and/or debunk the 'EA is just longtermism' claim (even though it omits some important facts and arguably undersells the influence of longtermism in EA overall).
E.g., on the claim you quote, a more charitable interpretation would be that longtermism is one of potentially several things that differentiate EA's approach to philanthropy from traditional ones, and that this contributes to longtermism being the feature outside observers tend to focus on most.
Now, while true in principle, my guess is that even this effect is fairly small compared to some other reasons behind the attention that longtermism gets. But I think the claim is quite far from ridiculous or obviously wrong.
I also agree that one doesn't need to be a longtermist to worry about AI risk, and that an ideal version of the OP would have pointed that out somewhere, but again I don't think this is damning for the post overall. And given that 'longtermism' as a philosophical view and 'longtermism' as a focus on specific cause areas such as AI, bio, and other global catastrophic risks are often conflated even within the EA community, I certainly think that conflation might play into current outside perceptions of 'EA as longtermism'.