I agree with your specific claims, but FWIW I thought that, despite some gaps, the post was good overall, and unusually well written in terms of being engaging and accessible.
The reason I overall still like this post is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism', both within and outside the EA community, as reflected in prominent public criticisms of EA that mostly talk about their authors' opposition to longtermism, and (ii) some mostly correct facts that explain and/or debunk the 'EA is just longtermism' claim (even though it omits some important facts and arguably undersells the influence of longtermism within EA overall).
E.g., on the claim you quote, a more charitable interpretation would be that longtermism is one of potentially several things that differentiate EA's approach to philanthropy from traditional ones, and that this contributes to longtermism being a feature that outside observers tend to focus on in particular.
Now, while this is true in principle, my guess is that even this effect is fairly small compared to some other reasons behind the attention longtermism gets. But I think it's quite far from ridiculous or obviously wrong.
I also agree that one doesn't need to be a longtermist to worry about AI risk, and that an ideal version of the OP would have pointed that out somewhere, but again I don't think this is damning for the post overall. And given that 'longtermism' as a philosophical view and 'longtermism' as a focus on specific cause areas such as AI, bio, and other global catastrophic risks are often conflated even within the EA community, I certainly think that conflation might play into current outside perceptions of 'EA as longtermism'.