Phil Torres is held in particularly low esteem by people on the EA Forum, for what I think are good reasons: his arguments are often flimsy, and on top of that he has made various unfounded accusations. But criticism of EA in general is one of the more popular tags on the forum, and it has some really good material from people both inside and outside the EA community making careful arguments. Here are some I recommend:
https://forum.effectivealtruism.org/posts/uxFvTnzSgw8uakNBp/effective-altruism-is-an-ideology-not-just-a-question
https://forum.effectivealtruism.org/posts/Jxfq6xCP9ZoTBFewA/why-i-am-probably-not-a-longtermist
https://forum.effectivealtruism.org/posts/LJwGdex4nn76iA8xy/some-blindspots-in-rationality-and-effective-altruism
https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism
That said, these critiques still come from a perspective of doing the most good, so if you don’t, e.g., believe that saving 1000 lives is better than saving 1 life, you’ll probably bounce off the community as a whole.
Also, longtermism is not really a founding principle; it’s more a view that some EAs hold which heavily influences their altruistic decisions. If there are axioms of EA, they’re something like this:
1) Consequences matter (which any moral philosophy worth its salt agrees with, though they vary on how much else matters).
2) Pay attention to scope, i.e. 100x lives saved is way, way better than saving one life.
The mantra of saving 1000 lives versus 1 is such a red flag that this community behaves like a cult. Of course a preposterous claim like that (“I guess you value human life less than us”) is not going to win any outsiders over. How about this: there are thousands of people dying needless, painful deaths every day, and the EA community is focused on optimizing optics in AI. Not to mention the hypothetical future human proposition. You should all be ashamed of yourselves.