Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas. (And literally saving the world is obviously a legitimate area of interest for altruists!)
Cause-specific movements are great, but they aren’t a replacement for EA as a cause-neutral movement to effectively do good.
The claim isn’t that the current framing of all these cause areas as effective altruism doesn’t make any sense, but that it’s confusing and sub-optimal. According to Matt Yglesias, there are already “relevant people” who agree strongly enough with this that they’re trying to drop to just using the acronym EA—but I think that’s a poor solution and I hadn’t seen those concerns explained in full anywhere.
As multiple recent posts have said, EAs today try to sell the obviously important idea of preventing existential risk using counterintuitive ideas about caring about the far future, which most people won’t buy. This is an example of how viewing these cause areas through just the lens of altruism can be damaging to those causes.
And then it damages the global poverty and animal welfare cause areas because many who might be interested in the EA ideas to do good better there get turned off by EA’s intense focus on longtermism.
A phrase that I really like to describe longtermism is “altruistic rationality”, which covers activities that are a subset of “effective altruism”.