The claim isn’t that framing all these cause areas as effective altruism makes no sense, but that it’s confusing and sub-optimal. According to Matt Yglesias, there are already “relevant people” who agree with this strongly enough that they’re trying to drop the full name and just use the acronym EA, but I think that’s a poor solution, and I hadn’t seen those concerns explained in full anywhere.
As multiple recent posts have argued, EAs today try to sell the obviously important idea of preventing existential risk using counterintuitive claims about caring about the far future, which most people won’t buy. This is one example of how viewing these cause areas solely through the lens of altruism can damage those causes.
And it damages the global poverty and animal welfare cause areas too, because many people who might be drawn to EA’s ideas about doing good better in those areas get turned off by EA’s intense focus on longtermism.