The claim isn’t that framing all these cause areas as effective altruism makes no sense, but that it’s confusing and sub-optimal. According to Matt Yglesias, there are already “relevant people” who agree strongly enough with this that they’re trying to drop down to just the acronym EA, but I think that’s a poor solution, and I hadn’t seen those concerns explained in full anywhere.
As multiple recent posts have noted, EAs today try to sell the obviously important idea of preventing existential risk using counterintuitive claims about caring about the far future, which most people won’t buy. This is an example of how viewing these cause areas solely through the lens of altruism can damage those causes.
It also damages the global poverty and animal welfare cause areas, because many people who might be interested in the EA ideas for doing good better there get turned off by EA’s intense focus on longtermism.
Hi Sindy, thanks for the kind words! Really cool to hear you’ve been looking into doing that, and I’d be interested in hearing more. And of course you’re more than welcome to reach out if you have any questions.
I can’t speak for everyone involved, but off the top of my head, my rough strategy is something like:
Get more people to hear about EA. Last year, we only managed to get invites out to ~10% of the company, so there’s lots more to do here;
As interest and awareness grow among employees, work with the company to incorporate EA principles/charities into the official Give campaign.
Our main metrics today are simply site visits, attendance at our talks, and the feedback we receive. Donations to/through GiveWell from Microsoft are something we could potentially track if they are willing to share that information, but that’s not a conversation we’ve had yet.
Bill Gates has stayed away from mixing his Foundation work with Microsoft, as far as I can tell. Our team’s never talked about reaching out to him for support, but maybe we should …