EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and the power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
If you click “Donate Effectively,” you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio among. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which, as I’ve said above, is a good idea but a very large leap from the anti-Playpump pitch. “Trust friendly, sensible-seeming agents and empower them to do what they think is sensible” is a very, very different method than “check everything because it’s easy to spend money on nice-sounding things of no value.”
The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the What We Can Achieve page mainly references global poverty (though I’ve been advised that this is an old page pending an update). The GWWC Facebook page seems to be mostly global poverty material, plus some promotion of other CEA brands.
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and the power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Thanks for digging up those examples. I think ‘many methods of doing good fail’ has wide application outside of global poverty, but I acknowledge the wider point you’re making.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
This is a problem I definitely worry about. There was a recent post by 80,000 Hours (which annoyingly I now can’t find) describing how their founders’ approaches to doing good have evolved and updated over the years. Is that something you’d like to see more of?
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
This is a true dynamic, but to be specific about one of the examples I had in mind: a little before your post was written, I was helping someone craft a general ‘intro to EA’ talk that they would give at a local event, and we both agreed, without even discussing it, to make the heterogeneous nature of the movement central to the mini speech. The discussion we had was more about ‘which causes and which methods of doing good should we list given limited time’, rather than ‘which cause/method would provide the most generically effective pitch’.
We didn’t want to do the latter for the reason I already gave: coming up with a great 5-minute poverty pitch is worthless-to-negative if the next person a newcomer talks to is entirely focused on AI, and with a diversity of cause areas represented among the ‘core’ EAs in the room, that was a very real risk.
There was a recent post by 80,000 Hours (which annoyingly I now can’t find) describing how their founders’ approaches to doing good have evolved and updated over the years. Is that something you’d like to see more of?
Yes! Clearer descriptions of how people have changed their minds would be great. I think it’s especially important to be able to identify which things we’d hoped would go well but didn’t pan out, and then go back and make sure we’re not still implicitly pitching that hope.
I found the post; I was struggling before because it’s actually part of their career guide rather than a blog post.
Thanks! On a first read, this seems pretty clear and much more like the sort of thing I’d hope to see in introductory material.