This is a great post and I thank you for taking the time to write it up.
I ran an EA club at my university and held a workshop where we covered all the philosophical objections to Effective Altruism. All objections were fairly straightforward to address except for one which—in addressing it—seemed to upend how many participants viewed EA, given what image they had of EA thus far. That objection is: Effective Altruism is not that effective.
There is a lot to be said for this objection, and I highly recommend anyone who calls themselves an EA to read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the worm wars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.
Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you desire insight into how an EA might try to go about it effectively.
The take-home of this, though, is the same as the three main points raised by OP. If it had been made clear to us from the get-go what mechanisms are at play in determining how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given these were all dedicated EAs going into the workshop.
So yes, I bow down to the almighty points bestowed by OP:
many of us were overstating the point that money goes further in poor countries
many of us don’t do enough fact checking, especially before making public claims
many of us should communicate uncertainty better
Btw, the scope insensitivity link does not seem to work, I'm afraid. (Update: Thanks for fixing!)
Overstatement seems to be selected for when 1) evaluators like GiveWell are deferred to rather than questioned, and 2) you want to market that faithful deference to others.