EA covered on “Stuff You Should Know” Podcast
Stuff You Should Know—hosted by Josh Clark and Chuck Bryant of HowStuffWorks.com—regularly ranks among the ten most-downloaded podcasts, with over one million weekly downloads. Today they posted an episode called “How Effective Altruism Works” with this description:
A branch of philanthropy led by philosophers is dedicated to finding the most impactful ways to help humans survive and thrive. Anyone can find that agreeable, but it can be tough to hear it also means your donations to local charities are kind of a waste.
The episode is 54 minutes long, with the introduction, breaks, and listener mail taking up 12 minutes; at 1.5x speed it takes about 30 minutes. Below is a ~7-minute read summarizing their conversation, with timestamps.
Note: I lightly edited direct quotes for clarity.
___________________________________________________
3:45 Introduction. They reference their recent Short Stuff: Charity Tips episode (which heavily featured GiveWell), but note how some of their advice from that episode conflicts with what Chuck says EA recommends: “The only way you should give is just by coldly calculating what would help a human the most on planet Earth.” Josh notes, “It is a very polarizing idea if you just take it at its bare bones…if everybody would just move past the most extreme parts of it and just take EA at its most middle ground…it would be really difficult to disagree with the ideas behind it.” Josh says it’s the associations with Peter Singer and “Silicon Valley billionaires” that get people all riled up.
6:13 A few things that “EA is,” according to Chuck:
1) A lot of good can be done with money.
2) If you can provide for yourself, you should probably be giving to charity.
3) You can literally save human lives.
Josh defines quality-adjusted life years (QALYs) and highlights that “basically everybody living in the US can afford to give 10% of their income and forgo some clothes or some cars to help other people literally survive. So right off the bat we’ve reached levels of discomfort for the average person that are really tough to deal with. That’s the first challenge EAs face: tamping down the overwhelming sense of guilt and responsibility and shame at not doing that.”
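For readers unfamiliar with the metric, here is a minimal sketch of the QALY arithmetic; all numbers are hypothetical illustrations, not from the episode:

```python
# Minimal QALY sketch. A QALY weights each year of life by its
# health-related quality: 1.0 is a year in full health, 0.0 is death.
# All numbers here are hypothetical, not from the episode.

years_gained = 30        # extra years of life an intervention provides
quality_weight = 0.9     # average health quality over those years
cost_usd = 4_500         # hypothetical cost of the intervention

qalys = years_gained * quality_weight              # 27.0 QALYs
print(f"QALYs gained: {qalys}")
print(f"Cost per QALY: ${cost_usd / qalys:,.2f}")  # $166.67
```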
8:38 History and main organizations. EA “took hold in 2010.” They reference the Centre for Effective Altruism and say Toby Ord and Will MacAskill founded Giving What We Can to start the movement. Then they mention Benjamin Todd and 80,000 Hours, The Life You Can Save, and Animal Charity Evaluators. Josh speaks very highly of Toby Ord, whose work influenced Josh’s 2018 podcast series on existential risks, The End of the World with Josh Clark.
11:20 Will MacAskill’s core commitments of EA:
1) Maximizing the good (“Which we can all pretty much get on board with”)
2) Aligning your ideas and contributions with science and evidence rather than your heart (“A tough one for people to swallow.”)
3) Welfarism.
4) Impartiality (“That’s a tough one…harder for people to swallow than science alignment.”)
They note that of the $470B Americans donate annually, only $25.9B (about 5.5%) goes outside the United States. Chuck says the idea of EA is to “shatter your way of thinking about trying to help the people in your city or state or country and to look at every human life as having equal value. And not even human life, but every life!”
Josh adds, “If #4 holds, then, from a strict EA perspective you are wasting your money if you are an American donating it in America. A dollar can do exponentially more in poverty-stricken parts of the world than it can in the United States.” He laments, “It’s just a huge jagged pill that they are asking people to swallow, but if you can step back from it, what they are ultimately saying is ‘Look man, you want to do the most good with your charitable donations? Here’s how you do it.’”
Chuck: “Do you want to feel good about it, or do you really want to do the good?”
15:57 - 17:55 Break #1
17:56 Other big players in EA. They provide a brief description of GiveWell and Open Philanthropy: “Big donors and big believers in the cause.” They mention Jeremy Bentham and John Stuart Mill and introduce Peter Singer as “sort of controversial.” About Singer’s shallow pond essay, Josh says (sincerely): “It’s really good! If you want to feel like a total piece of garbage for not doing enough in the world, read that…But if you hear him out, [his philosophical argument] is pretty sound.” They then provide some examples of repugnant conclusions of strict utilitarianism.
22:23 The problem with focusing on strict utilitarianism as the foundation of EA. Josh says, “If that’s what you’re focusing on and you’re equating EA’s desire to get the most bang for your donation buck to murdering somebody to harvest their organs to save five people, you’ve just completely lost your way. The fact that these types of arguments are trained on this charitable movement is totally unfair.” Chuck agrees that Singer’s most controversial ideas have nothing to do with EA and Josh adds that “[Singer] makes an easily obtainable straw man that people like to pick on.”
24:24 Numbers and giving pledges. Chuck: In 2020, Americans gave about $1,000 per person in charitable contributions, which is “not that much money if you think about it.” They highlight several pledges that EA endorses: Giving What We Can, Founders Pledge, and Try Giving, and note that only 8,000-10,000 people have taken these pledges, but “most of the people involved in the movement are high-earning, extremely educated people.”
25:47 80,000 Hours and Careers:
They describe two paths for careers as an EA: 1) Have a job where you can make as much money as you possibly can and then donate as much as you comfortably can, or 2) Figure out something you really love, but then adjust it to have the most impact possible. Chuck notes that 80,000 Hours qualifies #1, which they define as earning to give: “Don’t take a job that causes a lot of harm, being happy is part of being productive, and you don’t have to go grind it out at a job you hate just so you can make a lot of money and give it away…Do a job where you have talent: policymaking, media…for example, we can leverage our voice and mobilize people!” They note that because of the audience they have built, they have a great opportunity to do good with the topics they choose. They discuss the example of a woman who wanted to become a doctor in Australia but instead decided to go into epidemiology to help get vaccines out faster and improve the world more effectively.
30:03 Comparison to other charity evaluators. Charity Navigator and Charity Watch focus a lot on overhead vs. program spending, but EA “wants to see data and scientific measurables on how much return you are getting on that dollar.” Josh says, “It is very expensive to run what EAs consider the gold standard: randomized controlled trials…If you can get that kind of evidence, then you can get these EAs’ dollars, and there are a lot of dollars coming from that group even though it is relatively small.” Chuck: “Maybe [EAs] think [RCT data] speaks to people more?” Josh: “Well, it speaks to them!”
33:38 Problems with and criticisms of relying on data:
1) There is a lot you can’t quantify in those terms. They provide two examples: a museum saves no lives, but that doesn’t mean it isn’t enriching or improving lives, and there is no RCT on the 1963 March on Washington that helped solidify the Civil Rights Movement.
2) RCTs are not always reproducible in different places.
Chuck: “You get why this is such a divisive thing and so hard for people, because people give with their hearts generally. They give to causes they find personal to them…the heart of philanthropy has always been the heart. It’s a hard sell for EA to say ‘I’m sorry, you have to cut that out of there.’”
35:33 Animals. Chuck says, “people generally give to dogs and cats and stuff like that. Great organizations that do great work, but the concentration from the EA perspective is factory-farmed animals…That’s what we should be concentrating on because of the massive scale. To try and do the most good, you would look at where there are the most animals, and sadly they are on farms.”
36:46 On EA being a tough sell:
Josh: “This is where it makes sense to just maintain a certain amount of common sense. If you really want to maximize your money, go look at the EA sites, go check out 80,000 Hours, get into this and actually do that. But there is no one who is saying ‘If you give one dollar to that local symphony that you love you’re a sucker, you’re a chump, you’re an idiot.’ Nobody is saying that. So maybe it doesn’t have to be all or nothing.”
Quoting MacAskill’s similar defense: “EA does not require that I always sacrifice my own interests for the interests of others.”
Josh says EA’s really believe, “Yeah we are philosophers, but we can also think like normal human beings too. We’re trying to take this philosophical view, based in science and based in evidence, and try to direct money to get the biggest impact.”
38:42 - 40:59 Break #2
40:59 Longtermism:
Chuck provides a definition: “Hey let’s not just think about helping people now. If we really want to maximize impact to help more people, we need to think about the future, because there will be a lot more people in the future to save.” Josh says that philosophers argue that future generations could have vastly better lives due to technological advances so “we should be sacrificing our own stuff now for the benefit of these generations and generations and generations of humans to come that vastly outnumber the total number of humans who have ever lived.”
Chuck clarifies that “they aren’t just talking about climate change and that kind of risk. They dabble in AI and stuff like that.”
Josh summarizes: “A lot of these guys are dedicating their careers to existential risk because they have decided that that is the greatest threat to the future, one that would cut off any possibility of those quadrillion lives.”
43:50 Money in EA:
They say that $46B is committed to EA going forward: “A lot of rich people and tech people are backing this thing” and they are “trying to raise awareness to get more and more regular people on board.” They note that $3,500-$4,500 per year directed to the right places can save a child’s life each year.
Returning to the analogy of the burning building or shallow pond: “You can do that, you can save a kid a year or more every year for the rest of your life…Again, nobody in the EA movement is saying that you should personally sacrifice to cough up that $4,500…nobody is being flippant about the idea that $4,500 is not that much.”
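As a rough sketch of that arithmetic, here is the episode’s $3,500-$4,500 cost-per-life figure combined with the 10% giving level Josh mentions at 6:13; the income figure is a hypothetical assumption, not from the episode:

```python
# Rough sketch of the episode's "one life per year" claim.
# cost_per_life comes from the episode; annual_income is a
# hypothetical assumption for illustration only.

annual_income = 60_000   # hypothetical income (assumption)
giving_rate = 0.10       # the 10% level mentioned at 6:13
cost_per_life = 4_500    # upper end of the episode's $3,500-$4,500 range

annual_donation = annual_income * giving_rate      # $6,000
lives_per_year = annual_donation / cost_per_life   # ~1.3

print(f"Annual donation: ${annual_donation:,.0f}")
print(f"Lives saved per year: ~{lives_per_year:.1f}")
```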
Chuck, on talking to his wife about EA when they discuss their giving: “She would be like: ‘Get out of my face with that.’” Josh responds: “But you could be like, ‘Well, how about we do both?’”
Summarizing EA, Chuck says, “It’s a numbers and data game that makes it tough for a lot of people to swallow” and Josh says, “It’s anti-sentimentalism in the service of saving the most lives possible.”
48:50 Brief call to action. Take some time and consider whether you could give to some of these charities!
49:34 - end Listener mail, unrelated to this episode.
Of everything I’ve heard to date, this episode does the best job of explaining EA to someone outside the movement. I’m going to share it with people in the future.
Great summary, thanks for taking the time to write it up.
It would have been nice if they had spent more time on examples of longtermist causes, or on devoting a career to high-impact projects (not simply doing what you love, made slightly better), but given the time constraints it was a strong introduction, delivered in a way that was welcoming and encouraged listeners to take the next step to learn and do more.
Thanks for this writeup!
Josh Clark also did a podcast series on x-risk called The End of the World. It’s very good! Almost everyone he quotes is from FHI, and it’s very aligned with EA thinking on x-risk.