Effective Altruism is a Big Tent
A few weeks back in San Francisco, we at CEA were very glad to host the first of this month's three conferences on effective altruism. Most participants we’ve spoken to found the experience inspiring. The people there were devoted to doing good, with incredible personal stories and exciting projects.
Unfortunately, the event raised concerns for some participants. Dylan Matthews wrote in Vox that although he agreed with its aims, he came away from the conference worried. We thought his article was well worth reading and that effective altruists need to take his criticisms seriously. A few of us* have thought about his points and would like to respond to them, as well as to the suggestion that effective altruism has lost sight of its mission.
Picking a Cause
“In the beginning,” Matthews writes, “EA was mostly about fighting global poverty. Now it’s becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.”
Effective altruism has always been about challenging received wisdom and using evidence to find the best ways to make the world a better place. In part, this involves looking for the best ways to solve particular problems: Which charities working in this area are most effective? Would it be better for me to work for them directly, or to earn lots of money and donate it?
But that’s only half of the story. The other half is working out which problems we should be focusing on. Which problems cause (or will cause) the most harm, and which can we do the most about? With limited resources, does it make more sense to focus on researching new drugs to help people living with HIV/AIDS, on educating the millions of children who still grow up unable to read, or on preventing climate change?
The answers to these questions are enormously important: every additional life saved means a family who doesn’t lose their loved one, and a person who gets to lead the life they should be able to. But they are difficult questions, and it’s no surprise that different people have come to different conclusions. Many effective altruists continue to believe that global poverty is the most pressing problem in the world: Giving What We Can, GiveWell, The Life You Can Save and Charity Science are all broadly focused on the developing world.
Participants at EA Global in San Francisco would have heard, just before the panel discussion that Matthews focuses on, a featured presentation delivered by Jacqueline Fuller—who directs $100m a year on behalf of Google towards education, development, and renewable energy.
Other effective altruists are focused on policy change or animal welfare. Matthews mentions a talk by Nick Cooney, one of a number of speakers on the subject at the conference.
There are also some effective altruists who believe that the most pressing problems are existential risks—serious threats to human existence or prosperity. Of these, climate change is the most publicized and probably the best-researched, but it’s not the only one. Risks from pandemics are also widely discussed, and artificially intelligent systems pose a more novel risk.
Concern about super-intelligent machines sounds like science fiction, and it remains speculative, but huge advances have been made in AI technology in just the last few years. As Matthews explains, this creates the possibility that at some point in the coming century there may be extremely powerful artificial systems that pose a real risk to humanity. Governance structures generally change more slowly than technology, so some effective altruists feel it is worth exploring now how society should respond to this risk, instead of waiting for it to become urgent. Clearly, not everyone should work on this, but some think the right number of people to work on it is quite a bit above zero.
EA is, and has always been, about finding and acting on ways to do the most good—and following that through even if it takes you places you weren’t originally expecting.
Global Poverty is Not a Rounding Error
So the first response to Matthews is that yes, there are some effective altruists focused on preparing for artificial intelligence well in advance. They are in the minority, but it is worth taking their arguments seriously.
But Matthews also cautions about the language he heard used to set AI research apart from other causes, with “multiple attendees” suggesting that, by comparison, global poverty is a “rounding error.”
Dismissing the suffering of those living in poverty is not defensible. Helping people now matters, even when we also recognize the importance of preventing future generations from suffering. People are still dying of and suffering from easily preventable conditions. Global inequality remains enormous. Access to basic things which are important to human dignity, like education and representation, is still not universal. There are many opportunities to make the world much better, very cost-effectively, and we should take them.
Even if one cares mostly about the long term, helping reduce global poverty now is important, as GiveWell explains. By reducing the burden of hunger and disease, we enable people to become better educated and more productive, strengthening developing economies as well as voluntary and cooperative institutions. The long-term effect is to reduce conflict and permanently raise the standard of living worldwide—hardly a “rounding error.”
We Can Do More
Matthews makes some other good points. Effective altruism is diverse in many ways—it boasts members around the world, across the political spectrum, and from many social backgrounds. But there are ways it could be more inclusive, and we take that very seriously. Our organisation, the Centre for Effective Altruism, has a full-time staff member dedicated to improving inclusivity, and we are looking for other opportunities to move in the right direction.
He also suggests that we’re “self-congratulatory.” There’s some truth to this, and we should fix that. It’s important that effective altruists do take some time to reflect on what has gone well as a result of their hard work so far—both in order to learn from it and to remain motivated to work harder in future. That said, we clearly still have a long road ahead and are only a small part of a global community working to make the world a better place. We have a lot to learn.
Effective altruism remains a big tent, united by a common commitment to making the biggest difference possible through evidence and effort. There will always be room in the movement for people working towards a wide range of causes, whether they are focused on alleviating suffering now or are worried about the future of mankind.
----------
*Michelle Hutchinson, Owen Cotton-Barratt, Becky Cotton-Barratt, Bernadette Young, Nick Beckstead, I, and some others all contributed to this piece—but it should be taken to represent the views of only Michelle and me since the others have not seen the final draft.
I reckon you could safely use a stronger word than “some” there! EAs risk boxing ourselves into an unfortunate rhetorical corner with our talk of picking the single best cause or marginal action. That framing can obscure the fact that many people who don’t think any specific AI-focused charity is the best place for their marginal donation would still want some funding to go to AI safety if they were hypothetically deciding the distribution of all the funding in the world.
That’s right. It would seem extremely strange to have a multi-billion-dollar industry with no one thinking about what happens if it succeeds at its aim.
It’s very important for EAs to recognise that there probably isn’t a single best cause (and that even if there is, the uncertainties are too big for us to identify it). Even if there were an identifiable best cause, it would be likely to change over time, so it’s bad for EAs to identify too strongly with any one cause.
There’s a broader risk in focusing on marginal cost-effectiveness: it leads to local rather than global optimisation. It’s a good heuristic, but a bad one to rely on too heavily.
Could someone maybe do something about the formatting on this post? It seems like it’s valuable to keep around and link to, but currently I find it pretty hard to read (all bold, big spaces between paragraphs, some paragraphs that are plausibly blockquotes but I can’t really tell).
See also Helen’s post re inclusion (most popular so far on the forum): http://effective-altruism.com/ea/9s/effective_altruism_is_a_question_not_an_ideology/
My reply to Matthews: http://effective-altruism.com/ea/m4/a_response_to_matthews_on_ai_risk/
Scott’s several discussions re Matthews:
http://slatestarcodex.com/2015/08/15/my-id-on-defensiveness/
http://slatestarcodex.com/2015/08/13/figureground-illusions/
http://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
We need to be careful with how the EA movement is perceived. If it is seen that you MUST focus on existential risk to be an EA, it may (in my opinion, definitely will) push away people who care more about, say, combating poverty, animal suffering, or climate change, which are still worthy causes even if one argues they are not as good as working on existential risk.
Thanks, Seb, for this thoughtful and responsible post.