If you want to disagree with effective altruism, you need to disagree with one of these three claims
Effective altruism is often motivated by appealing to Singer’s pond argument.
This is good because it’s a strong, concrete and well-studied argument. However, two downsides are that (i) it associates effective altruism with international development, and (ii) it makes it seem like you can refute the importance of effective altruism by refuting the pond argument.
In fact, the importance of effective altruism is much more robust than the pond argument. It instead relies on what I call the “general pond argument”. If, when promoting effective altruism, we focus more on the general pond argument than on the original pond argument, we can make the case for effective altruism more strongly, and in a way that’s not so tightly associated with international development. This might work better with some audiences.
In the rest of the post, I sketch the general pond argument, explain how to use it to clarify objections to effective altruism, then suggest what this might mean for promoting the ideas.
Introducing the general pond argument
The original pond argument goes as follows:
1. If you can help others a great deal without sacrificing something of similar significance, you ought to do it. For instance, you ought to save a child drowning in a pond, even if it would ruin some expensive clothes.
2. We can help the global poor with little cost to ourselves by giving to effective charities.
And (1) and (2) imply that:
3. We ought to give our wealth to effective charities until it becomes a significant sacrifice.
One problem with bringing up this argument is that it often leads to a debate about whether (2) is true, i.e. whether international aid really helps the global poor.
However, you can deny that international aid works, but still think that effective altruism is important.
Let’s call an action that benefits others a great deal with little cost to yourself “pond-like”. Effective altruism is important so long as there are some pond-like actions to be found.
Here are some examples of pond-like actions that are widely discussed in the community:
1. Donating to GiveWell-recommended and ACE-recommended charities.
2. Persuading others to make these donations.
3. Giving up factory farmed meat, and persuading others to do the same.
4. Voting in close elections, where you think one candidate would be much better than another.
5. If you have a good fit for the area, pursuing any of a wide variety of high-impact careers, such as research, earning to give, advocacy, or working at the Open Philanthropy Project.
6. Promoting effective altruism.
And many others within each problem area.
(Some of these involve more sacrifice than others, but that’s OK so long as the actions that require greater sacrifice also do more good.)
If any of the actions above are truly pond-like, then effective altruism is an important idea.
We can lay out the case more formally with the “general pond argument”:
A. If you know there are pond-like actions, and they don’t violate the ordinary rules of morality (e.g. by violating rights), you ought to do them. (This is the moral claim.)[1]
B. There are some pond-like actions that are not already widely taken. (The empirical claim.)
C. We can come to know which actions are pond-like, in particular by using evidence and reason (edit: “more than people normally do”). (The epistemic claim.)
A, B and C imply that there are some pond-like actions that we ought to be taking that aren’t currently taken.
We could see the mission of effective altruism as being to identify these actions and help them become more widely adopted, thereby doing a lot of good.
Why are there so many pond-like actions? (and so, why is effective altruism important?)
Any of the following five observations implies there will be lots of pond-like actions. We tend to focus only on the first of these, but I think all five are significant.
1. Global inequality. College graduates in developed countries are about 100 times richer than the global poor. That means these people can do about 100 times as much good if they take actions to help the global poor rather than themselves. If the benefits are 100 times larger than the sacrifice involved, then the action is pond-like. Most simply, you could do this by transferring your income to the global poor. However, you can probably find *even more* effective ways to help the global poor, such as health interventions or supporting greater international migration, which would raise the ratio of benefits to costs to well over 100.
2. Moral concern for animals. If you believe the wellbeing of animals is morally important, then there will probably be pond-like actions. This is because animals have no economic or political power, so they are unable to protect their own interests. This means we should expect there to be ways to benefit lots of animals at small cost. One simple example is going vegetarian. The average American eats about 100 animals per year, almost all of which live in factory farms, so by going vegetarian you spare about 100 animals a year from lives of terrible suffering, at a small cost to yourself. And again, you can probably find even more effective actions than this.
3. The ability to affect the future. There will be many more people living in the future than are alive today. If you believe we should have moral concern for future generations and some of our actions can affect them, then it raises the possibility of pond-like actions. One simple way we can affect all future generations is to cause human extinction, so if there’s anything we can do today to make human extinction less likely, then there’s a good chance those actions will be pond-like.
4. The possibility of leverage. If you focus on finding the best ways to help others, you can often find ways of doing good that are higher leverage than just doing good things yourself. By “higher leverage”, I mean something like “make use of more resources than just your own”. For instance, if you think some action, A, is good, you can probably find a way to get 10 people to do A. This action is 10 times higher impact than A itself. So, even if A isn’t pond-like, there’s a good chance that 10xA is pond-like. If you think there are many opportunities for leverage, then there will be many pond-like actions. I think many opportunities for leverage exist because few people aim to have a large social impact.
5. Poor existing methods. Many current attempts to do good aren’t very strategic or evidence-based. Given this, if you apply more evidence-driven methods, you might be able to find ways of doing good that are 10 or 100 times better than what people normally focus on. Maybe normal ways of doing good aren’t pond-like, but something 10 times more effective is pond-like. So taking an evidence-driven, strategic approach could mean you find lots of pond-like actions.
I think all five of these are probably true. In brief, we live in an unintuitive world, where there are massive inequalities, and our actions have diffuse, but significant effects on others. This means our moral intuitions regularly misfire. Although it doesn’t feel like it, we’re surrounded by children drowning in ponds. And this is why effective altruism turns out to be a novel and important idea.
How not to refute the importance of effective altruism
To disagree with effective altruism, you need to disagree with one of the three claims in the general pond argument.
Most critiques of effective altruism fail to hit the mark. Some common misfires include:
1. Equating effective altruism with utilitarianism, then raising classic objections to utilitarianism. However, effective altruism actually rests on the much weaker moral claim (A), that you ought to take actions that greatly benefit others at little cost to yourself (or even (A’), just that these actions are very good, though not obligatory). In contrast, utilitarianism would say you ought to take an action even if it’s a major sacrifice, so long as it does slightly more good for others overall. Utilitarianism also denies that anything matters except welfare, and holds that it’s permissible to violate rights in pursuit of the greater good. Effective altruism claims neither of these things. For more on these objections, see Prof. Jeff McMahan’s Philosophical Critiques of Effective Altruism (download link).
2. Arguing that a specific action is not pond-like, or that effective altruists focus on the wrong pond-like actions, for instance by criticising the effectiveness of international aid. This criticism is just a helpful contribution to effective altruism’s mission to identify the best pond-like actions. To show that effective altruism is a bad idea in general, you’d need to show that *there are no* pond-like actions that aren’t already widely taken. We wrote more about these kinds of objections here.
3. Saying that effective altruists think you should only support charities that have randomised controlled trial evidence behind them. In fact, we just rely on the much weaker claim (C), that there are some ways it’s possible to identify pond-like actions, of which randomised controlled trials are just one tool. Indeed, we often think it’s more effective to focus on actions with a small probability of a large payoff, rather than robust evidence, as we wrote about here.
What types of criticism might hit the mark?
One option is to reject the moral claim: deny that we ought to help others even if it would be little sacrifice to ourselves. This is an unpleasant route to go down, since it would probably mean accepting that there’s nothing wrong with letting a child in a pond drown in front of you.
Unless, that is, you can show there’s an important moral difference between the child drowning in the pond and all the other pond-like actions that the effective altruism community supports. However, this is much harder than just showing there’s a moral difference between a specific pond-like action (such as donating to effective international health charities) and saving the child in the pond.
Moreover, even if you succeed in showing that the new pond-like actions aren’t morally obligatory, they’d still be good things to do (supererogatory). You would have shown that being an effective altruist isn’t required by morality, but that it’s still commendable.
The second option is to deny that there are any new pond-like actions that we can come to know. Again, this is relatively easy to do in the case of a single pond-like action, but it’s much harder to show that *no* new knowable pond-like actions exist at all.
You’d have to show that:
1. *None* of the actions listed above are pond-like.
2. No further pond-like actions will be discovered.
So far no critic of effective altruism has shown anything like this.
The third, and most promising option in my mind, is to accept that effective altruism as an idea is correct—accepting the general pond argument—but deny that effective altruism as a movement will succeed in doing a lot of good. Perhaps it’s just too hard to persuade people to do the right thing, or the current leaders of the movement will fail, or we’re bad at working out which actions are pond-like. Or perhaps there’s some much more important way of doing good that we should do instead.
Conclusion, and some potential lessons for promoting effective altruism
I don’t propose we should literally lead with the general pond argument, since it’s far too abstract. However, it seems useful to have in the back of your mind while promoting the ideas.
In particular, when motivating effective altruism, I suspect it would be useful to discuss a wider range of pond-like actions than just donating to effective international health charities.
If we can communicate the idea that if *any* pond-like actions exist, then effective altruism is an important idea, we’ll be making the case in a much more robust way than if we only focus on a couple of specific actions.
Moreover, we’ll be making sure that critics focus on the core ideas, helping us learn and do more good.
Notes:
[1] If you prefer to avoid making effective altruism about moral obligation, then you can replace (A) with something like (A’): “if you know there are pond-like actions, it’s a very good thing if you take them” (i.e. making pond-like actions supererogatory rather than obligatory).
Some (fairly minor) points, given I generally agree:
1) A critic taking the second option doesn’t need to say “We will never discover any pond-like acts”, but something like “The likelihood of such a discovery is sufficiently low (either because such acts are in fact rare, or because, whether or not they exist, we cannot expect good access to them)”. The bare possibility we might discover a pond-like act in the future doesn’t make EA worth one’s attention.
2) I am hesitant to make the move that all criticism of EA ‘as practiced’ is inapposite. For caricature, if most EAs decided to form a spree-killing ring, a reply along the lines that this mere concretum bears no relevance to the platonic ideal of EA (alas poorly instantiated in this case) doesn’t seem to cut it. If EA is generally going wrong so badly that it’s worse than the counterfactual, this seems entirely fair to criticise (I agree this looks unlikely by the lights of any sensible moral view).
It also seems fair to criticise EA if a substantial minority are doing something you deem stupid (e.g. “Look at those muppets who think giving to MIRI is 10^ridiculous times more important than stopping kids starving”). If I think some significant subset of people who believe X are doing (because of said belief) something silly or objectionable, it seems fair to have it as a black mark against X, even if it doesn’t mean I think it makes them bad all things considered: “Yeah, EA is good when it gets people giving more to charity—it’s a shame it seems to lead people up the garden path to believing ridiculous stuff like killer robots and what-not”. (N.B. I picked AI risk as it hits the ‘unsweet spot’ of being fairly popular in EA yet pretty outlandish outside it—these are not criticisms I endorse myself).
1) I agree—I was speaking loosely.
2) I may have misunderstood, but I think these would fall under the third way of criticising EA I mentioned:
But such a critique also falls under the second kind of critique that you said would be a “misfire”. Perhaps you meant that it’s a misfire only if the critic is trying to argue against ideal EA, but in my experience most critics are not trying to do that, they’re arguing against the EA movement.
I’d like to steelman a slightly more nuanced criticism of Effective Altruism. It’s one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.
Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call ‘working within the system’, a lot of people assume that this is what EA explicitly promotes. If I thought there was a movement which said something like, ‘you can solve all the world’s problems by donating enough’, I might have reservations too. These people also worry that EA does not give enough weight to the value of building community and social ties.
Of course, articles like this (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/) have been written, but it seems this is still being overlooked. I’m not arguing we should necessarily spend more time trying to convince people that EAs love systemic change, but it’s important to recognise that many people have, what sounds to them, like totally rational criticisms.
Take this criticism (https://probonoaustralia.com.au/news/2015/07/why-peter-singer-is-wrong-about-effective-altruism/ - which I responded to here: https://probonoaustralia.com.au/news/2016/09/effective-altruism-changing-think-charity/). Even after I addressed the author’s concerns about EA focusing entirely on donating, he still contacted me with concerns that EA is going to miss the unintended consequences of reducing community ties. I disagree with the claim, but it makes sense given his understanding of EA.
I read through your article, but let me see if I can strengthen the claim that charities promoted by effective altruism do not actually make systemic change. Remember, effective altruists should care about the outcomes of their work, not the intentions. It does not matter if effective altruists love systemic change; if that change fails to occur, their actions are not in the spirit of effective altruism. Simply put, charities such as the Against Malaria Foundation harm economic growth, limit freedom, and instill dependency, all while attempting to stop a disease which kills about as many people every year as the flu. Here’s the full video
The link to your argument regarding international aid is broken, so I’ll post this here. While I am all for effective altruism in principle, the claim that the particular aid organizations that GiveWell and others promote do the most good is patently false. I live and work in West Africa and I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities. Effective Altruism as a movement has failed to actually be effective because it promotes charities that do more harm than good. Here’s a video as to why: Stop Giving Well
Make a series of videos about that instead then if it’s so prevalent. It would serve to undermine GiveWell far more and strengthen your credibility.
Your video against GiveWell does not address or debunk any of GiveWell’s evidence. It’s a philosophical treatise on GiveWell’s methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I’ve been robbed 3 times living in Vancouver and yet zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous. I in fact have near-zero evidence to back up the claim that Vancouver is more dangerous.
All of your methodology objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel’s piece on criticisms of effective altruism. And all of these criticisms were systematically responded to and found lacking by Halstead et al.’s defense paper.
I’d highly recommend reading both of these. They are both pretty bad ass.