Common Misconceptions about Effective Altruism
In reaction to a recent article that straw-manned effective altruism seemingly without intent, I decided to write an article on some common misconceptions about effective altruism, and Tom Ash started two corresponding wiki pages, one on “Common objections to effective altruism” and one on “Common objections to earning to give.” I’ve copied this article into the first. If you have anything to add or correct, you’re invited to contribute it there, so that well-meaning journalists can more easily avoid such errors.
Effective altruism has seen much welcome criticism that has helped it refine its strategies for determining how to reach its goal of doing the most good—but it has also seen some criticism that is fallacious. In the following we want to correct some misconceptions that we’ve become aware of so far.
If Everyone Did That (“Kantian fallacy”)
Misconception
“If we all followed such a ridiculous approach” as effective altruism, then all worthwhile causes outside “global health and nutrition” would cease to receive funding.1
Short Answers
Top effective charities have limited room for more funding. At some point they’ll have absorbed so much money that additional donations will do much less good, so other charities will become top charities.
If just 0.03% of yearly donations in the US alone were shifted to the top effective charities we know of, they would have no room for more funding left (see the back-of-the-envelope arithmetic below).
Once that point is reached, doing the most good will get slightly more expensive or slightly more risky, and the other, previously less effective or less proven interventions will successively receive funding.
But yes, “funding for the arts” would have to wait until deadly diseases and extreme poverty are sufficiently under control.
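A back-of-the-envelope check of that figure, under two assumptions of mine rather than figures from the source: total US giving of roughly $360 billion per year (about the Giving USA estimate for 2014) and room for more funding on the order of $100 million across the top charities we know of:

\[
0.0003 \times \$360\ \text{billion} \approx \$108\ \text{million}
\]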
Long Answers
In evaluating interventions, charity prioritization typically relies on the criteria of tractability, scalability, and neglectedness. The last two of these turn charity prioritization into an anti-inductive system, where some arguments along the lines of the categorical imperative become inapplicable: You recommend an underfunded intervention, then people follow your recommendation and donate to it, then it reaches its limits in scale, and finally you have to withdraw your recommendation as it is no longer underfunded.
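To make that cycle concrete, here is a minimal toy sketch in Python. The charity names, costs per unit of good done, and funding gaps are all invented for illustration, and no real prioritizer works off numbers this simple; the point is only the loop of recommending the most cost-effective charity with room left and withdrawing the recommendation once that room is filled.

```python
# Toy model of the recommend -> fund -> withdraw cycle (all figures invented).

charities = [
    # name, hypothetical cost per unit of good done ($), remaining room for more funding ($)
    {"name": "A", "cost_per_unit": 50, "room": 1_000_000},
    {"name": "B", "cost_per_unit": 80, "room": 5_000_000},
    {"name": "C", "cost_per_unit": 200, "room": 50_000_000},
]

def current_recommendation():
    """Return the most cost-effective charity that still has room for more funding."""
    open_charities = [c for c in charities if c["room"] > 0]
    return min(open_charities, key=lambda c: c["cost_per_unit"]) if open_charities else None

donations_per_round = 2_000_000  # what donors move per round in this toy world
while (rec := current_recommendation()) is not None:
    print(f"Recommending {rec['name']}: ${rec['cost_per_unit']}/unit, ${rec['room']:,} room left")
    rec["room"] -= min(donations_per_round, rec["room"])  # donors follow the recommendation
    if rec["room"] == 0:
        print(f"{rec['name']} has no room left -- recommendation withdrawn")
```

Each withdrawn recommendation is succeeded by a slightly less cost-effective one, which is the “more expensive or more risky” progression discussed further down.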
Imagine you are organizing a large two-day effective altruism convention. There are several hotels close to the venue, one of which is very well known and soon fully booked. Panicked attendees keep asking you what they should do, so you call the other hotels in the vicinity. It turns out there is one that is even closer to the venue with scores of rooms left. So you send out a tweet and recommend that people check in there. They do, and promptly the hotel is also fully booked, and you have to do another round of telephone calls and update your recommendation. But it is fallacious to argue that your first recommendation, or the very act of making it, was wrong to begin with just because if everyone follows it, it’s no longer valid. That’s in the nature of the beast.
Because the “if everyone did that” argument is so common, let’s give it a name: “Kantian fallacy.” The “buy low, sell high” rule of the stock market is not wrong just because if everyone bought low, the price would not be low, and if everyone sold high, the price would not be high. Advising against the overused typeface Papyrus is not wrong just because if no one used it, it would no longer be overused. Surprising someone with a creative present is not wrong just because if everyone made the same present, it would not be surprising.
The last analogy is less fitting than the previous ones, because we don’t actually want good interventions to be underfunded. When all the governments and foundations see how great deworming is and allocate so much funding to it that hardly a worm survives, then any recommendation for more donations for deworming has to be withdrawn in favor of more neglected interventions—but that’s a reason to party!
So what would recommendations look like when malaria, schistosomiasis, and poverty are eradicated or eliminated to the extent that other interventions become superior? Or what would happen when other bottlenecks thwart further effective donations in these areas?
That future is already sending its sunny rays into the past: foundations like Good Ventures and the Gates Foundation have much more funding available than the currently known top charities could absorb. What happens is that doing good either becomes more expensive (when you trade off cost-effectiveness for scalability) or more risky (when you trade off certainty for other qualities). The latter is the more interesting and encouraging scenario. More on that in the next section.
Exclusive Focus on Interventions That Are Easy to Study
Misconception
“I mentioned bed nets because that is mostly what Effective Altruism amounts to: It measures what is immediately and visibly measurable, a problem known as the Streetlight Effect.”2
“GiveWell has a particular fixation with global health and nutrition charities. It at least implicitly recommends that one should support charities only in those cause areas.”1
Short Answers
Empirically, effective altruists who are eager to invest in highly speculative, high risk–high return interventions outnumber those who prefer the safe bets of proven and repeatable medical interventions.
This eagerness to accept financial and personal risks to effect superior positive impact has led to a flourishing culture of metacharities, prioritization research, medical research, political research and advocacy, research on global catastrophic risk, and much more within the movement.
GiveWell in particular has long been eager to expand to interventions that are less easy to study, and so it has been investigating (under the name Open Philanthropy Project) a wide range of global catastrophic risks, political causes, and opportunities for research funding since before effective altruism had a name.
Long Answers
Where the last answer required a new name for a fancy fallacy, this one is simply called “straw man.”
As a pioneer in the field of charity prioritization, GiveWell had a herculean task ahead of it and very limited resources in terms of money, time, and research analysts. (This was years before effective altruism had consolidated into a movement.) Since funneling more donations to above-average charities is already better than the status quo, the team quickly learned that as a starting point they had to focus narrowly on cause areas marked by extreme suffering, interventions with solid track records of cost-effective and scalable implementation, and charities with the transparency and commitment to self-evaluation that would enable GiveWell to assess them. As it happens, these combinations were mostly found in the cause areas of disease and poverty. This decision was one of necessity at the time, but soon afterward GiveWell managed to scale up its operations significantly, so that these restrictions no longer applied.
Some of the best and most cost-effective giving opportunities may well lie in areas or involve interventions that are harder to study. Hence GiveWell has been investigating these under the brand of the Open Philanthropy Project (initially known as “GiveWell Labs”) since 2011. (That’s still before effective altruism had a name or called itself a movement.) Much scientific research promises great cost-effectiveness; so do some interventions to avert global catastrophic risks and to solve political problems. Doing the most good may well mean investing somewhere in one of these areas—where exactly, Open Phil has set out to find out.
In 2012 the effective altruism movement got its name and consequently consolidated its efforts at doing the most good. Nowhere in the agenda of the movement does it say that effective interventions need to be easy to study or quantify. In fact, opinions and preferences on what “the most good” means in practice vary (though some are shared by almost everyone). According to a 2014 survey, about 71% of the EAs in the survey sample were interested in supporting poverty-related causes, but almost 76% were interested in metacharity, including rationality and prioritization research, which are not easy to quantify. About one third to one fourth were interested in each of antispeciesism, environmentalism, global catastrophic risk, political causes, and (nonexistential) far future concerns, most of which are hard to study. There is by no means an unwarranted bias toward interventions that are easy to study; if anything, there’s a surprising tendency toward speculative, high risk–high return interventions.
Finally, far from “implicitly recommend[ing] that one should support charities only in [the cause areas of global health and nutrition],” GiveWell (1) recommends a charity outside these areas, (2) notes on every charity review that did not lead to a recommendation that the “write-up should not be taken as a ‘negative rating’ of the charity” (emphasis in original), and (3) gives reasons why a philanthropist may legitimately choose not to donate to its recommended charities right on its Top Charities page.
Consequentialism and Utilitarianism
Misconception
Effective altruism depends on a utilitarian or consequentialist morality as implied in statements like, “not that [reducing corruption in the police force] could meet [effective altruism’s] utilitarian criteria for review.”2
Short Answers
What’s so bad about wanting to maximize happiness and minimize suffering in the world?
But there are also effective altruists with other moral systems, and effective altruism seems to follow from them as well.
Long Answers
Admittedly, most effective altruists are utilitarians or consequentialists. If you want to maximize happiness and minimize suffering in the world (or maximize informed preference satisfaction), then it’s clear that effective altruism follows.
But what about deontology? Take Rawls (figures courtesy of the UN, the WHO, and the World Bank):
More than one in ten people don’t have access to safe drinking water.
More than one in ten people suffer extreme hunger.
More than one in nine people live in slums.
Almost half the world’s population is at risk of malaria.
Almost half the world’s population lives on less than the buying power of $2.50 per day.
…
You have limited resources.
Imagine you’re in the original position behind the veil of ignorance and have to allocate our limited resources. Surely you’d make an admirable effective altruist.
This is beside the point, but a less corrupt police force will provide greater safety to the population and enjoy greater trust in return. The rich will no longer have recourse to bribing the police, so that poorer people are in a better position to trade and negotiate with them. The positive marginal impact on the happiness of the poor is likely to be greater than the marginal negative impact on the happiness of the rich. So there’s one of countless utilitarian cases for fighting corruption.
Top-Down and Elitist
Misconception
“The defective altruism distribution plan ‘requires a level of expertise that few individuals have.’ Thus, over time, we would require a very centralized and top-down approach to marshal and manage social investment and charitable giving decisions in a manner acceptable to the proponents of this approach.”1
Short Answers
That this is the case is a central grievance about the charity market, one that effective altruism tries to remedy and without which the movement might not even be necessary.
Long Answers
It is one of the unfortunate truisms of the human condition that no market is perfect, but the charity market is particularly and abysmally imperfect. If someone wants to buy a solid-state drive, they might check, among other things, the price per gigabyte. $0.96 per gigabyte? Rather expensive. $0.38 per gigabyte? Wow, what a bargain! When people want to invest in a company, they check the company’s earnings over the past years, compare them to the stock price, and decide whether it’s a bargain or usury. Or if you have a headache, do you buy a homeopathic remedy that does nothing for $20 or Aspirin for $5?
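To make the unit-cost comparison concrete, here is a minimal sketch in Python. The SSD prices mirror the ones above; the charity figures and “outcomes” are entirely made up, since real impact figures are exactly what is usually missing.

```python
# Toy illustration of unit-cost comparison (charity figures are hypothetical).

def unit_cost(total_cost, units):
    """Cost per unit of output, e.g. dollars per gigabyte or per outcome."""
    return total_cost / units

# Consumer goods: the comparison is easy.
ssd_a = unit_cost(240.0, 250)    # $240 for 250 GB -> $0.96 per GB
ssd_b = unit_cost(190.0, 500)    # $190 for 500 GB -> $0.38 per GB

# Charities: the same comparison would be just as easy -- if impact figures existed.
charity_a = unit_cost(1_000_000, 200)     # $5,000 per (hypothetical) outcome
charity_b = unit_cost(1_000_000, 2_000)   # $500 per (hypothetical) outcome

print(f"SSD A: ${ssd_a:.2f}/GB, SSD B: ${ssd_b:.2f}/GB")
print(f"Charity A: ${charity_a:,.0f}/outcome, Charity B: ${charity_b:,.0f}/outcome")
```

The point is not the arithmetic but that the second half of the comparison is currently impossible for most charities.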
I wasn’t there when it happened, but I imagine that when the first effective altruists wanted to donate, they called charities and were like “Hi, I like what you do and want to invest in your program. Could you give me your latest impact figures?” I imagine the responses ranged from “Our what?” through “You’re the first to ever ask for that” to “We have no idea.”
When the charities that run the programs don’t even know if they do anything good or anything at all in proportion to their cost, then how are donors supposed to find out? They would have to draw on the research of experts in the field and, to some extent, would have to become experts themselves.
Prioritization organizations want to change that. They dangle a pot of money promised to the charities that make the best case for being highly effective. That way they incentivize transparency, self-evaluation, and optimization. Eventually, we hope, this will encourage a charity market that makes it much easier for everyone to recognize the charities with the most “bang for the buck.”
Ken Berger and Robert M. Penna, “The Elitist Philanthropy of So-Called Effective Altruism,” 2013, accessed 2015-03-23, http://www.ssireview.org/blog/entry/the_elitist_philanthropy_of_so_called_effective_altruism.
Pascal-Emmanuel Gobry, “Can Effective Altruism Really Change the World?,” 2015, accessed 2015-03-23, http://theweek.com/articles/542955/effective-altruism-really-change-world.