Iason Gabriel writes: What’s Wrong with Effective Altruism
We’re always on the lookout for well-written critiques of effective altruism, and here’s one that turned up recently.
It’s a paper by Iason Gabriel called What’s Wrong with Effective Altruism?
I’ll try and summarize its main points in my own words, but I’d urge you to read the original.
Related:
Rob Wiblin responds (although not in detail) on the 80K blog
Stefan Schubert writes on this forum defending the “thin” version of EA against triviality objections
The paper is not EA-bashing. It describes a model of what “effective altruism” actually means, looks at common objections to it and attempts to weigh each one in turn.
Description of effective altruism and background
Effective altruism encourages individuals to make altruism a central part of their lives, and combines this with a more specific commitment to do as much expected good as possible, typically by contributing money to the best-performing aid and development organizations. Effective altruists are also committed to the idea that scientific analysis and careful reasoning can help us identify which course of action is best.
The paper also praises the EA movement for important successes: the establishment of new meta-charities, creating an incentive to demonstrate effectiveness, and drawing attention to the message that individuals in high-income countries have the power to do “an incredible amount of good”.
Gabriel also makes an interesting claim about the dynamics of private donations vs. government aid, and how they can be influenced:
the distortions that affect private giving are both more serious and less deeply entrenched than those that affect the distribution of aid. These distortions are more serious because only a tiny percentage of the money donated by individuals makes its way to the world’s poorest people, where it would often do the most good. They are less entrenched because they often result from a lack of information, or carelessness, rather than from the pursuit of competing geopolitical aims. Taken together, these considerations suggest that there is an important opportunity for moral leverage
Gabriel points out that EA has met with “considerable resistance among aid practitioners and activists” and that
I believe that it can be explained both by the competitive dynamics that exist within the philanthropic sector and also by deeper disagreements about value.
Thick and thin versions of effective altruism
The thin version of the doctrine holds that ‘we should do the most good we can’ [...] The thick version of effective altruism makes a number of further assumptions.
These further assumptions can be summarized as:
Welfarism: “Good states of affairs are those in which suffering is reduced and premature loss of life averted.”
Consequentialism
Scientific approach: “It is possible to provide sound general advice about how individual people can do the most good”
The paper focuses on the thick version. It also (for reasons of space) leaves non-human animal issues aside.
I’m leaving most of my own remarks to the comments section, but I’ll just point out here that the “thick” and “thin” versions of EA described by Gabriel don’t exactly correspond to the “core idea” and “associated ideas” described by Wiblin in his response.
Is effective altruism unjust?
Equality: the paper claims that while people in the EA movement recognize that equality is instrumentally important, most do not believe equality has any intrinsic value. This is illustrated with the two villages thought experiment:
There are two villages, each in a different country. Both stand in need of assistance but they are unaware of each other and never interact. As a donor, you must choose between financing one of two programs. The first program allocates an equal amount of money to projects in each community and achieves substantial overall benefit. The second program allocates all of the money to one village and none to the other. By concentrating resources it achieves a marginally greater gain in overall welfare than the first project
The claim in this example is that EAs prefer the second program, while people with “an intuitive commitment to fairness” prefer the first.
Gabriel lists three possible responses EA could give to this criticism:
Bite the bullet, and insist that equality has no independent weight (and that the second program in the example is better)
Modify our utility functions to include a term for equality (although Gabriel doesn’t quite put it in those words; a rough sketch of what this could look like follows this list)
Use equality as a tie-breaker
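To make the second response concrete, here is a minimal sketch of a welfare function with an explicit equality term. This is my own illustration, not Gabriel’s formalism: the penalty form, the weight, and the numbers are all assumptions.

```python
# A minimal sketch of "a term for equality" (my gloss, not Gabriel's).
# The penalty form, the weight, and all numbers are assumptions.

def social_welfare(benefits, equality_weight=0.5):
    """Total benefit minus a penalty for how unequally it is spread."""
    total = sum(benefits)
    mean = total / len(benefits)
    # Mean absolute deviation as a crude inequality measure.
    inequality = sum(abs(b - mean) for b in benefits) / len(benefits)
    return total - equality_weight * inequality

# Two villages: program 1 splits the benefit, program 2 concentrates it
# for a marginally greater total (as in the thought experiment).
program_1 = [10, 10]  # equal split
program_2 = [21, 0]   # slightly larger total, all in one village

print(social_welfare(program_1))  # 20.0 -> program 1 wins
print(social_welfare(program_2))  # 21 - 0.5 * 10.5 = 15.75
```

With a large enough equality weight the first program wins despite its lower total; with the weight set to zero this collapses back to the bullet-biting response.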
Priority: the paper makes the empirical claim that the very poorest people in the world are often particularly hard to help. It claims that while EA would tend to ignore such cases, many people believe they should be prioritized—either because meeting urgent claims first is a basic component of morality, or because of some need to justify state power.
Again there is a thought experiment:
Ultrapoverty. There are a large number of people living in extreme poverty. Within this group, some people are worse-off than others. As a donor, you must choose between financing one of two development interventions. The first program focuses on those who will benefit the most. It targets literate men in urban areas, and has considerable success in sustainably lifting them out of poverty. The second program focuses on those who are most in need of assistance. It works primarily with illiterate widows and disabled people who live in rural areas. It also has some success in lifting these people out of poverty but is less successful at raising overall welfare.
Gabriel points out that EA logic leads to supporting the literate men in this example, and that “when this pattern of reasoning is iterated many times over, it leads to the systematic neglect of those at the very bottom—something that strikes many people as unjust.”
The possible responses listed are:
Bite the bullet
“endorse a defeasible priority principle that would give the claims of the worst-off some priority over the claims of the better-off, even if a higher total utility could be achieved by focusing on the latter group of people” (I don’t quite understand exactly what this means; my best reading is sketched after this list)
Use priority as a tie-breaker
(Regarding the state power argument) apply different moral principles to political institutions vs. private donors
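On the second response: my best reading of a “defeasible priority principle” is that benefits to worse-off people count for more, but not infinitely more, so a large enough gain to the better-off can still win. The sketch below is my own gloss, not Gabriel’s formalism; the weighting function and all numbers are made up.

```python
def priority_weighted_value(recipients):
    """Sum of gains, each weighted by how badly off the recipient is.

    `recipients` is a list of (baseline_welfare, gain) pairs; a lower
    baseline earns a higher weight. The weighting is an assumption.
    """
    return sum(gain / (1.0 + baseline) for baseline, gain in recipients)

# Ultrapoverty-style comparison: program 1 helps better-off people more
# in raw terms; program 2 helps the worst off by a smaller amount.
program_1 = [(2.0, 3.0)] * 10  # raw total gain: 30
program_2 = [(0.5, 2.0)] * 10  # raw total gain: 20

print(priority_weighted_value(program_1))  # 10 * 3.0/3.0  = 10.0
print(priority_weighted_value(program_2))  # 10 * 2.0/1.5 ~= 13.3 -> wins

# "Defeasible": a big enough gain to the better-off still dominates.
print(priority_weighted_value([(2.0, 5.0)] * 10))  # ~= 16.7
```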
Rights: the paper defines a right as “a justified claim to some form of treatment by a person or institution that resists simple aggregation in moral reasoning”. This is illustrated with the following example:
Sweatshop. The country you are working in has seen a rapid expansion of dangerous and poorly regulated factory work in recent years. This trend has helped lift a large number of people out of poverty but has also led to an increase in workplace fatalities. As a donor, you are approached by a group of NGOs who want to campaign for better working conditions. There is reason to believe that they can persuade the government to introduce new legislation if they have your financial backing. These laws would regulate the industry but reduce the number of opportunities for employment in the country as a whole.
Gabriel claims that most EAs would refuse to support this campaign, and cites William MacAskill as saying there is “no question that [sweatshops] are good for the poor”. Gabriel argues that EAs could modify their theory to give independent weight to rights, but lists different possible outcomes for what this would mean for the Sweatshop case.
Is effective altruism blind?
Here Gabriel addresses the question of how the scientific method is put into practice within the effective altruism movement, and whether this introduces various forms of systematic bias. He starts off by describing how GiveWell and Giving What We Can evaluate their charities:
assessing the scale of a problem
looking for proven interventions to help with that problem
looking for neglectedness
auditing individual organizations
And then comments:
all but one of the ‘top charities’ endorsed by GiveWell and GWWC focus on neglected tropical diseases (NTDs). They also make the movement vulnerable to the charge that it suffers from a form of methodological blindness.
Materialism (1): the claim that EA overvalues hard evidence such as RCTs
Materialism (2): the claim that EA in practice relies too much on metrics such as the DALY, which ignore factors we care about in principle, such as autonomy or self-actualization. He says that EA has been improving in this area but that more still needs to be done.
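For context (this is the standard definition, not something specific to Gabriel’s paper): a DALY combines years of life lost to premature death with years lived with disability, the latter scaled by a disability weight between 0 and 1. A toy calculation, with invented numbers, makes the criticism visible: nothing in the formula has anywhere to put autonomy or self-actualization.

```python
def dalys(deaths, years_lost_per_death, cases, disability_weight, years_ill):
    """DALYs = years of life lost (YLL) + years lived with disability (YLD)."""
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * years_ill
    return yll + yld

# Invented numbers, for illustration only.
print(dalys(deaths=10, years_lost_per_death=30,
            cases=200, disability_weight=0.2, years_ill=5))
# 10*30 + 200*0.2*5 = 300 + 200 = 500 DALYs: mortality and weighted
# morbidity are all the metric sees.
```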
Individualism: the claim that EA undervalues collective goods such as community empowerment and hope. This is illustrated with another thought experiment, Medicine (below), the claim being that allocating 10% to ARVs leads to more hope.
Medicine. According to recent estimates condom distribution is a far more effective way of minimizing the harm caused by HIV/AIDS than the provision of anti-retrovirals (ARVs). Whereas ARVs help people who already have the virus, condoms help to prevent many more people from becoming infected. As a donor, you must choose between funding one of two National Action Plans. The first program allocates the entire sum of money to condom distribution. The second program allocates 90% to condom distribution and 10% to ARVs.
Instrumentalism: this point seems somewhat abstract and is again maybe best illustrated with Gabriel’s example:
Participation. There are a group of villages that require help developing their water and sanitation system in order to tackle the problem of waterborne parasites. As a donor you face a choice between funding one of two projects. The first project will hire national contractors to build the water and sanitation system, something that they have done successfully in the past. The second project works with members of the community to develop and build new facilities. This approach has also worked in the past, but because villagers lack expertise their systems tend to be less functional than the ones built by experts.
Gabriel’s claim is that the community-based solution has better knock-on effects, such as greater autonomy and self-esteem, and that villagers value the community-built system more and hence maintain it better. A simple cost-effectiveness estimate would overlook these.
Is effective altruism effective?
Effective altruists aim to make choices and life-decisions that do the greatest amount of overall good. This section asks how robust the advice they provide actually is.
Counterfactuals: in possibly the most interesting claim in the paper, Gabriel asks us “if individual people who are affiliated with the movement stopped giving a portion of their income to the top charities: would larger philanthropic organizations simply step in and fill the gap?”
Gabriel raises the obvious question of why large donors such as the Gates Foundation haven’t already fully funded charities such as AMF and SCI. He gives three suggestions:
The large donors aren’t fully on board with the effectiveness thing
Large donors may not feel that GiveWell and GWWC are doing their prioritization research correctly
The EA movement is cute and needs to be supported in its growth by giving it some nice charities to play with (not Gabriel’s exact words)
In support of the third claim, Gabriel points out that GiveWell is able to fully fund its top charities via Good Ventures, but has chosen not to do so. If true, it’s fairly obviously problematic for us.
Motivation and rationality: the claim is that to make it big, EA needs to understand psychology, and that we don’t. In particular:
Gabriel refers to David Brooks as saying that an earning-to-give career might not be psychologically sustainable for most people—even if it is for a few EAs.
EA appeals too much to reason and not enough to emotion.
Systemic change: the claim here is that EA could stand to learn from historical movements such as abolitionism and civil rights—in particular putting a greater focus on justice, and less on cost-effectiveness or on the feel-good factor of giving.
Read the full paper here. I feel my own thoughts on this belong in the comments section so I’ll add them there.
Also note that I’m not Iason Gabriel.
If prioritarianism demands focusing on helping fewer people living on <$1 a day rather than many people living on $1.25 per day, then virtually all rich country welfare state spending and domestic charity fails. Does Gabriel accept that?
The standard arguments against substantial prioritarianism (basically that it means a lot more suffering and death) seem pretty good, and EA in fact helps people far worse off than those prioritarians/egalitarians often focus on (people who are poor relative to their rich countries but rich by global standards).
This seems pretty explicitly something that Good Ventures is doing. They are providing some funding to GW’s picks for demonstration and movement-building purposes, but leaving lots of room for funding to bolster EA movement growth, while working on more effective giving options with OPP.
And I agree that the expected value for things like OPP picks on foreign aid meta-research, or humanitarian immigration advocacy, will likely be substantially better than AMF.
However, that means that funging with Good Ventures is pretty good, since it will leave them with more dollars for OPP. If you are substituting for big foundation X then your marginal impact is the marginal impact of a dollar in the hands of foundation X. Still, this is why I often favor donations to small startup projects (e.g. Charity Science) and such where transaction costs or other barriers prevent the involvement of large funders.
There’s a good amount of evidence for this, if it means choosing a career one otherwise dislikes for higher earnings.
A big portion of the case for GW’s charities is precisely community spillover effects in community-scale RCTs!
I think that the priority/ultrapoverty strand of this argument is one place where you can’t ignore nonhuman animals. My intuition says that they’re among the worst off, and relatively cheap to help.
These criticisms are neither new nor particularly compelling to me.
The argument about justice has no real force unless Gabriel wants to unpack it on an actual intervention. While it’s possible to concoct thought experiments in which Gabriel’s notion of a thick effective altruist makes a decision that Gabriel doesn’t like, IMO this rarely comes up in practice and there are no real-world examples in the paper. So, not very exciting. There’s already a whole cottage industry of thought experiments in which utilitarians do silly things.
Also, the claim that EA systematically neglects the worst-off is ridiculous on its face; many EAs explicitly have a heuristic of trying to help the worst-off and it seems that most other attempts at improving the world fare vastly worse than EAs here.
WRT the charges of “materialism”, “individualism” and “instrumentalism”, again, people have been making these for a while and I still don’t find them compelling. First, it seems that Gabriel has no idea what Open Phil (or other non-global-poverty focused EAers) are up to, as the description of “EA methodology” really only applies to a pretty narrow segment of organizations.
Second, I think it’s pretty clear that GW are aware of the limits of RCT evidence and try to think through the consequences of their interventions that might not get picked up by them. If Gabriel wants to argue that their “blindness” here has caused them to make actual bad decisions, then I’d be interested in hearing that argument—but claiming that “something is wrong with EA because I think that GiveWell would make this obviously bad call in a contrived thought experiment” is a long way from such an argument. Again, claims of “more needs to be done about this” without specific criticisms of actual decisions (instead of criticisms of what Gabriel imagines GW would do) lack much force to me.
I just want to pick up quickly on something mentioned in footnote 6. Iason writes “According to a recent survey of effective altruists 69 percent were consequentialists, 2% were deontologists, and 5% were virtue ethicists” (and 20% were “Other”), but it’s worth emphasising that we can’t take our survey to represent EA tout court, just our sample. That said, I think the claim in the main text that EA is “broadly consequentialist” in its “thick” mode is one few would disagree with.
I’m a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): “they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare.”
It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far future and xrisk stuff), welfare is a better buy. I can’t necessarily assume many people in EA agree with me though.
There’s also some confusion in the language between what people in EA do, and what their representatives in GW and GWWC do. I’m thinking of:
Interesting. My view is that EAs do tend to view these things as valuable only insofar as they serve wellbeing, at least in their explicit theorising and decision-making. That’s my personal view anyway. I’d add the caveat that I think most people implicitly judge according to a more deontological folk morality (i.e. we actually do think that fairness, and our beliefs being right, are important).
I think this varies a bit by cause area though. For example (and this is not necessarily a criticism) the animal rights (clue’s in the name) section seems much more deontological.
Some of those things I would just define in utilitarian terms. I would view justice as ‘the social arrangement that maximizes utility’, and the form of equality I value most highly is equal consideration of interests (of course, I value other forms of equality instrumentally).
As an animal rights EA involved in one of the more explicitly deontological organizations (DxE), I have to say there are more consequentialists than you’d think. I’m a consequentialist, for instance, but think rights and the like often have high instrumental value.
One thing I find interesting about all the thought experiments is that they assume a one-donor, many-recipient model. That is, the morality of each situation is analyzed as if a single agent is making the decision.
Reality is many donors, many recipients, and I think this affects the analysis of the examples: firstly because donors influence each other’s behaviour, and secondly because moral goods may aggregate on the donor end even if they don’t aggregate on the recipient end. I’ll try and explain with some examples, with a small numeric sketch after them:
Two villages (a): each village currently receives 50% of the donations from other donors. Enough of the other donors care about equality that this number will stay at 50% whichever one you donate to (because they’ll donate to whichever village receives less than 50% of the funds). So whether you care about equality or not, as a single donor your decision doesn’t matter either way.
Two villages (b): each village currently receives 50% of the donations from other donors, but this time it’s because the other donors are donating carelessly. Moral philosophers have decided that the correct allocation (balancing equality with overall benefit) is for one village to receive 60% of donations and the other to receive 40%. As a relatively small donor, your moral duty then is to give all your money to one village, to try and nudge that number up as close to 60% as you can.
Medicine (a): Philosophers have decided the ideal distribution is 90% condoms and 10% ARVs. Depending on what the actual distribution is, it might be best to put all your money into funding condoms, or all your money into funding ARVs, and only if it’s already right on the mark should you favour a 90/10 split.
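Here is the promised numeric sketch of the nudging logic in Two villages (b) and Medicine (a). The amounts and target shares are made up; the point is just that a small donor’s best move depends on the gap between the current split and the stipulated ideal.

```python
def best_marginal_gift(current, target_share_a, my_donation):
    """Pick the recipient that moves the overall split closest to target.

    `current` is (funds_to_A, funds_to_B); `target_share_a` is the ideal
    fraction going to A. A small donor gives everything to one side.
    """
    total = sum(current) + my_donation
    share_if_a = (current[0] + my_donation) / total  # give all to A
    share_if_b = current[0] / total                  # give all to B
    if abs(share_if_a - target_share_a) < abs(share_if_b - target_share_a):
        return "A"
    return "B"

# Two villages (b): others split 50/50; the ideal is 60/40 -> give to A.
print(best_marginal_gift(current=(500, 500), target_share_a=0.6, my_donation=10))

# Medicine (a): condoms (A) currently get ~95% but the ideal is 90%,
# so a marginal donor should fund ARVs (B).
print(best_marginal_gift(current=(950, 50), target_share_a=0.9, my_donation=10))
```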
I don’t think the Ultrapoverty, Sweatshop and Participation examples are affected by this particular way of thinking though.
I just get the feeling that something like consequentialism will emerge, even if you start off with very different premises, once you take into account other donors giving to overlapping causes but with different agendas. Or at least, that this would be so for as long as people identifying with EA remain a tiny minority.
re: the multinational tax dodging:
http://www.cgdev.org/blog/how-much-do-we-really-know-about-multinational-tax-avoidance-and-how-much-it-really-worth
http://www.cgdev.org/blog/talking-about-tax-taxing-pretending-it-simple-will-hurt-poor
http://www.ictd.ac/en/corporate-tax-avoidance-and-development-opening-pandora%E2%80%99s-box
re: his criticism of a small funding gap:
http://effective-altruism.com/ea/ji/room_for_more_funding_why_doesnt_the_gates/
There’s another response that EAs could have to the priority/ultrapoverty strand, which is to bend their utility functions so that ultrapoverty is rated as even worse, and improvements at the ultrapoverty end count for more. Of course, however concave the utility function is, you can still construct a scenario where the people at the ultrapoverty end are ignored; a worked example follows.
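To make that last point concrete, here is a worked example with log utility of consumption standing in for “however concave”; the functional form and the numbers are my assumptions.

```python
import math

def utility_gain(consumption_before, consumption_after):
    # Log utility: strongly concave, so gains to the poorest count most.
    return math.log(consumption_after) - math.log(consumption_before)

# Doubling one person's consumption at the ultrapoverty end:
worst_off = utility_gain(0.50, 1.00)   # ~0.693

# A small gain for one moderately poor person:
per_person = utility_gain(1.25, 1.35)  # ~0.077

print(worst_off / per_person)  # ~9.0: a program reaching ten such people
                               # outscores helping the worst-off person,
                               # despite the strongly concave utility
```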
My first thought on reading the “Two villages” thought experiment was that the village that was easier to help would be poorer, because of the decreasing marginal value of money. If this was so, you’d want to give all your money to the poorer one if your goal was to reduce “the influence of morally arbitrary factors on people’s lives”.
On the other hand, that gets reversed if the poorer village is the one that’s harder to help. In that case fairness arguments would still seem to favour putting all your money in one village, just the opposite one to what consequentialists would favour. (So this problem can’t be completely separated from the Ultrapoverty one.)