Iason Gabriel writes: What’s Wrong with Effective Altruism?

We’re always on the lookout for well-written critiques of effective altruism, and here’s one that turned up recently.

It’s a paper by Iason Gabriel called What’s Wrong with Effective Altruism?

I’ll try to summarize its main points in my own words, but I’d urge you to read the original.


The paper is not EA-bashing. It describes a model of what “effective altruism” actually means, looks at common objections to it and attempts to weigh each one in turn.

Description of effective altruism and background

Effective altruism encourages individuals to make altruism a central part of their lives, and combines this with a more specific commitment to do as much expected good as possible, typically by contributing money to the best-performing aid and development organizations. Effective altruists are also committed to the idea that scientific analysis and careful reasoning can help us identify which course of action is best.

The paper also praises the EA movement for important successes: the establishment of new meta-charities, creating an incentive to demonstrate effectiveness, and drawing attention to the message that individuals in high-income countries have the power to do “an incredible amount of good”.

Gabriel also makes an interesting claim about the dynamics of private donations vs. government aid, and how they can be influenced:

the distortions that affect private giving are both more serious and less deeply entrenched than those that affect the distribution of aid. These distortions are more serious because only a tiny percentage of the money donated by individuals makes its way to the world’s poorest people, where it would often do the most good. They are less entrenched because they often result from a lack of information, or carelessness, rather than from the pursuit of competing geopolitical aims. Taken together, these considerations suggest that there is an important opportunity for moral leverage

Gabriel points out that EA has met with “considerable resistance among aid practitioners and activists” and that

I believe that it can be explained both by the competitive dynamics that exist within the philanthropic sector and also by deeper disagreements about value.

Thick and thin versions of effective altruism

The thin version of the doctrine holds that ‘we should do the most good we can’ [...] The thick version of effective altruism makes a number of further assumptions.

These further assumptions can be summarized as:

  • Welfarism: “Good states of affairs are those in which suffering is reduced and premature loss of life averted.”

  • Consequentialism

  • Scientific approach: “It is possible to provide sound general advice about how individual people can do the most good”

The paper focuses on the thick version. It also sets non-human animal issues aside, for reasons of space.

I’m leaving most of my own remarks to the comments section, but I’ll just point out here that the “thick” and “thin” versions of EA described by Gabriel don’t exactly correspond to the “core idea” and “associated ideas” described by Wiblin in his response.

Is effective altruism unjust?

Equality: the paper claims that while people in the EA movement recognize that equality is instrumentally important, most do not believe equality has any intrinsic value. This is illustrated with the two villages thought experiment:

There are two villages, each in a different country. Both stand in need of assistance but they are unaware of each other and never interact. As a donor, you must choose between financing one of two programs. The first program allocates an equal amount of money to projects in each community and achieves substantial overall benefit. The second program allocates all of the money to one village and none to the other. By concentrating resources it achieves a marginally greater gain in overall welfare than the first project

The claim in this example is that EAs prefer the second program, while people with “an intuitive commitment to fairness” prefer the first.

Gabriel lists three possible responses EA could give to this criticism:

  • Bite the bullet, and insist that equality has no independent weight (and that the second program in the example is better)

  • Modify our utility functions to include a term for equality (although Gabriel doesn’t quite put it in those words; see the sketch after this list)

  • Use equality as a tie-breaker
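
To make the second response concrete (this is my own gloss, not a formula from the paper), the idea would be to evaluate outcomes with a welfare function that subtracts a penalty for inequality, something like:

$$W = \sum_i u_i \;-\; \alpha \cdot I(u_1, \dots, u_n)$$

where $u_i$ is the welfare of person (or village) $i$, $I$ is some measure of inequality such as the variance or the Gini coefficient, and $\alpha > 0$ sets how much equality matters. With a large enough $\alpha$, the first program in the two villages case comes out ahead despite producing slightly less total welfare.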

Priority: the paper makes the empirical claim that the very poorest people in the world are often particularly hard to help. It claims that while EA would tend to ignore such cases, many people believe they should be prioritized—either because meeting urgent claims first is a basic component of morality, or because of some need to justify state power.

Again there is a thought experiment:

Ultrapoverty. There are a large number of people living in extreme poverty. Within this group, some people are worse-off than others. As a donor, you must choose between financing one of two development interventions. The first program focuses on those who will benefit the most. It targets literate men in urban areas, and has considerable success in sustainably lifting them out of poverty. The second program focuses on those who are most in need of assistance. It works primarily with illiterate widows and disabled people who live in rural areas. It also has some success in lifting these people out of poverty but is less successful at raising overall welfare.

Gabriel points out that EA logic leads to supporting the literate men in this example, and that “when this pattern of reasoning is iterated many times over, it leads to the systematic neglect of those at the very bottom—something that strikes many people as unjust.”

The possible responses listed are:

  • Bite the bullet

  • “endorse a defeasible priority principle that would give the claims of the worst-off some priority over the claims of the better-off, even if a higher total utility could be achieved by focusing on the latter group of people” (I don’t quite understand what this means; see the sketch after this list for one possible reading)

  • Use priority as a tie-breaker

  • (Regarding the state power argument) apply different moral principles to political institutions vs. private donors
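
As a rough illustration of what a priority principle could mean formally (again my gloss, not Gabriel’s), one can aggregate welfare through a concave weighting function, so that a unit of benefit counts for more the worse off its recipient is:

$$W = \sum_i f(u_i), \qquad f \text{ increasing and strictly concave, e.g. } f(u) = \sqrt{u}$$

The priority is “defeasible” because the worst-off get extra weight rather than absolute precedence: a sufficiently large gain to the better-off can still outweigh a small gain to the worst-off.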

Rights: the paper defines a right as “a justified claim to some form of treatment by a person or institution that resists simple aggregation in moral reasoning”. This is illustrated with the following example:

Sweatshop. The country you are working in has seen a rapid expansion of dangerous and poorly regulated factory work in recent years. This trend has helped lift a large number of people out of poverty but has also led to an increase in workplace fatalities. As a donor, you are approached by a group of NGOs who want to campaign for better working conditions. There is reason to believe that they can persuade the government to introduce new legislation if they have your financial backing. These laws would regulate the industry but reduce the number of opportunities for employment in the country as a whole.

Gabriel claims that most EAs would refuse to support this campaign, and cites William MacAskill as saying there is “no question that [sweatshops] are good for the poor”. Gabriel argues that EAs could modify their theory to give independent weight to rights, but lists different possible outcomes for what this would mean in the Sweatshop case.

Is effective altruism blind?

Here Gabriel addresses the question of how the scientific method is put into practice within the effective altruism movement, and whether this introduces various forms of systematic bias. He starts off by describing how GiveWell and Giving What We Can evaluate charities:

  • assessing the scale of a problem

  • looking for proven interventions to help with that problem

  • looking for neglectedness

  • auditing individual organizations

And then comments:

all but one of the ‘top charities’ endorsed by GiveWell and GWWC focus on neglected tropical diseases (NTDs). They also make the movement vulnerable to the charge that it suffers from a form of methodological blindness.

Materialism (1): the claim that EA overvalues hard evidence such as RCTs

Materialism (2): the claim that EA in practice relies too heavily on metrics such as the DALY, which ignore factors we care about in principle, such as autonomy and self-actualization. Gabriel says that EA has been improving in this area but that more still needs to be done.
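
For context (this is the standard definition of the metric, not something taken from the paper), a DALY is simply the sum of years of life lost to premature mortality and years lived with disability:

$$\text{DALY} = \text{YLL} + \text{YLD}$$

Both components are purely health-based, which is why considerations like autonomy or self-actualization cannot register in a DALY-driven comparison.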

Individualism: the claim that EA undervalues collective goods such as community empowerment and hope. This is illustrated with another thought experiment (quoted below); the claim is that the program allocating 10% to ARVs generates more hope.

Medicine. According to recent estimates condom distribution is a far more effective way of minimizing the harm caused by HIV/AIDS than the provision of anti-retrovirals (ARVs). Whereas ARVs help people who already have the virus, condoms help to prevent many more people from becoming infected. As a donor, you must choose between funding one of two National Action Plans. The first program allocates the entire sum of money to condom distribution. The second program allocates 90% to condom distribution and 10% to ARVs.

Instrumentalism: this point seems somewhat abstract, so it is again perhaps best illustrated with Gabriel’s example:

Participation. There are a group of villages that require help developing their water and sanitation system in order to tackle the problem of waterborne parasites. As a donor you face a choice between funding one of two projects. The first project will hire national contractors to build the water and sanitation system, something that they have done successfully in the past. The second project works with members of the community to develop and build new facilities. This approach has also worked in the past, but because villagers lack expertise their systems tend to be less functional than the ones built by experts.

Gabriel’s claim is that the community-based solution has better knock-on effects: it fosters greater autonomy and self-esteem, and the villagers value the community-built system more and hence maintain it better. A simple cost-effectiveness estimate would overlook these effects.

Is effective altruism effective?

Effective altruists aim to make choices and life-decisions that do the greatest amount of overall good. This section asks how robust the advice they provide actually is.

Counterfactuals: in possibly the most interesting part of the paper, Gabriel asks: “if individual people who are affiliated with the movement stopped giving a portion of their income to the top charities: would larger philanthropic organizations simply step in and fill the gap?”

Gabriel raises the obvious question of why large donors such as the Gates Foundation haven’t already fully funded charities such as the Against Malaria Foundation (AMF) and the Schistosomiasis Control Initiative (SCI). He gives three suggestions:

  • The large donors aren’t fully on board with the effectiveness thing

  • Large donors may not feel that GiveWell and GWWC are doing their prioritization research correctly

  • The EA movement is cute and needs to be supported in its growth by giving it some nice charities to play with (not Gabriel’s exact words)

In support of the third suggestion, Gabriel points out that GiveWell could fully fund its top charities via Good Ventures, but has chosen not to do so. If true, this is fairly obviously a problem for us, since it suggests that individual donations to those charities are less counterfactually necessary than they appear.

Motivation and rationality: the claim is that to make it big, EA needs to understand psychology, and that we don’t. In particular:

  • Gabriel refers to David Brooks as saying that an earning-to-give career might not be psychologically sustainable for most people—even if it is for a few EAs.

  • EA appeals too much to reason and not enough to emotion.

Systemic change: the claim here is that EA could stand to learn from historical movements such as abolitionism and civil rights—in particular putting a greater focus on justice, and less on cost-effectiveness or on the feel-good factor of giving.

Read the full paper here. I feel my own thoughts on this belong in the comments section so I’ll add them there.

Also note that I’m not Iason Gabriel.