The big problem with how we do outreach

A familiar pattern: EA organizations promote charities that help people in the developing world. A critic accuses EA of forcing people to be rationality robots. EA defends the use of rationality in altruistic decisions. But both sides miss the point: it demonstrates at best a lack of imagination, and at worst coldheartedness, to think that only a rationality robot would believe that African lives matter. I’m guilty of this too: my own prejudices show when I think of helping people in the developing world (or livestock) as “giving from the head” rather than “giving from the heart”. Promoting EA will require changing values, not just making people more rational.

People are not malfunctioning utilitarian robots

Frequently, EA outreach starts from the implicit assumption that, deep down, people value all lives equally. In this narrative, the reason people don’t give to GiveWell-recommended charities is Kahneman-style irrationality: biases such as scope neglect supposedly prevent them from acting on their underlying consequentialist values.

A typical EA example compares paying for a guide dog for one blind person in the developed world with curing many people of blindness in the developing world. To a utilitarian, choosing the former could only result from irrationality. But it’s plausible that most people simply aren’t utilitarians and don’t care very much about people in the developing world. Even in surveys of philosophers, who would be expected to be more utilitarian than the general population, only a quarter are purely consequentialist.

Rationality alone probably won’t lead to EA

Some people might argue that non-utilitarians would become utilitarian if they became more rational. This argument relies implicitly on a belief in moral convergence, which is difficult to defend if one rejects moral realism, as many EAs do. These are complex debates, which I’ll discuss more in a follow-up post, but the idea that EAs can be created through rationality training alone should be viewed with skepticism. (This is another reason I’m skeptical that CFAR and similar organizations can have a positive effect outside of some very specific populations.)

A comes before E

In short, people can’t optimize for values they don’t have. For the majority of people, who don’t share the egalitarian, utilitarian-ish values of EA, “the most good you can do” is meaningless. This means we need to start by spreading our values before talking about implementation. Rationality exercises won’t be much use here, but many social movements have shown that it is possible to change people’s values, typically through some combination of emotional appeals. Research into how and why values change will be extremely important for the future of EA.

Expanding the circle of compassion

Instead of “the most good you can do”, a better message for some audiences may be “expanding the circle of compassion”. The idea that human culture has become more enlightened by extending compassion to those unlike ourselves is catchy, emotionally appealing, and tends to approximate utilitarianism in practice. It may be particularly well suited to some audiences, such as religious organizations.

During the holiday season, it’s nice to return to the compassionate roots of effective altruism. As Julia Wise says in this excellent post, there’s no shame in giving from the heart.