The marketing gap and a plea for moral inclusivity

In this post, I make three points. First, I note there seems to be a gap between what EA markets itself as being about (effective poverty reduction) and what many EAs really believe is important (poverty isn’t the top priority), and that this marketing gap is potentially problematic. Second, I propose a two-part solution. One part is that EA outreach-y orgs should be upfront about what they think the most important problems are. The other is that EA outreach-y orgs, and, in fact, the EA movement as a whole, should embrace ‘moral inclusivity’: we should state what the most important problems are for a range of moral outlooks but not endorse a particular moral outlook. I anticipate some will think we should adopt ‘moral exclusivity’ instead, and just endorse or advocate the one view. My third point is a plea for moral inclusivity. I suggest even those who strongly consider one moral position to be true should still be in favour of EA being morally inclusive, as a morally inclusive movement is likely to generate better outcomes by the standards of everyone’s individual moral theory. Hence moral inclusivity is the dominant option.

Part 1

One thing that’s been bothering me for a while is the gap between how EA tends to market itself and what lots of EAs really believe. I think the existence of this gap (or even the perception of it) is probably bad and also probably avoidable. I don’t think I’ve seen this discussed elsewhere, so I thought I’d bring it up here.

To explain, EA often markets itself as being about helping those in poverty (e.g. see GWWC’s website) and exhorts the general public to give their money to effective charities in that area. When people learn a bit more about EA, they discover that only some EAs believe poverty is the most important problem. They realise many EAs think we should really be focusing on the far future, and AI safety in particular, or on helping animals, or on finding ways to improve the lives of presently existing humans that aren’t to do with alleviating poverty, and that’s where those EAs put their money and time.

There seem to be two possible explanations for the gap between EA marketing and EA reality. The first is historical. Many EAs were inspired by Singer’s Famine, Affluence and Morality, which centres on saving a drowning child and preventing those in poverty dying from hunger. Poverty was the original focus. Now, on further reflection, many EAs have decided the far future is the most important area but, given its anti-poverty genesis, the marketing/rhetoric is still about poverty.

The second is that EAs believe, rightly or wrongly, that talking about poverty is a more effective marketing strategy than talking about comparatively weird stuff like AI and animal suffering. People understand poverty, and it’s easier to start there before moving on to the other things.

I think the gap is problematic. If EA wants to be effective over the long run, one thing that’s important is that people see it as a movement of smart people with high integrity. I think it’s damaging to EA if there’s the perception, even if this perception is false, that effective altruists are the kind of people who say you should do one thing (give money to anti-poverty charities) but themselves believe in and do something else (e.g. that AI safety is the most important cause).

I think this is bad for the outside perception of EA: we don’t want to give critics of the movement any more ammo than necessary. I think it potentially disrupts within-community cohesion too. Suppose person X joins EA because they were sold on the anti-poverty line by outreach officer Y. X then becomes heavily involved in the community and subsequently discovers Y really believes something different from what X was originally sold on. In this case, the new EA X would be likely to distrust outreach officer Y, and maybe others in the community too.

Part 2

It seems clear to me this gap should go. But what should we do instead? I suggest a solution in two parts.

First, EA marketing should tally with the sort of things EAs believe are important. If we really think animals, AI, etc. are what matters, we should lead with those, rather than suggesting EA is about poverty and then mentioning other cause areas.

This doesn’t quite settle the matter. Should the marketing represent what current EAs believe is important? This is problematically circular: it’s not clear how to identify who counts as an ‘EA’ except by what they believe. In light of that, maybe the marketing should just represent what the heads or members of EA organisations believe is important. This is also problematic: what if EA orgs’ beliefs substantially differ from those of the rest of the EA community (however that’s construed)?

Here, we seem to face a choice between what I’ll call ‘moral inclusivism’, stating what the most important problems are for a range of moral outlooks but not endorsing a particular moral outlook, and ‘moral exclusivism’, picking a single moral view and endorsing that.

With this choice in mind, I suggest inclusivism. I’ll explain how I think this works in this section and defend it in the final one.

Roughly, I think the EA pitch should be “EA is about doing more good, whatever your views”. If that seems too concessive, it could be welfarist – “we care about making things better or worse for humans and animals” – but neutral on what makes things better or worse – “we don’t all think happiness is the only thing that matters” – and neutral on population ethics – “EAs disagree about how much the future matters. Some focus on helping current people, others are worried about the survival of humanity, but we work together wherever we can. Personally, I think cause X is the most important because I believe theory Y...”.

I don’t think all EA organisations need to be inclusive. What the Future of Humanity Institute works on is clearly stated in its name, and it would be weird if it started claiming the future of humanity was unimportant. I don’t think individual EAs need to pretend to endorse multiple views either. But I think the central, outreach-y ones should adopt inclusivism.

The advantage of this sort of approach is it allows EA to be entirely straightforward about what effective altruists stand for and avoids even the perception of saying one thing and doing another. Caesar’s wife should be above suspicion, and all that.

An immediate objection is that this sort of approach – front-loading all the ‘weirdness’ of EA views when we do outreach – would be off-putting. I think this worry, in so much as it actually exists, is overblown and also avoidable. Here’s how I think the EA pitch goes:

-Talk about the drowning child story and/or the comparative wealth of those in the developed world.

-Talk about ineffective and effective charities.

-Say that many people became EAs because they were persuaded of the idea we should help others when it’s only a trivial cost to ourselves.

-Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn’t accidentally wipe itself out, etc.

-For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk in which he said something like (can’t remember exactly): “some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it’s probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves.” At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

In conclusion, I think the apparent gap between rhetoric and reality is problematic and also avoidable. Organisations like GWWC should make it clearer that EAs support causes other than global poverty.

Part 3

One might think EA organisations, faced with the inclusivist-exclusivist dilemma, should opt for the latter. You might think most EAs, at least within certain organisations, do agree on a single moral theory, so endorsing moral inclusivity would be dishonest. Instead, you could conclude we should be moral exclusivists, fly the flag for our favourite moral theory, lead with it and not try to accommodate everyone.

From my outsider’s perspective, I think this is the sort of direction 80,000 Hours has started to move in more recently. They are now much more open and straightforward about saying the far future in general, and AI safety in particular, is what really matters. Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don’t worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

An obvious worry about being a moral exclusivist and picking one moral theory is that you might be wrong; if you’re endorsing the wrong view, that’s really going to set back your ability to do good. But given you have to make some choices, let’s put this worry aside. I’m now going to make a plea for making/keeping EA morally inclusive, whatever your preferred moral views are. I offer three reasons.

1.

Inclusivity reduces groupthink. If EA is known as a movement where people believe view X, people who don’t like view X will exit the movement (typically without saying anything). This deprives those who remain of really useful criticism that would help identify intellectual blind spots and force the remainers to keep improving their thinking. It also creates a false sense of confidence in the remainers because all their peers now agree with them.

Another part of this is that, if you want people to seek the truth, you shouldn’t give them incentives to be yes-humans. There are lots of people who like EA and want to work in EA orgs and be liked by other (influential) EAs. If people think they will be rewarded (e.g. with jobs) for adopting the ‘right’ views and signalling them to others, they will probably slide towards what they think people want to hear, rather than what they think is correct. Responding to incentives is a natural human thing to do, and I very much doubt EAs are immune to it. Similar to what I said in part 1, even a perception that there are ‘right’ answers can be damaging to truth seeking. Like a good university seminar leader, EA should create an environment where people feel inspired to seek the truth, rather than just agree with the received wisdom, as honest truth seeking and disagreement seem most likely to reveal the truth.

2.

Inclusivity increases movement size. If we only appeal to a section of the ‘moral market’ then there won’t be so many people in the EA world. Even if people have different views, they can still work together, engage in moral trade, personally support each other, share ideas, etc.

I think organisations working on particular, object-level problems need to be value-aligned to aid co-ordination (if I want to stop global warming and you don’t care, you shouldn’t join my global warming org), but this doesn’t seem relevant at the level of a community. Where people meet at EA hubs, EA conferences, etc. they’re not working together anyway. Hence this isn’t an objection to EA outreach-y orgs being morally inclusive.

3.

Inclusivity minimises in-fighting. If people perceive there’s only one accepted and acceptable view, then they will spend their time fighting the battle of hearts and minds to ensure that their view wins, and they will do this rather than working on solving real-world problems themselves. Or they’ll split, stop talking to each other and fail to co-ordinate. Witness, for instance, the endless schisms within churches about doctrinal matters, like gay marriage, and the seemingly limited interest they have in helping other people. If people instead believe that there’s a broad range of views within a community, that this is okay, and that there’s no point fighting for ideological supremacy, they can instead engage in dialogue, get along and help each other. More generally, I think I’d rather be in a community where people thought different things and this was accepted, rather than one where there were no disagreements and none allowed.

On the basis of these three reasons, I don’t think even those who believe they’ve found the moral truth should want EA as a whole to be morally exclusive. Moral inclusivity seems to increase the ability of effective altruists to collectively seek the truth and work together, which looks like it leads to more good being done from the perspective of each moral theory.

What follows from parts 1 and 2 is that, for instance, GWWC should close the marketing gap and be more upfront about what EAs really believe. People should not feel surprised about what EAs value when they get more involved in the movement.

What follows from part 3 is that, for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of “these are the most important things”, it should be “these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up]. As an organisation, we don’t take a stand on A or B, but here are some arguments you might find relevant to help you decide”.

Here end my pleas for moral inclusivity.

There may be arguments for keeping the marketing gap and adopting moral exclusivism I’ve not considered and I’d welcome discussion.

Edit (10/07/2017): Ben Todd points out in the comments below that 1) 80k have stated their preferred view since 2014 in order to be transparent and that 2) they provide a decision tool for those who disagree with 80k’s preferred view. I’m pleased to learn the former and admit my mistake. On the latter, Ben and I seem to disagree about whether adding the decision tool makes 80k morally inclusive or not (I don’t think it does).