EA’s Image Problem

[Thanks to Max Dalton and Harry Peto for extensive comments, corrections and additions.]

I am a committed EA and am thrilled that the movement exists. Having spoken to many non-EAs, however, I'm convinced that the movement has an image problem: outsiders associate it and its members with negative things. This is partly unavoidable: the movement does and should challenge what are strongly held and deeply personal beliefs. However, I think we as a movement can do more to overcome the problem.

I first list some criticisms of the movement I've heard. Then I discuss, one by one, four key causes of these criticisms and suggest things we can do to combat them.

I think some of the key causes give people legitimate reasons to criticise the movement and others don't. However, all criticisms damage EA's image and should be avoided if possible.

Criticisms of EA

· Smug and Arrogant – EAs have a strong conviction that they're right and that they're great people; similarly, that their movement is correct and more important than all others

· Cold Hearted – EAs aren't sufficiently empathetic

· Rich-Person Morality – EA provides a way for rich, powerful and privileged people to feel like they're moral people

· Privileged Access – EA is only accessible to those with a large amount of educational and socio-economic privilege

The latter two are particularly worrisome for those who, like me, want EA to change the attitudes of society as a whole. The movement will struggle to do this while it's perceived as being elitist. More worryingly, I think the latter two are legitimate concerns to have about EA.

Key Cause 1 – Obscuring the distinction between being a good person and doing good

There is a distinction

Suppose Clare is on £30K and gives away £15K to AMF, while Flo is on £300K and gives away £30K. Clare is arguably a more virtuous person because she has made a much bigger personal sacrifice for others, despite the fact that Flo does more absolute good.

Now suppose Clare mistakenly believes that the most moral action possible is to give the money to disaster relief. Plausibly, Clare is still a more virtuous person than Flo, because she has made a huge personal sacrifice for what she believed was right, and Flo has only made a small sacrifice by comparison.

In a similar way, people who make serious sacrifices to help the homeless in their area may be better people than EAs who do more absolute good by donating.

The EA movement does obscure this distinction

-The distinction is ignored completely whenever an EA assesses an agent by calculating the good that is produced by their actions (or, more specifically, by calculating the difference between what did happen and what would have happened if the person hadn't lived). For example, William MacAskill claimed to identify "The Best Person Who Ever Lived" using this method[1].

-The distinction is obscured when EAs say ambiguous things like "It's better to become a banker and give away 10% of your income than to become a social worker". The sentence could mean that becoming a banker produces better outcomes, or that a person who becomes a banker is more moral. The former claim is true, but the latter depends on the banker's motivations and may be false. They may have wanted to go into banking anyway, and giving away 10% may only be a small sacrifice for them.

-We can also confuse the two sides of the distinction when we think about our own moral aims. We want to do as much good as possible, and we evaluate the extent to which we succeed. However, it's easy to confuse failure in achieving our moral aims with failure to be a good person. This is a common confusion. For example, the father who can't adequately feed his children, despite working as hard as he can, has done nothing wrong yet still feels very guilty.

-Some of the discourse within the movement appears to imply that those who do the most good are, because of this, the best people. For example, EAs who earn lots of money, or are successful more generally, are held in very high regard.
(Third post down here: https://www.facebook.com/groups/effective.altruists/search/?query=matt%20wage)

How does obscuring this distinction contribute to EA's image problem?

Suppose non-EAs are not aware of the distinction, or think EAs are not aware. Then the EA movement will seem to be committed to the claim that EAs are, in general, significantly better people than non-EAs. But EAs often live comfortable lives with relatively low levels of personal sacrifice, so this is bound to make non-EAs angry. Even slight confusion over this issue can be very damaging.

More specifically:

· Smug and Arrogant – if EAs appear to think that they're much better people than everyone else, then they seem smug and arrogant

· Cold Hearted – if EAs appear to think people who have a bigger impact are more moral than those who make significant personal sacrifices, this might come across as cold hearted. If EAs find out how good a person someone is by calculating their impact, this may also appear cold hearted

· Rich-Person Morality – the rich and powerful can do much more good much more easily. If people confuse doing good with being good, then they may think EAs believe the rich and powerful can be good much more easily.

I think the criticisms here are partly unfair – EAs often do recognise the distinction I've been talking about. The problem arises largely because EA's outcome-focussed discourse makes confusions about the distinction more likely to occur.

How can we improve the situation?

· Distinguish between being good and doing good whenever there's a risk of confusion

· Be wary of posting material that implies that EAs are all amazing people (for example, the description of EA Hangout reads "Just to chill, have fun, and socialize with other EAs while we're not busy saving the world." I know this is funny, but I think it's potentially damaging if it feeds into negative stereotypes.)

· Conceptualise debates with opponents not as about whether EAs are better people, but as about whether EAs have the correct moral views

· Consider explicitly saying that EA's key claim is about which actions are better, not which agents are better.
E.g. "EAs do think that giving to AMF is a better thing to do than giving to the local homeless. But they don't think that someone who gives to AMF is necessarily a better person than someone who gives to the homeless. It's just that even bad people can do loads of good things if they give money away effectively."

Key Cause 2 – Core parts of the EA movement are much easier for rich, powerful and privileged people to engage with

What aspects, and why are they easier?

GWWC's pledge
Getting the certificate for making the pledge is a signifier of being a good person. It is a sign that the pledger is making a significant sacrifice to help others. It is sometimes used as a way of determining whether someone is an effective altruist. But the pledge is much, much easier to make if you are rich or financially secure in other ways. For example, it's harder to commit to giving away 10% if your income is unreliable, or you have large debts. So it's easier to gain moral credentials in the EA movement if you're rich.

The careers advice of 80,000 Hours
It's focussed on people at top universities, who have huge educational privilege. Following the advice is an important way of engaging with the community's account of what to do, but many people can't follow it at all.

Privilege-friendly values
It's much easier for someone who hasn't experienced sexism to accept that they should not devote their resources to fighting sexism, but to the most effective cause. It's much easier for someone who hasn't been depressed, or devoted a lot of time to helping a depressed friend, to accept that it's better to give to AMF than to charities that help the mentally ill.

More generally, being privileged makes it much easier to select causes based only on their effectiveness. I think there's a danger people think we aren't prioritising their cause because we're not fully empathising with a problem that they've had first-hand experience of.

How does this contribute to EA's image problem?

· Rich-Person Morality – EA-related moral accolades are much easier for the rich and powerful to earn than for anyone else.

· Privileged Access – many parts of the movement are harder for non-privileged people to engage in. They understandably feel that the movement is not accessible to them.

I think the criticisms here are legitimate. EA is much more accessible and appealing to those who are rich, powerful and privileged. The image problem here is exacerbated by the fact that the demographic of EA is hugely privileged.

How can we improve the situation?

· Continue to stress how much good anyone can do

· Be sensitive to your wealth, as compared with that of your conversation partner, when talking about EA. Urging someone on £30K to give 10% when you have £50K after donating may lead them to question why you're asking them to live on less than you do.

· Be sensitive to your privilege, as compared with that of your conversation partner, when talking about EA. Be aware that it may have been much easier for you to change your cause prioritisation than it is for them.

· Adjust the pledge so that, below a certain income threshold, one can give less than 10%

· Consider redressing the balance of career advice

An objection

It might be in the interests of the EA movement to carry on celebrating high-impact individuals more than they deserve, for a culture where being good is equated with doing good incentivises people to do more good. These incentives help the movement achieve its aims: it wants to do the most good, not to accurately evaluate how good people are.

For example, it might be false to write an article saying that some Ukrainian man is the best person who ever lived, but the article might encourage its readers to do the most good they can. Sometimes, when speaking to wealthy people, I pretend that I think they'd be moral heroes if they gave away 10% of their income.

Similarly, it might be in the interests of the movement to be particularly accessible to the rich, powerful and privileged, for these people have the potential to do the most good. Many will feel uncomfortable with this type of reasoning, though.

These considerations potentially provide arguments against my suggested courses of action. We should think carefully about the arguments on both sides so that we can decide what to do.

Key Cause 3 – EAs are believed to be narrow consequentialists

What is a narrow consequentialist?

A consequentialist holds that the goodness of an action is determined by its consequences. They compare the consequences that actually happened with those that would have happened if the action hadn't been performed. A narrow consequentialist only pays attention to a small number of types of consequences, typically things like pleasure, pain or preference satisfaction. A broad consequentialist might also pay attention to things like equality, justice, integrity, promises kept and honest relationships.

Why do people believe that EAs are narrow consequentialists?

It’s true!

Many effective altruists, especially those who are vocal and those in leadership positions in the movement, are narrow consequentialists. This isn't surprising. It's very obvious from the perspective of narrow consequentialism that EA is amazing (FWIW, I believe it's amazing from many other perspectives as well!). It's also obvious from this perspective that furthering the EA movement is a really great thing to do. People with other ethical codes might have other considerations that compete with their commitment to furthering the EA movement.

The popularity of the consequences-based argument for EA

This argument runs as follows: "You could do a huge amount of good for others at a tiny cost to yourself. So do it!"
This is exactly the argument a narrow consequentialist would make, so people infer that EAs are consequentialists. This inference is somewhat unfair, as it's a powerful argument even if one isn't a consequentialist.

EA discourse

Some discussion on EA forums implicitly presupposes consequentialism. Sometimes EAs attack moral commitments that a narrow consequentialist doesn't hold: they equate them to "caring about abstract principles", claim that those who hold them are simply rationalising immoral behaviour, and even assert that philosophers aren't narrow consequentialists only because they want to keep their jobs!

How does this belief about EAs contribute to EA's image problem?

· Smug and Arrogant – EAs are committed narrow consequentialists even though the vast majority of experts dismiss the view. Without independent reason to think the experts are wrong (independent, that is, of the moral arguments for and against narrow consequentialism), this is dogmatic and arrogant.

· Cold Hearted – if EAs are narrow consequentialists, then they will oversimplify moral issues by ignoring relevant considerations, like justice or human rights. It might seem like it is this oversimplification that allows EAs to figure out what to do by calculating the answer.

o This plays into the idea that EAs make calculations because they are cold hearted

o At worst, people think that EA's strong line on which charities one should donate to is a result of their narrow consequentialism, and thus of their cold heartedness

I think both criticisms here are too harsh on EA, but I do find it surprising that many avowedly rational EAs are so strongly committed to narrow consequentialism.

How can we improve the situation?

· Don't see EA as a moral theory where there is "something that an EA would do" in every situation. Rather, see EA's appeal as being independent of which underlying moral theory you agree with

· Refrain from posting things that assume that consequentialism is true

· Don't just use the consequences-based argument for EA

This last point is particularly important because the other arguments are really strong as well:

· The drowning child argument first asserts that you should save the drowning child (it needn't say why). Then it asserts that various considerations aren't morally relevant. This argument isn't consequentialist, but should appeal to all ethical positions.

· The argument from justice makes use of the fact that many causes recommended by EA help those who
i) are in desperate poverty
ii) are in this position through no fault of their own
iii) are often poor for the same reason we're rich
The argument points out that this state of affairs is terribly unjust, and infers that we have a very strong reason to change it. I think it's a hugely powerful argument, but it's not clear that it can even be made by a narrow consequentialist.

Key Cause 4 – EA's discourse is alienating

How?

Terminology from economics and philosophy is often used even when it's not strictly needed. This makes the conversation inaccessible to many.

EAs are very keen to be rational when they write, and to be regarded as such. This can create an intimidating atmosphere to post in.

EAs often celebrate the fact that the movement is "rational". But "rational" is a normative word, meaning something like "the correct view to have given the evidence". Thus we appear to be patting ourselves on the back a lot. This is alienating and annoying to someone who doesn't yet agree with us.

How does this contribute to EA's image problem?

· Smug and Arrogant – claiming that your position is rational is smug because it is like saying that you have great views. This can come across as arrogant in a conversation because it might seem like you're not seriously entertaining the possibility that you're wrong.

· Cold Hearted – use of technical language makes the movement appear impersonal

· Privileged Access – EA is less accessible to those who don't know the relevant vocabulary or who aren't confident. This tends to be those without educational and socio-economic privilege

I think these criticisms are mostly legitimate, but also think that EAs are very friendly and inclusive in general.

How can we improve the situation?

· Avoid alienating discourse. Follow George Orwell's writing rules: never use a longer word when a shorter one will do; if it's possible to cut out a word, then do; and never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent

· Avoid words like "rational" which imply the superiority of EAs over others

Conclusion

Be sceptical of my arguments here, and reply with your objections, but also be sceptical of your current practices. Think about some ways in which you could better combat EA's image problem.



[1] http://swarajyamag.com/culture/the-best-person-who-ever-lived-is-an-unknown-ukrainian-man/