Introducing CEA’s Guiding Principles

Over the last year, I’ve given a lot of thought to the question of how the effective altruism community can stay true to its best elements and avoid problems that often bring movements down. Failure is the default outcome for a social movement, and so we should be proactive in investing time and attention to help the community as a whole flourish.

In a previous post, I noted that there’s very little in the way of self-governing infrastructure for the community. There’s very little in place to deal with people who represent EA in ways that seem harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process. In that post, I suggested we create two things: (i) a set of guiding principles agreed upon by all of EA; (ii) a community panel that could make recommendations to the community regarding violations of those principles.

There was healthy discussion of this idea, both on the forum and in feedback that we sought from people in the community. Some particularly important worries, it seemed to me, were: (i) the risk of consolidating too much influence over EA in any one organisation or panel; (ii) the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling; (iii) the risk of losing flexibility by enforcing what is or is not an “EA view” (in a way that other broad movements don’t do*). I think these were important concerns. In response, we scaled back the ambitions of the proposed ideas.

Instead of trying to create a document that we claim represents all of EA, enforced by a community panel as I suggested, we’ve done two things:

(i) Written down CEA’s understanding of EA (based in part on discussion with other community members), and invited other organisations to share and uphold that understanding if they found it matched their views. This will become a community-wide vision only to the extent that it resonates with the community.

(ii) Created a small advisory panel of community members that will provide input on important and potentially controversial community-relevant decisions that CEA might have to make (such as when we changed the Giving What We Can pledge to be cause-neutral). The initial panel members will be Alexander Gordon-Brown, Peter Hurford, Claire Zabel, and Julia Wise.

The panel, in particular, is quite different from my original proposal. In the original proposal, it was a way for EA to self-regulate as a community. In this new form, it’s a way of ensuring that some of CEA’s decisions get appropriate input from the community. Julia Wise, who serves as community liaison at CEA, has put together the advisory panel and has written about it here. The rest of this post is about how CEA understands EA and what guiding principles it finds appropriate.

How CEA understands EA is given in its Guiding Principles document. I’ve also copied and pasted the contents of this document below.

Even if few organisations or people were to endorse this understanding of EA, it would still have a useful role. It would:

  • Help others to understand CEA’s mission better

  • Help volunteers who are helping to run CEA events to understand the values by which we’d like those events to be run

  • Create a shared language by which CEA can be held accountable by the community

However, we hope that the definition and values are broad enough that the large majority of the EA community will be on board with them. And indeed, a number of EA organisations (or leaders of EA organisations) have already endorsed this understanding (see the bottom of this post). If this understanding of EA were widely adopted, I think there could be a number of benefits. It could help newcomers, including academics and journalists, to get a sense of what EA is about. It could help avoid dilution of EA (such that donating $5/month to a charity with low overheads becomes ‘effective altruism’) or corruption of the idea of EA (such as EA = earning to give to donate to RCT-backed charities, and nothing else). It might help create community cohesion by stating, in broad terms, what brings us all together (even if many of us focus on very different areas). And it might give us a shared language for discussing problematic events happening in the community. In general, I think if we all upheld these values, we’d create a very powerful force for good.

There is still a risk in having a widely agreed-upon set of values: namely, that effective altruism could ossify or become unduly narrow. However, I hope that the openness of the definition and values (and the lack of any enforcement mechanism beyond community norms) will minimise that risk.

Here is the text of the document:

The Centre for Effective Altruism’s understanding of effective altruism and its guiding principles

What is effective altruism?

Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

What is the effective altruism community?

The effective altruism community is a global community of people who care deeply about the world, make benefiting others a significant part of their lives, and use evidence and reason to figure out how best to do so.

Putting effective altruism into practice means acting in accordance with its core principles:

The guiding principles of effective altruism:

Commitment to Others:

We take the well-being of others very seriously, and are willing to take significant personal action in order to benefit others. What this entails can vary from person to person, and it’s ultimately up to individuals to figure out what significant personal action looks like for them. In each case, however, the most essential commitment of effective altruism is to actively try to make the world a better place.

Scientific Mindset:

We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously.

Openness:

We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions.

Integrity:

Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognise that our actions reflect on it.

Collaborative Spirit:

We affirm a commitment to building a friendly, open, and welcoming environment in which many different approaches can flourish, and in which a wide range of perspectives can be evaluated on their merits. In order to encourage cooperation and collaboration between people with widely varying circumstances and ways of thinking, we resolve to treat people of different worldviews, values, backgrounds, and identities kindly and respectfully.

The following organisations wish to voice their support for these definitions and guiding principles:

  • .impact

  • 80,000 Hours

  • Animal Charity Evaluators

  • Charity Science

  • Effective Altruism Foundation

  • Foundational Research Institute

  • Future of Life Institute

  • Raising for Effective Giving

  • The Life You Can Save

Additionally, the following individuals wish to voice their support:

  • Elie Hassenfeld of GiveWell and the Open Philanthropy Project

  • Holden Karnofsky of GiveWell and the Open Philanthropy Project

  • Toby Ord of the Future of Humanity Institute

  • Peter Singer

  • Nate Soares of the Machine Intelligence Research Institute

This is not an exhaustive list of all organisations or people involved with effective altruism. We invite any other organisations that wish to endorse the above guiding principles to do so by writing to us at hello@centreforeffectivealtruism.org.

Julia and I want to thank all the many people who helped develop this document, with particular thanks to Rob Bensinger, Jeff Alstott, and Hilary Mayhew, who went above and beyond in providing comments and suggested wording.