Viewing Effective Altruism as a System

Meta-EA is most often characterised in terms of discrete units such as dollars and individuals. How many people can we recruit, how much will they donate, how many people can we train to be AI researchers? This approach carries a lot of value, particularly when we wish to craft metrics to evaluate our work. At the same time, sometimes it is better to view Effective Altruism as a system, to look at it holistically.

I believe that the primary goal of meta-EA should be achieving impact through improving EA as a system. Our starting place should be different theories of how we could do this, and metrics should come second, as a way of differentiating between different plans of action and testing hypotheses. I'm not suggesting that quantitative facts should be ignored during the hypothesis generation stage, just that we need to understand the hypothesis space before we can choose appropriate metrics, otherwise we may artificially limit the set of theories that we consider.

In particular, we need to recognise that sometimes a system is more than the sum of its parts. Effective Altruism is one such system, since the various parts of the movement tend to make the other parts work more effectively. This article will give a brief summary of how Effective Altruism works as a system. Please note that this discussion will not just include official EA orgs, but some EA-aligned orgs as well.

The Effective Altruism Ecosystem:

This section divides up the various parts of the EA ecosystem by function. You may want to skim this section if you already have a good understanding of the ecosystem, as otherwise you'll just be reading things that you already know.

Center for Effective Altruism (CEA)/Local Effective Altruism Network (LEAN): Focuses on movement building and guiding the EA movement generally, including writing articles and sending out the newsletter.

Open Philanthropy/Giving What We Can Pledge/Founders Pledge/Effective Altruism Funds/Raising for Effective Giving: Provide funding for the causes we support, as well as for the various other orgs listed here. CEA: Funds local groups. Effective Altruism Funds: Provides funding for smaller projects. GiveWell Incubation Grants: Support potential new top charities.

Effective Altruism Global/EAGx: Spreads ideas within the EA movement and provides networking opportunities.

Less Wrong/Center for Applied Rationality/Broader rationalsphere: Provides tools for thinking more clearly (epistemic rationality) and for being more effective (applied rationality).

Local EA groups/SHIC: Recruit people into the movement who donate or who join orgs, develop them as EAs, and often provide a social group. In particular, local groups are present at many of the world's most prestigious universities, including Oxford, Cambridge, Stanford, Yale, Harvard, Princeton, and MIT.

EA Bay Area Hub: Big enough to deserve its own point. Connects us with, and helps us recruit from, the tech scene. Brings enough EAs together in one place that people can likely find other EAs interested in the same things.

80,000 Hours: Provides career advice, as well as helping effective orgs fill vacancies.

EA Forum/various Facebook groups: Allow the sharing of ideas globally.

Global Priorities Institute: A new research institute at Oxford broadly examining EA. It does not just perform research; it also provides academic credibility. There is also a whole host of research institutes for specific causes, such as the Future of Humanity Institute, the Center for the Study of Existential Risk, the Foundational Research Institute, Wild Animal Suffering Research, etc.

GiveWell/Open Philanthropy Project/Animal Charity Evaluators: Charity evaluators for different causes.

Charity Science: Researches potential new top charities and assists people who want to create them.

Interaction Effects:

We can see several ways in which the existence of a broader ecosystem makes certain tasks much more worthwhile. For example, suppose you see an idea for an effective charity on Charity Science. You contact them and they provide you with advice and link you up with potential cofounders. GiveWell provides you with an incubation grant, which you use to hire some staff who were referred through 80,000 Hours so that you can run a pilot. GiveWell evaluates you and you become a top charity. Various Giving What We Can members donate to you and OpenPhil provides you with significant support. Given the inherent difficulties of charity entrepreneurship, it would plausibly take only a single missing part of this pipeline to derail the whole project and render all the other efforts worthless.

There are many other interaction effects as well. For example, it is much more valuable for the Global Priorities Institute to do research if there is a global movement that will try to put its ideas into action. The Founders Pledge is much more valuable with GiveWell and Open Philanthropy existing, since they provide it with research which it can pass on to founders to help them give more effectively. Further, 80,000 Hours is much more effective when there are meetups at top universities to refer people for coaching.


The main purpose of this post is to encourage more people to adopt a more holistic way of looking at Effective Altruism, one that may lead to further ideas for worthwhile projects. Nonetheless, I do want to make a few suggestions about application:

  • Once you have a map of the EA ecosystem (as above), you can start thinking of different pipelines: becoming an AI researcher, starting a new charity, obtaining a job in finance for earning to give. You can look for gaps in a pipeline and consider whether the gap might be worth filling or whether the cure is worse than the disease.

  • One of the greatest difficulties is figuring out how we should handle coordination within the movement. If we just examine our marginal impact based on the status quo remaining the same, we will be ignoring any improvements in the efficiency of other components or the effects of new components being added to the system. In particular, some components may be incredibly valuable if all of them exist, but have minimal value on their own. For example, GiveWell can increase donors' effectiveness by a factor of ten, but in a world where either nobody had heard of them or nobody listened to them, this component would not be valuable by itself. This is not an easy problem and I don't really know how to address it, but it is plausible that the highest impacts come from combinations of components which each increase the others' effectiveness.