Benefits of EA engaging with mainstream (addressed) cause areas

With this post I wanted to ask a fairly basic question of the EA community that I've been scratching my head over.

Is Effective Altruism undervaluing the net impact of repairing traditional impact problem areas (e.g. global development) compared to focusing on new or unaddressed problem areas?

I think that this forum in general could use more imagery / graphics, so I'll attempt to make my point with some graphs.

Consider first this graph, with 'Amount of Capital Distributed' on the Y-axis and 'Efficiency of Impact' on the X-axis:

This is how I imagine some might view the social sector, which is to say every single organization or cause addressing every single impact area, placed on a spectrum. At the beginning of the curve, down and to the left, we see a smaller amount of capital circulating through approaches that aren't very effective. In the middle of the curve we see the bulk of approaches, with moderate impact and the largest amount of capital at play. And finally, to the right, we start to see approaches that would fall under the banner of Effective Altruism. They wield less capital than traditional sources of impact, but are quite impactful in doing so.

The logic behind the slope of this curve is that there is a certain Overton window of altruism. Approaches that are too regressive will start to leave the window and receive less capital. Approaches at the peak of society's attention will receive the most support. Those at the bleeding edge (EA) will be perceptible to only a small subset of the population and receive smaller levels of support.

Once this basic curve is established, we can look at what we actually know about the impact landscape and start to refine the graph.

This next graph ditches the curve and instead introduces a bar chart. The same basic comparison of Capital vs. Impact still applies. The main difference here is that different approaches don't exist on a spectrum, and instead are discrete.

This might seem like a minor discrepancy, but it reveals an important point about how causes are funded. If anything, Effective Altruism shows us that any action can have various degrees of impact, in many different ways and in different categories. These relationships are incredibly messy. At the same time, capital, especially philanthropic capital, is rarely distributed proportionally to impact and agnostic of problem areas. In fact, the opposite is probably true. First, donors commonly pick a problem area and set of organizations that they are personally swayed by, and then make isolated donations within this category with the hope that they can achieve impact. Even foundations such as the Rockefeller Foundation that are devoted to broad goals like "promoting the well-being of humanity throughout the world" have focus areas and pet issues that they like to fund more than others.

So ultimately, a better way to think about the distribution relationship between impact and capital is probably not as a nice smooth curve, but as specific chunks of capital tied to cause or problem areas (even if in reality it doesn't quite work like this).

Furthermore, the key benefit of viewing the altruistic capital markets as chunks instead of as a continuous impact curve is that you begin to see the orders of magnitude that separate different categories:

Here we see several categories of impact, charted via their annual expenditure levels and a loose ranking of their QALY/$ levels. Despite not having the ability to make accurate estimations of QALY/$ levels, the difference in magnitude between these categories in terms of expenditures is hopefully clear. Even taking the most generous estimate of the annual expenditures of explicitly-EA causes (~$500M), we see that this is a drop in the bucket compared to the >$100 billion that just the UN System and the large NGO BRAC use each year.
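To make the scale gap concrete, here is a minimal back-of-the-envelope sketch using only the two rough figures quoted above (these are the post's own loose estimates, not precise data):

```python
# Rough annual expenditure figures from the paragraph above (illustrative only).
ea_annual_usd = 500e6          # generous estimate for explicitly-EA causes (~$500M)
mainstream_annual_usd = 100e9  # stated lower bound for UN System + BRAC (>$100B)

# Ratio of mainstream aid spending to explicitly-EA spending.
ratio = mainstream_annual_usd / ea_annual_usd
print(f"Mainstream aid deploys at least {ratio:.0f}x the capital of explicitly-EA causes.")
# → Mainstream aid deploys at least 200x the capital of explicitly-EA causes.
```

In other words, even a small efficiency improvement across that larger pool could plausibly rival the entire explicitly-EA budget, which is the intuition the rest of this post leans on.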

This brings me to my central point and question for the EA community. Is there an argument to be made for focusing more effort on retooling these large sources of capital towards more EA-aligned positions?

I would imagine some objections to this argument might be:

- The whole idea of x-risks is that pouring even just a little attention and money into them can help mitigate catastrophic risks that would otherwise happen under business as usual. This is true even if there are more superficially pressing problems to deal with in the world, like poverty.

- Focusing effort on already-addressed problem areas wouldn't immediately yield clear impact, and could actually prove a futile activity.

- EA-aligned organizations like Evidence Action and the various U.S. Policy Reform projects are in fact already addressing 'traditional' impact areas, just the ones that have the highest upside potential.

I think all these points would be valid, but I want to raise some counterpoints that I think make the broad argument here still worthwhile to explore.

1. Even addressed problems can be addressed inefficiently

A common line of thinking when evaluating EA-friendly causes is to determine which causes have the least attention placed on them. Beyond the potential biases that arise when you go about the world looking for problems, I worry about this approach's emphasis on novelty.

It seems like there's not enough emphasis on the quality of funding and attention being placed on an issue, compared to the quantity of funding and attention.

For climate change, I think the EA justification for not spending time and resources on this problem makes sense. Even though the problem carries catastrophic consequences, there is quite a lot of fairly high-quality research and development being done here, from both for-profit and non-profit perspectives.

For global development broadly speaking, and for subcategories like global health, most of EA's engagement seems to center on a set of interventions with exceptionally strong QALY/$ ratios, like affordable early-life healthcare. Past this, though, I get the impression that other subcategories of aid are written off as not worthy of attention because they are already being addressed. This is understandable, as we see from the chart above that a large amount of capital goes towards humanitarian causes.

But despite the hundreds of billions of dollars that flow through aid each year, it's unclear how impactful this aid is. Obviously an argument can be made for the short-term effectiveness of providing services for truly acute humanitarian crises. But long-term critiques like those contained in Dead Aid argue that aid is fundamentally harmful. More moderate positions hold at least that there need to be better linkages between interventions and their long-term impact.

EAs have shown a slight interest, via orgs like Evidence Action, in trying to improve the effectiveness of traditional aid approaches, but I think this is a problem worthy of at least as much attention as reforming political institutions. If it is in fact the case that there are glaring inefficiencies in this sector, and that trillions of dollars are locked up pursuing this inefficient work, fixing these problems could prove to have massive upside. First and foremost, though, it seems imperative to at least get a better sense of how effective these grandfathered capital chunks are.

2. There are numerous advantages to better integrating the EA community with the rest of the social sector

Another upside of working to improve causes that might otherwise be viewed as already addressed is that it forces greater interaction between the EA community and the rest of the social sector.

Before learning about Effective Altruism I was working for a social enterprise that worked with household-name foundations on a variety of causes. Even at EA's relatively small stage of growth several years ago, I was surprised to see that such a robust community was forming around doing the most good possible. But what was most surprising about the EA community wasn't just how active it was; it was how separate it was from the world I was working in, despite having essentially the same goals.

Moreover, I was increasingly seeing a movement in Foundation World towards better frameworks for understanding and reporting on net impact. While EA takes this idea to an extreme, I didn't understand why this community needed to be so removed from the conversations (and access to capital) simultaneously happening in other parts of the social sector.

Besides avoiding the duplication of efforts, I think there are valuable lessons that the EA community and the other impact-chasers could learn from one another. For example, EAs are uniquely good at understanding the role of technology in the future, which is a notorious weakness of traditional social sector folks. On the other hand, I think social sector folks could teach EAs a thing or two about how programs work 'in the field' and what philanthropy looks like outside of the ivory tower that EAs can sometimes sit in.

Finally, I recently read a post on this forum arguing that EA is a set of beliefs and approaches, and shouldn't aspire to be a group or movement (I can't find the post now). I agree with this sentiment, but at this point Effective Altruism as a movement is a runaway train.

Part of embracing this reality means better understanding the role of optics, and how public perception affects EA's overarching goals. Maybe at the moment the EA philosophy isn't quite 'mainstream,' and maybe this monolithic status is a naive goal to reach. But speaking practically, the more people who operate under the banner of EA, the more good can be done in the world. This process entails not only attracting new members towards what EA stands for today, but also being more integrative with communities that wouldn't traditionally align themselves with EA. Wanting to do the most good possible is truly an agnostic trait. EA as a movement should be equally agnostic about not just which causes it considers, but which tribes it aligns itself with.

3. A vast amount of philanthropic capital in the world is, and will always be, distributed 'irrationally'; EA has much to gain by embracing and working around this

As discussed in #1 and #2, there is no shortage of problem areas that are being approached imperfectly, at least relative to the benchmarks of Effective Altruism. A large part of this, no doubt, is that global impact is not usually the product of the pure (rational, selfless) definition of altruism. Among other things, people donate to causes they personally feel attached to. There is a deep psychological (likely evolutionary) mechanism that underpins this, one that probably won't be changing any time soon.

In the eyes of EAs, these imperfect causes don't always seem to have tangible connections to impact, and as a result this community doesn't engage with them. This disengagement makes sense for some 'warm glow' forms of altruism that have structural barriers preventing them from ever becoming more efficient. But for other forms of impact, just because they are inefficient now doesn't mean they can't improve.

Engaging further with these causes (once again, a good example being global development) stands as a way not only to create impact, but to embrace the irrationality of giving and extend effective altruism into larger capital markets.

Conclusion

Even if they come across as not traditionally aligned with EA values, there are lots of problem areas, namely global development, that could benefit from an increase in analytical rigor.

Vice versa, EA could benefit from tapping into these larger capital pools and potentially converting them into higher impact brackets:

Open Phil currently lists no focus areas in global health and development. It only recommends that individuals make donations to high-impact charities like those pursuing deworming and anti-malaria interventions.

I think there is potential for a problem area to be loosely built around the meta-effectiveness of the development sector.

This isn’t a novel con­cept, and there is already a nascent move­ment in this sec­tor to­wards leaner op­er­at­ing strate­gies.

Engaging with this space could not only reveal further high-impact problems to work on, but also comes with numerous strategic side benefits, such as helping to reframe the narratives that EAs aren't interested in systemic change and that they exist in an elitist bubble.


Edit 1: Changed title of post from "Doing Repairs vs. Buying New" to "Benefits of EA engaging with mainstream (addressed) cause areas"

Note: This is my first post in the EA forum. I attempted, to the best of my ability, to research the points made here to make sure I wasn't saying anything too redundant. Apologies in advance if this is the case.

I'm interested in talking with people here more about formalizing this issue. I'm also looking for some volunteer projects. I have a background in design, marketing, and strategy, and experience in the tech and philanthropy/foundation spaces. Please reach out if we can work with each other!

@bryanlehrer www.bryanlehrer.com