Introducing the EA Funds

Update: The EA Funds has launched!

This post introduces a new project that CEA is working on, which we’re calling the Effective Altruism Funds.

Some details about this idea are below. We’d really appreciate feedback on whether this is the kind of thing the community would like to see CEA working on. We’ve also been getting input from our mentors at Y Combinator, who are excited about this idea.

The Idea

EAs care a lot about donating effectively, but doing so is hard, even for engaged EAs. The easiest options are GiveWell-recommended charities, but many people believe that other charities offer an even better opportunity to have an impact. The alternative, for them, is to figure out: 1) which cause is most important; 2) which interventions in that cause are most effective; and 3) which charities executing those interventions are most effective yet still have a funding gap.

Recently, we’ve seen demand for options that allow individuals to donate effectively while reducing their total workload, whether by deferring their decision to a trusted expert (Nick Beckstead’s EA Giving Group) or randomising who allocates a group’s total donations (Carl Shulman and Paul Christiano’s donation lottery). We want to meet this demand and help EAs give more effectively at lower time cost. We hope this will allow the community to take advantage of the gains of labor specialization, rewarding a few EAs for conducting in-depth donation research while allowing others to specialize in other important domains.
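The donation lottery mechanism mentioned above can be sketched in a few lines: each participant’s chance of allocating the whole pot is proportional to their contribution, so everyone’s expected allocation equals what they put in, but only the winner has to do the in-depth research. This is a minimal sketch, not Carl and Paul’s actual implementation; the names and amounts are illustrative.

```python
import random

def run_donor_lottery(contributions, rng=random):
    """Pick one participant to allocate the entire pooled pot.

    Each participant wins with probability proportional to their
    contribution, so everyone's *expected* allocation equals what
    they put in -- but only the winner does the charity research.
    """
    pot = sum(contributions.values())
    (winner,) = rng.choices(
        list(contributions), weights=list(contributions.values()), k=1
    )
    return winner, pot

# Illustrative: Alice wins with probability 100/400 = 25%,
# so her expected allocation is still $100.
winner, pot = run_donor_lottery({"Alice": 100, "Bob": 300})
print(f"{winner} allocates the full ${pot}")
```

The point of the weighting is that the lottery is fair in expectation: no one gives up any expected influence by entering.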

The Structure

Via the EA Funds, donors will be able to allocate their donations to one or more funds, each with a particular focus area. Donations will be disbursed based on the recommendations of fund managers. If people don’t know which cause or causes they want to focus on, we’ll have a tool that asks them a few questions about key judgement calls and then makes a recommendation, as well as more in-depth materials for those who want to deep-dive. Once people have made their cause choices, fund managers use their up-to-date knowledge of charities’ work to do charity selection.
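Mechanically, the flow described above is simple: donors split each donation across funds, the platform pools each fund, and grants are paid out on the manager’s recommendation. A minimal sketch of the pooling step, with illustrative amounts (the fund names are the four from this post):

```python
from collections import defaultdict

def pool_donations(donations):
    """Pool donations per fund, given each donor's percentage split.

    `donations` is a list of (amount, split) pairs, where `split`
    maps fund names to fractions that must sum to 1.
    """
    pools = defaultdict(float)
    for amount, split in donations:
        assert abs(sum(split.values()) - 1.0) < 1e-9  # split must total 100%
        for fund, fraction in split.items():
            pools[fund] += amount * fraction
    return dict(pools)

# Illustrative: one donor splits $100 evenly; another gives $300 to one fund.
pools = pool_donations([
    (100.0, {"Global Health and Development": 0.5, "Animal Welfare": 0.5}),
    (300.0, {"Long-run future": 1.0}),
])
print(pools)
# {'Global Health and Development': 50.0, 'Animal Welfare': 50.0,
#  'Long-run future': 300.0}
```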

We want to keep this idea as simple as possible to begin with, so we’ll have just four funds, with the following managers:

  • Global Health and Development – Elie Hassenfeld

  • Animal Welfare – Lewis Bollard

  • Long-run future – Nick Beckstead

  • Movement-building – Nick Beckstead

(Note that the meta-charity fund will be able to fund CEA; and note that Nick Beckstead is a Trustee of CEA. The long-run future fund and the meta-charity fund continue the work that Nick has been doing running the EA Giving Fund.)

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy. First, these are the organisations whose charity evaluation we respect the most. The worst-case scenario, where your donation just adds to Open Philanthropy’s funding within a particular area, is therefore still a great outcome. Second, they have the best information available about what grants Open Philanthropy are planning to make, so they have a good understanding of where the remaining funding gaps are, and can use the money in an EA Fund to fill gaps they consider important but that Open Philanthropy isn’t currently addressing.

The Vision

One vision I have for the effective altruism community is that its members can function like a people’s foundation: any individual donor on their own might not have that much power, but if the community acts together they can have the sort of influence that major foundations like the Gates Foundation have. The EA Funds help move us toward that vision.

In the first instance, we’re just going to have four funds, to see how much demand there is. But we can imagine various ways in which this idea could grow.

If the initial experiment goes well, then in the longer run we’d probably host a wider variety of funds. For example, we’re in discussion with Carl and Paul about running the Donor Lottery fund, which we think was a great innovation from the community. Ultimately, it could even be that anyone in the EA community can run a fund, with competition between fund managers: whoever makes the best grants gets more funding. This would overcome a downside of using GiveWell and Open Philanthropy staff members as fund managers, which is that we potentially lose out on the benefits of a larger variety of perspectives.

Having a much wider variety of possible charities could also allow us to make donating hassle-free for effective altruism community members. Rather than every member of the community making individual contributions to multiple charities, and figuring out for themselves how to do so as tax-efficiently as possible, they could set up a direct debit through this platform, specify how much they want to contribute to which charities, and we could take care of the rest. And, with respect to tax efficiency, we’ve already found that even professional accountants often misadvise donors about the size of the tax relief they can get. At least at the outset, only US and UK donors will be eligible for tax benefits when donating through the funds.

Finally, we could potentially use this platform to administer moral trades between donors. At the moment, people just give wherever they think is best. But this loses out on the potential for a community to have more impact, by everyone’s lights, than it otherwise could.

For example, imagine that Alice and Bob both want to give $100 to charity, and see these donations as producing the following amounts of value relative to one another (e.g. Alice believes that a $100 donation to AMF produces 1 QALY):

                Alice    Bob
  AMF           1        0.5
  SCI           0.5      1
  Charity C     0.8      0.8

(Values are QALYs per $100 donated; “Charity C” is a third option that both value fairly highly.)
This means that if Alice and Bob were each to give to the charities that they think are most effective (AMF and SCI, respectively), they would evaluate the total value as being:

1 QALY (from their donation) + 0.5 QALYs (from the other person’s donation)

= 1.5 QALYs

But if they coordinated, and both gave to a charity that each values at 0.8 QALYs per $100, they would evaluate the total value as being:

0.8 QALYs (from their donation) + 0.8 QALYs (from the other person’s donation)

= 1.6 QALYs
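The arithmetic above can be checked with a short sketch. The post fixes the values 1, 0.5, and 0.8; the compromise option (here called "C") that both parties rate at 0.8 is illustrative:

```python
# Each person's valuation of a $100 donation, in QALYs.
# "C" is an illustrative compromise charity both parties rate at 0.8.
valuations = {
    "Alice": {"AMF": 1.0, "SCI": 0.5, "C": 0.8},
    "Bob":   {"AMF": 0.5, "SCI": 1.0, "C": 0.8},
}

def total_value(person, donations):
    """Total value of everyone's donations, by `person`'s lights."""
    return sum(valuations[person][charity] for charity in donations)

# No coordination: each gives to their own favourite.
separate = ["AMF", "SCI"]          # Alice -> AMF, Bob -> SCI
assert total_value("Alice", separate) == 1.5
assert total_value("Bob", separate) == 1.5

# Moral trade: both give to the compromise charity instead.
paired = ["C", "C"]
assert total_value("Alice", paired) == 1.6
assert total_value("Bob", paired) == 1.6
```

Both parties judge the coordinated outcome better, even though neither is giving to their own top pick.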

The same idea could apply to the timing of donations, too, if one party prefers to donate earlier and another prefers to invest and donate later.

We’re still exploring the EA Funds idea, so we welcome suggestions and feedback in the comments below.