Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

[Disclosures and disclaimers: I do work for the Future of Humanity Institute, and significant consulting for Open Phil. I have previously worked at MIRI, and consulted for 80,000 Hours/CEA. I have advised the EA Giving Group DAF regarding donations, and suggested to Paul Christiano that he facilitate the donor lottery discussed in the post. I am writing only for myself, and not for any of the above organizations.]


Risk-neutral small donors should aim to make better charitable bets at the margin than giga-donors like the Open Philanthropy Project (Open Phil) and Good Ventures by using donor lotteries, and can do at least as well as giga-donors by letting themselves be funged. To do so, I recommend entering Paul Christiano's donor lottery. If you win, do more research, including into the option of entering a larger lottery to reach an optimal size in between small donors and giga-donors.


It's giving season and numerous sets of recommendations for individual donors are out: GiveWell top charities, GiveWell staff donations, Center for Effective Altruism (CEA) staff donations, Open Philanthropy (Open Phil) staff recommendations for individual donors, Animal Charity Evaluators (ACE) recommendations, ACE staff personal donations, and an additional post on the EA Forum with several donors' decisions and reasoning (plus more in comments).

There are also opportunities that get pursued and funded by major funders but not pitched to small donors in these lists. The Open Philanthropy grant database is a rich source of examples (these are not recommendations for individual donors). These opportunities may be costly to explain to small donors, may benefit from large minimum grant sizes (startup funds for a new organization, or stable funding to hire for a new position, etc.), may be funded as soon as they are identified, or may just happen to lack any charity evaluator doing the work of evaluating them for small donors.

Some people in the effective altruist community have argued that small donors should accept that they will use marginal charitable dollars less efficiently than large actors such as Open Phil, for lack of time, skill, and scale to find and choose between charitable opportunities. Sometimes this is phrased as advice that small donors follow GiveWell's recommendations, while Open Phil pursues other causes and strategies such as scientific research and policy.

In contrast, my view is that risk-neutral effective altruist small donors should plan to make bets at least as good as the benefit of the marginal dollar for a giga-donor like Good Ventures with access to research resources like Open Phil. The 'at least' component is easy to achieve: to match the performance of a giga-donor's marginal dollar, you can substitute one of your dollars for one of its dollars in a grant that it would otherwise make, and email asking it to 'funge' you by reducing its grant accordingly. This is a great lower bound to have, as it takes advantage of the research capacity and economies of scale of a sophisticated donor working hard to find the best opportunities. It is also a very challenging benchmark to beat: how could a small donor with limited research time or scale economies plan to make even better bets?

The availability of donor lotteries (see here for details) means that small donors can convert their small donation into a probability of being a large donor. So whether the scale of the best marginal donor (per dollar) would be small, medium, large, or enormous, the small donor can access that scale. There are both advantages and disadvantages of larger scale, and the optimum distribution could lie in various places. Would it be best to have a single ultra-scale donor making grants (including grants for re-granting), 10 donors of 1/10th the scale, or 2 donors with 25% each and 100 with 0.5%?

In this post, I argue that while economies of scale favor donors with substantial budgets to enable deeper research and sourcing of opportunities, diseconomies of scale favor an intermediate range well short of $10 billion at the current margin, in the $100,000-$100,000,000 range. Small donors can access these opportunities through staged ascent via donor lotteries to larger scales (e.g. a donor with a budget of $1,000 might first lottery up to $100k, then assess further lottery steps).

In outline:

  1. Small donors can use donor lotteries to become larger donors, capturing scale economies and making it worthwhile to research key questions about donation conditional on winning.

  2. If you believe that any particular large donor will make better donation decisions than you in expectation (by your values), then you can delegate your donation decision, e.g. by donating to an Open Phil grantee and asking Open Phil to 'funge' you and reduce its donation by that amount.

  3. Selecting a delegate for donation decisions, or comparing delegation to an object-level opportunity, is itself a research problem one can invest in, and those investments can be made cheaper using donor lotteries.

  4. Assessing the value of a marginal dollar given to a giga-foundation depends on the value of the 'last dollar,' after further research and updates, but also after diminishing returns.

  5. Organization-wide risks and distractions are more costly for larger donors, allowing smaller donors to better harvest opportunities with these downsides.

  6. Funding charitable opportunities entirely from a single source carries some downsides. Multiple intermediate-size donors can reduce these.

  7. Intermediate-scale donors may be able to spend more time per dollar allocated than smaller or larger donors.

  8. Differences in values or worldviews may mean a small donor can't find a large donor with fully aligned aims to delegate to, and may want to create one, or use a more aligned delegate of intermediate size.

  9. In my view some medium-size donors have been able to take advantage of some of these factors to outperform the 'last dollar' of giga-donors.

  10. Advice for small donors transitioning to medium and larger scales using donor lotteries.

Donor lotteries make highly informed donation affordable for small donors in expectation

As I discussed in a recent post, donor lotteries allow small donors to take advantage of economies of scale in donation and research by buying a small chance of allocating a large donation pool. E.g. instead of giving $1,000, one could buy a 1 in 100 chance of allocating $100,000, or a 1 in 1,000 chance of allocating $1,000,000. If you 'win' the lottery then you can invest in better-researched donation, but that investment only needs to be made in a minority of cases. In the previous examples this would cut the expected cost of research per dollar allocated by over 99%.
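The expected-cost arithmetic here can be sketched in a few lines (the $5,000 research-cost figure is a hypothetical stand-in for the value of a winner's research time, not a number from the post):

```python
# Illustrative arithmetic for the donor-lottery example above.
def expected_research_cost_per_dollar(donation, pool, research_cost):
    """Expected research cost per dollar allocated, if research is done
    only after winning a lottery for the full pool."""
    p_win = donation / pool  # e.g. $1,000 into a $100,000 pool -> 1%
    # Research happens only on a win, so its expected cost is p_win * cost;
    # the expected dollars allocated stay equal to the donation itself.
    return (p_win * research_cost) / donation

# Without a lottery: a $1,000 donor researching directly pays the full cost.
direct = 5_000 / 1_000
# With a lottery: the same research cost is incurred only ~1% of the time.
lottery = expected_research_cost_per_dollar(1_000, 100_000, 5_000)

print(direct)            # 5.0 dollars of research per dollar donated
print(round(lottery, 6)) # 0.05 -- a 99% reduction (99.9% for the $1 MM pool)
```

The same function with `pool=1_000_000` gives 0.005, matching the "over 99%" reduction in the text.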

There are many investments of time and resources in improving donation quality one could make after a 'win':

  • Evaluating the rest of the arguments in this post

  • Reading the work of charity evaluators and advisors like GiveWell or ACE in depth to evaluate your trust in their recommendations, and spot-checking or auditing sampled claims

  • Evaluating how idiosyncratic values and priorities interact with the recommendations (e.g. putting your own values and estimates into the GiveWell cost-effectiveness spreadsheet)

  • Having personal discussions (although sharing public notes can generate additional value) with staff from organizations and respected independent advisors; this can be done in a depth that would not be worth their time for a small donation

  • Deeply engaging with ethical questions relevant to prioritization, such as the treatment of nonhuman animals and future generations, or population ethics

  • Evaluating whether to lottery up to larger amounts where diminishing returns might matter more (lottery for $100,000 initially, then evaluate whether to go for $1,000,000 if you win the first lottery)

  • Considering how much to donate now vs. later (since the funds are in a DAF, they could compound there until donation)

  • Taking time off work for research (otherwise unattractive)

  • Seeking out donation opportunities with large minimum size

  • Putting out a call for proposals inviting organizations and individuals to pitch you

  • Hiring your own research staff, along the lines of Open Phil

  • Negotiating with other large donors regarding funging and coordination

  • Spot-checking/auditing the work of charities or charity recommenders

The post also has information about a donor lottery being run by Paul Christiano at my suggestion, with personal participation by a number of effective altruists (including GiveWell and Open Phil staff), and the details of the public random draw.

In light of the availability of donor lotteries, the rest of this post will assume that large donation sizes and time investments are (probabilistically) accessible to small donors.

If you believe the expected impacts of donations by another donor are greater than your own, you can delegate your donation

Suppose that you thought the Bill and Melinda Gates Foundation was the 'smart money' and better at giving than you according to your values. In that case, if you had no better alternative, you could simply donate to the Foundation. That's what Warren Buffett is doing, with a donation worth over $30 billion. In principle one could donate to the donor-advised fund (DAF) of Open Phil, directly increasing its ultimate donation capacity. At the moment, this doesn't seem to be set up, but one could instead donate to something that Open Phil is donating to (inframarginal), and request that it 'funge' you by reducing its own donation to that charity by the corresponding amount, increasing the reserves of Good Ventures and other Open Phil backers accordingly. So the marginal Open Phil/Good Ventures dollar sets a minimum standard for risk-neutral donors: if you don't expect to do better than Open Phil, just arrange to get 'funged.'

Likewise, one can delegate to other trusted medium and small donors. Holden Karnofsky discusses this on the 2016 GiveWell staff personal donations page:

  • I thought about reallocating my giving to another individual, someone who is quite value-aligned with me and quite knowledgeable, and thinks differently enough that they might see opportunities I don't. As a general point, I think reallocating to others addresses a similar issue to the donor lottery—trying to consolidate donations so that a smaller number of people can put in a greater amount of effort—and it seems to me that it is a better way of doing so when one has a person in mind they're comfortable reallocating to. (Of course, hybrid approaches are possible too—one could reallocate to a person who then plays the lottery, with the winner of the lottery considering reallocation as well.)

I haven't finalized my decision yet, but I am leaning toward the last option. The "EA Giving Group" DAF mentioned by Nick is one possibility, and there are others as well.

This option means that risk-neutral effective altruist donors trying to maximize expected impact with their donation should take donation delegates as lower bounds for an expected value 'hurdle rate.' If one lacks much evidence about the quality of a donor, this may be a fairly low bar: just as with selecting object-level charities, evaluating the knowledge, capacities, motives, and constraints of a possible delegate is a significant research task. However, there is substantial evidence available about these things for a number of possible delegates, and donor lotteries can be used to reduce that research cost and identify some strong donors.

Thus, a risk-neutral effectiveness-oriented donor who donates to charity X implicitly communicates that they think it will use the marginal dollar better than the Gates Foundation, or Good Ventures and Open Phil, multibillion-dollar organizations with large professional staffs (particularly the former) working full-time to pick out the best grant opportunities.

Having the marginal Open Phil dollar as a lower-bound hurdle rate is great news: my expectation for the 'last dollar' of that portfolio is exceptionally high relative to the general world of charity. Among other things, I think even after diminishing returns some combination of scientific research (e.g. gene drives to eradicate vector-borne diseases), policy work (e.g. on foreign aid or science policy), nonhuman animals, global catastrophic risks (potential risks from AI, biosecurity, nuclear risk), and others puts the expected value of the 'last dollar' for Good Ventures higher than for GiveWell's top charities.

So saying that charity X is a better bet than adding to Open Phil's reserves, or than saving in a donor-advised fund to make use of future findings from Open Phil research, is a strong claim. Nonetheless, despite full knowledge of this argument, including blogging about it and related issues, I have for a number of years made other recommendations to effective altruists seeking donation advice (albeit frequently mentioning this consideration), recommendations which implied that claim about various opportunities. Why did I believe that?

Outperforming a giga-donor means outperforming the expectation of their 'last dollar,' after increases in knowledge and diminishing returns

In a recent Open Phil post Holden wrote “we expect to have over $100 million worth of grants for which the investigation is completed this year [2016]” for Open Phil, not including the $50 MM donation to GiveWell top charities recommended to Good Ventures. While these are sizable amounts, in light of Good Ventures' resources (on the order of $10 BB), it is saving almost all of its financial resources for better future opportunities, at the same time as it supports, through Open Phil, research into better prioritizing and identifying such opportunities.

In the debate on giving now vs. later, so far this reflects a lean towards later. A 2015 GiveWell blog post discussing recommendations of grant timing to Good Ventures discusses plans for giving to rise as research into cause and charity selection continues and staff capacity at Open Phil increases. I would expect this process to tend to improve the quality of future recommendations and donations, as true beliefs will tend to be favored by careful investigation.

However, a massive offsetting contrary consideration is diminishing returns. Donations that make a large proportional difference to funding in a field, or seed a new field, can pluck 'low-hanging fruit.' [Also see Owen Cotton-Barratt on diminishing returns.] Open Phil is large relative to some of the philanthropic fields it works in, such as factory farming or potential risks from artificial intelligence, but in global health it is small relative to players such as the Gates Foundation. As it builds up capacity and grows small priority fields, there will be much less room for major proportional changes to funding waterlines.

For illustration, imagine logarithmic returns, where a proportional expansion provides the same utility gain regardless of the previous size of the field. For some smaller fields expansion by a factor of 10-100x is possible, which would then correspond to a 90-99% reduction in marginal impact therein. Even if further research will predictably and substantially improve donation allocation and reduce uncertainty, allocating enough for some growth while fields are small can beat the last dollar in expectation (until the rival factors are in balance, which would still involve most spending lying in the future for donors in aggregate).
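A minimal sketch of this illustration, assuming utility is exactly logarithmic in field size:

```python
import math

# Under log returns, utility = log(field size): a given proportional
# expansion yields the same utility gain at any starting size, while the
# marginal impact of one extra dollar falls as 1/size.
def marginal_impact(field_size):
    # d/dx log(x) = 1/x: value of one extra dollar at this field size
    return 1.0 / field_size

small = marginal_impact(1_000_000)         # a $1 MM field
after_10x = marginal_impact(10_000_000)    # after 10x growth
after_100x = marginal_impact(100_000_000)  # after 100x growth

# 10x expansion cuts marginal impact by 90%, 100x by 99%:
print(round(1 - after_10x / small, 6), round(1 - after_100x / small, 6))

# Yet the utility gain from any 10x expansion is log(10), regardless of
# whether the field starts at $1 MM or $100 MM:
gain_small = math.log(10_000_000) - math.log(1_000_000)
gain_large = math.log(1_000_000_000) - math.log(100_000_000)
print(round(gain_small, 6), round(gain_large, 6))
```

This is why growing a small field early can beat the 'last dollar': the same proportional utility gain is bought far more cheaply while the field is small.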

The chance to pluck time-sensitive low-hanging fruit seems to me to be the main reason for donors, large and small, to be giving some now, rather than saving (or giving to a DAF to take advantage of tax benefits) to await improved research into opportunities and investment returns. For small donors, getting to low-hanging fruit when existing large donors have not seems key to outperformance, but such opportunities depend on some barrier to entry, an explanation of why the large donors haven't taken the opportunities.

Smaller donors have less to lose from organizational systemic risk

A giga-donor on track to spend billions of dollars plucking low-hanging fruit in many fields is a tremendously valuable asset. If some grants trade off a fixed altruistic benefit per dollar against some risk to the donor's organization, where the risk scales with the importance of the organization, then they may have negative value for large donors and positive value for small donors.

Such risks could include time-consuming complications that distract senior management making organization-wide decisions, controversies that affect the organization's reputation, and impacts on staff morale or organizational culture, among other things. Other risks could involve changes with contrary effects on work in different causes. For example, consider a foundation attempting to separately promote critical study of religious texts in ways that are seen to promote atheism and simultaneously to build relationships with offended religious leaders for cooperation on other policy issues.

A recent Open Phil blog post by Holden discusses non-monetary costs of grants, including communication costs, and the fact that grant decisions “reflect to some degree on all of the 20+ people who work for the Open Philanthropy Project.” Another post mentions effects of diverse grants with “secondary benefits...specific to a public-facing organization with multiple staff,” including ones on morale and recruiting.

This provides rational reasons for large donors to be more cautious and risk-averse in many of their activities, including their communication and grantmaking strategies, but also opportunities for new entrants to benefit from smaller size and having less to lose. New entrants would still need to consider systemic risk across the movements and cause areas they participate in, and entry could come at the expense of some synergies of diverse portfolios, but it would at least mitigate organization-specific risks.

As the number of entrants increased, they could specialize. Multiple independent specialized entrants would further reduce these risks compared to a hypothetical 'Controversial Philanthropy Project.'

Similar considerations may also provide extra reason to be cautious in hiring and staffing, as impacts on organizational culture become increasingly important.

Problems with single-donor funding

There are a number of possible problems with being the sole funder of a nonprofit. If these apply, it might be that an additional dollar from a large donor has a different impact on a charity than a dollar from a small donor.

The most important involve the compromise of independence on the part of a charity, which affects both its actual decision-making and the way it is perceived by others. For example, GiveWell has tried to limit donations for its operating budget from Good Ventures (particularly for non-Open Phil work) to preserve its independence, real and perceived, and similar considerations may apply elsewhere. Advocacy efforts may be seen as 'astroturf' or just the voice of one funder, analysts may feel less free to produce results that are unwelcome to the funder, scientists may optimize excessively for impressing the grant-maker, etc.

Further, an individual grantmaker may have particular biases, relationships, and other factors that can influence giving apart from maximizing benefit. When a nonprofit receives funds from multiple sources there is less pressure to optimize for such biases (since they can cancel out across funders).

A grantee that becomes dependent on a single large funder makes it much more difficult for the large funder to withdraw: the organization could collapse, creating significant harm for its staff, unwelcome media attention, and negative feelings and feedback for the grantmaker (who may have developed relationships with the grantee). This can harm the grantmaker's reputation and make others less willing to deal with it or depend on its funding.

There are also some legal considerations, which are generally of lesser importance. In the United States, nonprofits that qualify for 'public charity' status enjoy favorable regulatory treatment relative to private foundations: they avoid a 2% tax on investment income, limits on self-dealing and business holdings, prohibitions on lobbying, and somewhat worse tax-deductibility, and they are able to receive donations from DAFs. One requirement is that a nonprofit pass a public support test, showing that it receives at least a third of its donations from other public charities (including DAFs), government, and the general public (a given donor can only account for a maximum of 2% of this support), or 10% plus additional supporting facts and circumstances.
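The basic arithmetic of that test can be sketched as follows. This is a simplification for illustration only: it caps every donor at 2% and ignores the special treatment of government and public-charity support under the actual IRS rules.

```python
# Simplified public support test: each donor counts toward "public
# support" only up to 2% of total support, and the charity passes the
# basic test if capped public support is at least one third of the total.
def passes_public_support_test(donations):
    """donations: list of totals per donor over the measurement period."""
    total = sum(donations)
    cap = 0.02 * total  # per-donor cap at 2% of total support
    public_support = sum(min(d, cap) for d in donations)
    return public_support / total >= 1 / 3

# One donor giving 90% of the budget: almost none of it counts.
print(passes_public_support_test([900_000] + [1_000] * 100))  # False
# The same budget spread across many small donors passes easily.
print(passes_public_support_test([10_000] * 100))             # True
```

This is why a dominant giga-donor can push a grantee toward private foundation status even when many small supporters would keep it comfortably public.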

If public charity status is important for a charity, and it is close to the cutoff, then this would be a reason why there could be additional valuable donation opportunities a large donor could not claim. However, it is not clear to me that this status is essential in many cases, particularly if the alternative was much greater expansion, and it often does not apply (e.g. for donations to support a program within a large organization like a university). If it is an issue, this would be a very logical occasion for a legitimate donation matching challenge where the large donor matches donors below the 2% threshold.

The problems of single-donor dominance are again limitations that could give a per-dollar-donated advantage to new entrants, even if they had equal skills and identical views.

Spending more time investigating per dollar allocated

Major foundations commonly grant millions of dollars per year per employee, and that is currently the case for Open Phil. The Wikipedia page for Open Phil shows ~$10 MM in each of farm animal welfare and criminal justice reform since program officers Chloe Cockburn and Lewis Bollard were hired in mid and late 2015. These figures do not include any grants that have been made but not yet published to the Open Phil grants database and copied to Wikipedia, but in any case they amount to allocations of thousands of dollars of donations per program officer hour.

Considering the ~$60 MM in published grants and the entire FTE staff across all roles, funds allocated still appear to be well over $1,000 per hour. Moreover, much staff and management time has gone into capacity-building, e.g. hiring much of the current team. Open Phil has written that it would like a short-run budget closer to 5% of available capital, i.e. over $400 MM annually, even before reaching 'peak capacity' to evaluate opportunities.
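A back-of-the-envelope version of that estimate (the ~20 staff count comes from the "20+ people" figure quoted earlier; hours per work-year is my assumption):

```python
# Rough dollars-per-staff-hour estimate for the figures cited above.
published_grants = 60_000_000  # ~$60 MM in published grants
staff = 20                     # "20+ people" across all roles (assumption)
hours_per_year = 2_000         # ~40 hrs/week * 50 weeks (assumption)

dollars_per_staff_hour = published_grants / (staff * hours_per_year)
print(dollars_per_staff_hour)  # 1500.0 -- "well over $1,000 per hour"
```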

These figures suggest a new donor could invest more hours into investigating donation opportunities per dollar donated than Open Phil does, or add more donations at the same ratio. However, that raises questions about why any inputs by the new donor aren't being used by existing well-funded donors:

  • If a new donor hires research staff and program officers, why won't existing foundations hire them? Are they less effective, or diverting talent from similar activities elsewhere?

  • If the donor does investigation to guide their own donations and/or publish the results, why can't they simply do the same research and let existing players act on it?

  • Would this time be better put towards executing direct projects, letting others fund them?

However, there are clearly cases where additional organizations can relieve the difficulty of hiring in ways that are positive-sum, e.g.:

  • Hiring staff, or granting to autonomous grant pools, carries organization-wide risks, as discussed above

  • Hiring staff who live in different cities, fit different organizational cultures, etc.

  • Relieving bottlenecks on the time of top management and funders in approving and supervising hires (see this GiveWell post explaining its management time constraints on hiring)

  • A donor lottery winner may be more confident of alignment with their own aims

The findings of additional research can be published and contributed to the common stock of knowledge, including for use by giga-donors.

Scale economies for different values or worldviews

Most of the considerations above have been general structural ones about optimal scale and concentration for donors. But historically I think a lot of the outperformance by small donors that I have seen has reflected differences in what a recent Open Phil blog post calls 'worldviews.'

For example, in addition to GiveWell's explicit criteria, its published analyses of charities outside the Open Philanthropy Project do not discuss effects on nonhuman animals or future generations, and its evaluations depend on particular judgments about population ethics. Charities focused on nonhuman animals today offer the potential to affect orders of magnitude more life-years of creatures such as chickens than GiveWell recommendations affect human life-years. This disproportion is substantially larger than the disproportion in the scale of human and chicken nervous systems (see this post for some discussion), and expert opinion on average favors chicken consciousness and the moral significance of animal welfare. The disproportion between the size of the current population and future generations is even greater. The Open Phil post discusses some numbers:

For a relatively clear example, consider GiveWell's top charities vs. our work so far on farm animal welfare:

  • GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so.

  • We've estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.

  • If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. If you believe that chickens do not suffer in a morally relevant way, this implies that corporate campaigns do no good.

  • One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x). If one values humans astronomically more, this implies that top charities are a far better use of funds. It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness.
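The arithmetic in the quoted bullets can be checked directly (all figures are the rough estimates from the quote, not precise values):

```python
# Worked version of the quoted GiveWell-vs-corporate-campaigns comparison.
cost_per_human_life_year = 100  # AMF: ~$100 per year of human life
hens_spared_per_dollar = 200    # corporate campaigns, per dollar spent
years_per_hen = 2               # each hen gains ~2 years...
improvement = 0.25              # ...of 25%-improved life

# Hen-life-year-equivalents bought per dollar, and the implied cost:
hen_life_years_per_dollar = hens_spared_per_dollar * years_per_hen * improvement
cost_per_hen_life_year = 1 / hen_life_years_per_dollar
print(cost_per_hen_life_year)   # 0.01 -> one hen-life-year per $0.01

# Valuing chicken and human life-years equally, campaigns beat top
# charities by the ratio of the two costs per life-year:
ratio = cost_per_human_life_year * hen_life_years_per_dollar
print(ratio)                    # 10000.0 -> "about 10,000x"

# Valuing humans 10-100x as much still leaves campaigns 1,000-100x ahead:
for human_weight in (10, 100):
    print(ratio / human_weight)  # 1000.0, then 100.0
```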

I think similar considerations broadly apply to other comparisons, such as reducing global catastrophic risks vs. improving policy, though quantifying such causes is much more fraught.

The weight one puts on different worldviews is a natural question one could research to guide donations and try to improve on Open Phil's recommendations, if only in better reflecting one's idiosyncratic preferences. If one puts relatively more weight on nonhuman animals or future generations, or treats uncertainty differently, then this could imply a somewhat lower threshold for donation than Open Phil's diversification would, e.g. when considering factory farming or global catastrophic risk interventions.

In the context of a donor lottery, deeper investigation into worldview-related questions is one way to improve decision quality, or to better reflect idiosyncratic values. For both, but especially the former case, I would recommend cautious reflection on the reliability of one's own intuitions, and on the evidence and reflection informing various views.

Have medium-size donors in effective altruism been able to add value to large donors in the past?

The above conditions are fairly general and abstract. But, more concretely, in past years I have recommended donation targets other than contributing to the Open Phil grant pool to people asking my advice, generally in the areas of reducing potential existential risk from future developments in artificial intelligence, and developing institutions in effective altruism. These recommendations were for areas where several of the above factors applied: organizational risks, reputational/communication problems, staff bottlenecks, and interactions with broader worldviews. In subsequent years Open Phil did enter these areas, but the early grants were able to fund time-sensitive opportunities such as seed and growth funding.

These focus areas, and many of the recommendations and evaluations, have had extensive overlap with those of the 'EA Giving Group' DAF mentioned by Nick Beckstead on the 2016 GiveWell personal donations page (and I have frequently discussed charity opportunities with Nick):

Nick Beckstead

This year I am donating to the “EA Giving Group” DAF (donor-advised fund). Since 2012, one of my side projects has been working with a private individual (who has provided the vast majority of the funds and prefers to remain anonymous) to make donations to organizations working in the effective altruism space and organizations working on mitigating global catastrophic risks (especially potential risks from advanced AI). We meet every three weeks to discuss potential donation opportunities and make decisions, and we both keep up with activities in the space through relationships we've built up over time. The DAF is jointly controlled by me and this partner.

A list of donations we've made in the past (without dollar amounts) is available here (arranged by year and decreasing order of grant size). The organizations that received the most funding were the Centre for Effective Altruism (CEA), the Future of Life Institute, 80,000 Hours (part of CEA), and Founders Pledge. I think these grants have gone well overall, as has our support for Charity Entrepreneurship and the Cambridge Centre for the Study of Existential Risk. In most cases, we supported these organizations relatively early in their existence, and we've mainly supported them when they were new or relatively young.

Over the last year, Open Phil has also made grants in these areas based on my recommendations. I anticipate that there will be some cases where a grant would be a good fit for this DAF but not Open Phil. However, with Open Phil as a funder in this space it has been harder to find opportunities that are as promising and neglected as we were able to find previously.

I don’t yet know what this DAF will support in the coming year, but it will probably have a similar flavor to what was supported in the past.

I am making this donation instead of a donation to GiveWell’s top charities primarily because (i) I think this is more optimized for influencing long-term outcomes for the world (which is my primary altruistic objective—reasoning here) and secondarily because (ii) I think we have a good chance of getting a “multiplier effect” where support of the effective altruist community eventually results in more total donations to GiveWell’s top charities and other things I find comparably good.

If you want to make a contribution to this DAF, then fill out this form.

This might be a good fit for people who have some combination of the following properties: interest in effective altruism and/or global catastrophic risks, context needed to assess our (still early) track record, trust in my judgment and/or my partner’s judgment, limited time/context available to make donation decisions themselves. We update contributors on grants made a couple of times per year.

This DAF was able to enter these areas years before GiveWell or Open Phil in part because of considerations along the lines discussed in earlier sections. A 2015 post by Holden at the Effective Altruism Forum explained reasons why Open Phil was not at that time funding organizations in the effective altruism community, including:

  • High staff time requirements for investigating and making grants outside of a focus area, competing against other uses of management and staff time (like hiring more program officers)

  • Complications involving the independence of organizations (large single-funder issues)

  • Organizational risks from difficulty communicating Open Phil’s areas of agreement and disagreement with effective altruism and members of that community

  • Other donors in the EA community were highly familiar with the organizations in question and able to donate with fewer of the above costs

  • The opportunities were not considered so outstanding as to outweigh the rest of the factors

Holden’s post also noted that these non-monetary costs might decline (or the benefits rise) and that the decision might change in the future; since then, Open Phil has started to investigate and make grants in the area of effective altruism.

In the area of potential risks from advanced artificial intelligence, non-monetary and investigation costs were high for an area that was relatively unusual, controversial (potentially with various organizational costs and risks), and hard to evaluate. This cause was more important in worldviews that put more weight on long-run outcomes (and on strategies to affect them at a large scale rather than through local linear effects), which I placed more credence in (after long thought). It was also an area that I had spent a lot of time investigating from many angles, and whose evidence base took a lot of time to communicate fully.

Open Phil has taken on potential risks from advanced artificial intelligence as a major focus area, and some staff have updated some of the heuristics that affected this opportunity. So in both these areas future opportunities may less often look like funding what Open Phil can’t, and more often amount to different allocations or amounts of funding. Nonetheless, focusing on differences between the situations of smaller donors and Open Phil may also help identify new opportunities (or ones that are more costly for large donors) here, and similar gains may be had in other domains.

Another case where small and medium-sized effective altruist donors got to a cause area and plucked some low-hanging fruit before Open Phil became involved was that of nonhuman animals, both farmed animals and wild animals.

In the area of global poverty, Thomas Mather contributed to work on using gene drives to eliminate schistosomiasis, also commissioning a research report on the topic by the Philanthropy Advisory Fellowship (stemming from Harvard Effective Altruism). In this case the funds preceded Open Phil’s grantmaking in gene drives by a shorter time, and were much less independent, but may still have expedited support for the area to some extent. This is probably less of a success than some of the other candidates above, but it is the sort of thing that small and medium donors might try in order to outperform giga-donors.

The fact that Open Phil subsequently funded an area does not mean that earlier EA donors were outperforming the ‘last dollar,’ since the last dollar will be informed by much more information. But because of low-hanging fruit, it is plausible that this sort of early arrival to a cause area pays off across a portfolio of such cases.

Advice for donor lottery winners

Suppose that you participate in and win Paul Christiano’s donor lottery, and are in a position to recommend an allocation of $100,000 in charitable donations after January 15th: what would I advise in order to take advantage of economies and diseconomies of scale?
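It is worth making the lottery arithmetic explicit: each participant's chance of winning the pot is proportional to their contribution, so a risk-neutral donor's expected allocation is unchanged by entering, while the cost of researching the allocation carefully only needs to be paid in the winning branch. A minimal sketch, with hypothetical figures:

```python
# Donor-lottery arithmetic: for a risk-neutral donor, entering changes
# nothing in expectation, but concentrates research effort in one branch.
# All figures here are hypothetical.

pot = 100_000          # total pot the winner allocates
contribution = 1_000   # this donor's stake

p_win = contribution / pot          # chance of winning the whole pot
expected_allocation = p_win * pot   # equals the contribution, by construction

print(p_win)                 # 0.01
print(expected_allocation)   # 1000.0
```

The point of the construction is that nothing is lost in expectation by pooling, while the winner can justify far more investigation per dollar allocated than any individual small donor could.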

One issue to consider is that while one could then attempt a further round of donation lotteries, e.g. for a 10% chance of allocating $1,000,000 or a 1% chance of allocating $10,000,000, if there are diminishing returns over moderate amounts then low-hanging fruit might be missed (in expectation).

For example, if the most attractive opportunity (taking into account the limitations on other donors, issues with single donors providing too much funding to an organization, etc.) has room for only a few hundred thousand dollars, then a 10% chance of $1,000,000 might do significantly less good at the margin. And if a donor who is large relative to an organization’s funding drops out because of a lottery loss, the resulting fluctuation could complicate the organization’s planning.
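A toy model makes this concrete. Suppose (purely as an illustrative assumption) that the best remaining opportunity has room for $300,000, and that dollars beyond that point do only half as much good at the margin. Then a certain $100,000 beats a 10% chance of allocating $1,000,000 in expectation:

```python
# Toy model of diminishing returns to lottery scale. The room-for-funding
# figure ($300k) and the beyond-room marginal value (0.5) are hypothetical
# assumptions, not estimates for any real organization.

def good_done(amount, room=300_000, marginal_value=0.5):
    """Dollars up to `room` count at full value; dollars beyond at `marginal_value`."""
    return min(amount, room) + max(amount - room, 0) * marginal_value

certain_100k = good_done(100_000)         # all $100k lands in the best opportunity
lottery_1m = 0.10 * good_done(1_000_000)  # 10% chance to allocate $1M

print(certain_100k)  # 100000
print(lottery_1m)    # 65000.0
```

Under this assumed value function, the certain $100,000 produces 100,000 units of good while the further lottery produces only 65,000 in expectation, which is why stopping at an intermediate scale can be optimal when returns diminish over moderate amounts.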

For donors scaling up to $100,000 such risks are quite manageable, and I think they are outweighed by the gains in decision quality and options versus small donations, but I flag this as a concern at that level. Following such a lottery win and an assessment of options (including investigating worldview questions, possible donors to delegate to, and diminishing returns in promising areas and organizations), one can assess this issue for the next stage. If the possibility of chance disruptions to an area’s funding is a big concern, one can conduct a donor lottery with other (current) supporters of the same cause or organization, so that the assessment with the concentrated donation pool is guaranteed to be made by someone within the starting group.

At a larger scale, e.g. $1,000,000, $10,000,000, or more, it may make sense to ‘institutionalize’ to a greater extent: forming a team to do research, donating in support of answering specific research questions for their information value, and similar. One option would be to pool with other medium-size donors or groups like the EA Giving Group DAF, but I suspect that creating more than one such group (if one can find people with the necessary skills who want to participate) could add value by incorporating more perspectives and managing some of the diseconomies of scale discussed earlier.

