Effective Altruism Grants project update

The Centre for Effective Altruism has distributed its first round of grants through its new Effective Altruism Grants program. The aim of the project is to relieve funding constraints for high-impact projects. You can read more about our motivation and aims.

This post details (1) the grants we've made, (2) our assumptions, (3) the grant methodology, (4) cost-benefit considerations, (5) mistakes, (6) difficulties, (7) project changes, and (8) our plans for EA Grants going forward.

Grants

We are sharing information about our grants this round to give people a better sense of what kinds of projects we look for, should we run EA Grants rounds in the future. You can see the grants we made.

We have allocated £369,924 for distribution, withholding the remainder of the allotted £500,000 to further fund some of the current recipients, contingent on performance.

We also facilitated the funding of grants by the Open Philanthropy Project and a couple of private donors.

Assumptions

We made many implicit assumptions in deciding whether and how to run EA Grants. A few of the major ones include:

Many good projects are hamstrung by small funding gaps.

We believe some high-value projects have unmet funding needs. The individuals and small organizations we decided to fund are generally too small to get on the radar of foundations like the Open Philanthropy Project, and small donors rarely have time or expertise to evaluate many small projects. But there are high returns to funding them.

Value alignment is useful for maintaining project relevance.

In order to be comfortable with this arrangement, we placed particular emphasis on evaluating value alignment, altruistic motivation, and judgment. Value alignment was particularly important, even more so than an ostensibly well-defined project, since some autonomy is inevitable. All else equal, we preferred projects by applicants with a track record of doing this or other projects well, on a voluntary or selflessly motivated basis. (One exception to this rule is that we must stipulate that funding is not used for certain activities that do not fit within our charitable objects.)

At this funding level, a hefty application process would be more costly than useful.

Many grantmaking processes require multi-page proposals. Since our grants were both smaller and more speculative than many of the grants foundations distribute, applications of that length felt unnecessarily costly, both for the applicants and for us as evaluators. This had costs of its own: projects that are hard to describe briefly suffered from insufficient space to make their case. We tried to get the best of both worlds by requesting additional information where applications were hard to assess with just what we had. We leave open the possibility of longer proposals should we run subsequent rounds.

Grant methodology

The grants application process had three rounds, and is best described as a process-based approach.

First round

In the first round, the three grants associates eliminated applications that clearly would not meet our selection criteria. We received 722 applications and desk-rejected 413 of them, about 57% of the applicants.

Second round

The second round involved taking the remaining applications and assessing applicants on their track record, values, and plans. This assessment adhered to a rubric, weighting each category in accordance with its predictive power for project success. The scores combined into one weighted score per applicant, which we used to rank the remaining applicants.

We then went through the list by rank and chose applicants to interview, discussing applicants about whom there was large divergence in scores or general opinion. Given our £500,000 budget and most of three staff members' time for two weeks, we decided to interview 63 candidates.
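The scoring step above can be sketched in a few lines. The category names, weights, and scores here are illustrative assumptions, not the actual rubric:

```python
# Hypothetical sketch of the second-round scoring: each applicant receives
# per-category scores, each category carries a weight reflecting its assumed
# predictive power for project success, and applicants are ranked by the
# resulting weighted sum. Weights and scores below are invented for illustration.

WEIGHTS = {"track_record": 0.5, "values": 0.3, "plans": 0.2}

def weighted_score(scores: dict) -> float:
    """Combine per-category scores (0-10) into one weighted score."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

def rank_applicants(applicants: dict) -> list:
    """Return applicant names sorted by weighted score, best first."""
    return sorted(applicants, key=lambda name: weighted_score(applicants[name]),
                  reverse=True)

applicants = {
    "A": {"track_record": 8, "values": 9, "plans": 6},  # 7.9 weighted
    "B": {"track_record": 6, "values": 8, "plans": 9},  # 7.2 weighted
}
print(rank_applicants(applicants))  # → ['A', 'B']
```

In this setup, a strong track record can outweigh stronger plans, which matches the emphasis the post describes.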

Third round

Most candidates had three 10-minute interviews, which we used to further assess their achievements, values, and plans. Candidates we knew well received only one interview. For candidates with skillsets we couldn't evaluate internally, we arranged a fourth interview with a relevant technical expert. We then used the data from these interviews, as well as any additional information requested from references and/or the applicants themselves, to adjust their written application scores. While each interviewer could modify scores in all three categories, each interviewer had a category of focus, so their assessments in their respective area received the most weight.

Finally, we went through the new rank-ordered list and decided whom to fund and how much. We initially assigned grant values to candidates in rank order until we'd exhausted the funding pool, then adjusted amounts to fit the particular circumstances of the grantees. These considerations included our credence in the score given, the counterfactuals of funding each candidate, the potential risks associated with the candidate and/or their proposal, and what candidates could do with money on the margin. We passed promising candidates who did not fit our charitable objects, or who requested amounts beyond our funding capacity, on to some private donors associated with CEA and/or the relevant program officer at the Open Philanthropy Project.
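The initial rank-order pass can be sketched as a simple greedy allocation; the names and requested amounts below are hypothetical, and the real process then adjusted amounts case by case:

```python
# Sketch of the first allocation pass: walk the rank-ordered list and grant
# each candidate's requested amount until the pool runs out; the last
# affordable candidate may be funded only partially. Data is hypothetical.

def allocate(ranked_requests: list, pool: int) -> dict:
    """ranked_requests: [(name, amount_requested)], sorted best-first."""
    grants = {}
    remaining = pool
    for name, requested in ranked_requests:
        if remaining <= 0:
            break
        granted = min(requested, remaining)
        grants[name] = granted
        remaining -= granted
    return grants

ranked = [("A", 60_000), ("B", 300_000), ("C", 250_000)]
print(allocate(ranked, 500_000))  # → {'A': 60000, 'B': 300000, 'C': 140000}
```

Here candidate C is funded partially because only £140,000 of the pool remains, mirroring the "partially or in full" outcome described below.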

Through this process we selected 22 candidates to fund, partially or in full, and passed another 11 on to the Open Philanthropy Project.

Cost-benefit considerations

An important consideration in our thinking is whether the costs of running EA Grants exceed its benefits. Since the counterfactual is likely a future grant made by the Open Philanthropy Project, one angle for evaluating EA Grants is to compare its costs and benefits relative to the distribution(s) Open Phil might have made otherwise. CEA distributed £600 per hour worked by the grants team, whereas we estimate Open Phil distributes ~£20,000 per hour. However, we think a comparison made in this way has limitations.

Costs

The costs are the £500,000 disseminated, plus ~740 CEA staff hours thus far. We expect to spend another 100 hours on activities related to this round of grantees, mostly arranging mentors and ensuring financial regulatory compliance. There have also been costs to other EA organizations, mostly the Open Philanthropy Project, which has decided to evaluate and fund some of the grantees who went through the application process.

An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants. This, of course, ignores the time costs of institution-building. Much of the time we spent in this funding round went to building the internal grants infrastructure and relationships with other funders. Should we run this project again, we expect to be able to run a similar grants process in a fraction of the time.
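The per-hour figures quoted earlier are consistent with a quick back-of-the-envelope check using the post's own numbers (rounded):

```python
# Rough check of the distribution-rate comparison, using figures from the post.

# CEA: a £500,000 pot over ~740 hours so far plus ~100 expected hours.
cea_rate = 500_000 / (740 + 100)
print(round(cea_rate))       # ~595, i.e. roughly the £600/hour cited

# Open Phil: the ~25-hour estimate for a pot this size implies their rate.
open_phil_rate = 500_000 / 25
print(int(open_phil_rate))   # 20000, matching the ~£20,000/hour estimate
```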

This ratio has limited meaning, most notably because it ignores that Open Phil found this project compelling enough to fund. While they can distribute more funding per hour, having achieved scale, we find it plausible that the additional costs for these smaller projects were still worth it on the margin. Our cost per dollar distributed is less than that of other impact-focused foundations, and likely on par if we were to factor in staff overhead time.

Benefits

The benefits are notably quite complicated to calculate. Any individual project is itself going to be challenging to evaluate, since most of the value is likely to come from hard-to-track, long-term changes to sentiments and behavior. Rather than try to compute the value of each grant with a common base metric, we have instead opted for projects that seem robustly positive should they work. This, again, is not unlike Open Phil's strategy; the real question is how effective we think our distributions were compared to theirs.

It seems likely that, given the scale of interventions Open Phil generally considers, we picked up on value they would not have, and that our funding was more valuable per dollar than where they would have given. Reasons to believe this include:

  • Scaling potential. By funding early-stage projects, many with plans to grow, the returns to funding at this stage are higher variance but also have higher potential upside.

  • Inexpensive salaries. Most people requested living wages at or below nonprofit employee salaries.

  • Funding individuals. Not only were salaries cheaper, but individuals are cheaper to fund than organizations, which often spend 1.7x an employee's salary on overhead.

CEA's counterfactuals are unclear. We are unsure whether CEA would have received the additional money were EA Grants not in our plans. Assuming it would not have, Open Phil might have later granted the money to some other community-building activity. Had CEA staff not worked on this program, we would have accelerated progress on writing collated EA content, built out the EA events infrastructure, and worked on plans for EA academic engagement. As for the projects we funded, we estimate that about one quarter wouldn't have happened at all, and the rest would have received less time, since the grantees would have pursued other funding (from the Open Philanthropy Project or elsewhere) or self-funded by working or going into personal debt.

Mistakes

Our communication was confusing. We initially announced the process with little advertisement. We then advertised it in the EA Newsletter, but only shortly before the application deadline, and extended the deadline by two days.

We underestimated the number of applications we would receive, which gave us less time per candidate in the initial evaluation than we would have liked. It also caused delays, which we did not adequately communicate to applicants. We should have been less ambitious in setting our initial deadlines for replying, and should have communicated all changes in our timetable immediately and in writing to all applicants.

Our advertisement did not make sufficiently clear that we might not be able to fund educational expenses through CEA. Fortunately, the Open Philanthropy Project was receptive to considering some of the academic applicants.

Difficulties

Project evaluation

We found it hard to make decisions on first-round applications that looked potentially promising but were outside of our in-house expertise. Many applicants had proposals for studies and charities we felt under-qualified to assess. Most of those applicants we turned down; some we deferred to the relevant Open Phil program manager. We are in the process of establishing relationships with domain experts who can help us do this in the future.

Conflicts of interest

One difficulty in running this program is its susceptibility to conflicts of interest (COIs).

Many of the most promising applications came from people who are already deeply involved with the community. Involvement with the community gives us evidence of value alignment, and the community also provides a context within which it is easier to come up with proposals that we think are important.

Unfortunately, since many applicants, and particularly many of the best, were deeply involved with the community, our assessing staff tended to have many COIs. This includes one of the team members, who was both a grant evaluator and an applicant.

Rather than avoid giving where COIs existed, we adopted a view much like that of the Open Philanthropy Project; you can see the details articulated in Holden Karnofsky's post on hits-based giving. We recognized and tried to mitigate the effects of COIs by asking for expert input, expecting domain expertise to help correct for personal, domain-irrelevant sentiments. Another means of reducing the impact of COIs would have been to develop in-house expertise in areas about which we formerly knew little, but given our process-based approach and comparatively limited internal capacity, that was both less necessary and less feasible.

The measures we took include:

  • Blinding applications during the first and second rounds of the application process, such that all written applications received scores while anonymized.

  • Asking staff members to declare conflicts of interest with finalists, where they existed. The team then found replacement interviewers and asked the associated staff member to step out of decisionmaking for those candidates.

  • Deferring applications to staff of the Open Philanthropy Project when the project proposals were outside our domains of expertise.

  • Establishing scores in our rubric associated with observable measures, tying applicants' scores to specific features of their abilities and plans rather than our general impression.

  • For the grant applicant who was also an assessor, removing him from all discussions about his application, obscuring his score and ranking, and subjecting him to the same evaluation process as all other grantees.

Project changes

We can't fund educational expenses.

We adhered closely to our grant areas, funding nothing out of scope of what we described on the website. However, we have since determined that we cannot fund all of the projects for which we encouraged people to apply. Most notably, CEA's charitable objects do not allow us to pay for educational expenses, making it impossible for us to give grants for Masters or PhD programs. However, the Open Philanthropy Project is able to do so and has started to consider funding candidates pursuing research in their priority areas.

We are unlikely to make grants for longer than a year.

While we offered opportunities for grant renewal, we didn't make any grants lasting more than a year. This was more a result of happenstance than an intentional decision. For the few finalists who requested more than a year of funding, we were sufficiently unsure of either their proposal or their future funding situation that we did not want to commit more than a year upfront. That being said, we're still open to doing so in the future.

Plans going forward

It seems likely that we will run a similar program in the future. Kerry Vaughan has just taken over ownership of this project, and will be in charge of deciding on and implementing changes. That being said, the initial EA Grants team has many ideas for how to improve the scheme, and in particular how to address the mistakes discussed above. We will coordinate with Kerry and post again when we have more information.

As I will no longer work on this project after October 6th, please direct questions and comments to eagrants@centreforeffectivealtruism.org.

Thanks to Ryan Carey, Rohin Shah, and Vipul Naik for corrections to this post.