Request for comments: EA Projects evaluation platform

Edit: It is likely there will be a second version of this proposal, modified based on the feedback and comments.

The effective altruism community has a great resource: its members, motivated to improve the world. Within the community, there are many ideas floating around, and entrepreneurially-minded people keen to execute on them. As the community grows, we get more effective altruists with different skills, yet in other ways it becomes harder to start projects. It’s hard to know who to trust, and hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.

We should be concerned about this: the effective altruism brand has significant value, and bad projects can have repercussions for both the perception of the movement and the whole community. On the other hand, if good projects are not started, we miss out on value, and miss opportunities to develop new leaders and managers. Moreover, inefficiencies in this space can cause resentment and confusion among people who really want to do good and have lots of talent to contribute.

There’s also a danger that as a community we get stuck on the old core problems, because funders and researchers trust certain groups to do certain things, but lack the capacity to vet new and riskier ideas, and to figure out which new projects should form. Overall, effective altruism struggles to use its greatest resource: effective people. Also, while we talk about “cause X”, currently new causes may struggle to even get serious attention.

One idea to address this problem, proposed independently at various times by me and several others, is to create a platform which provides scalable feedback on project ideas. If it works, it could become an efficient way to separate signal from noise and spread trust as our community grows. In the best case, such a platform could help alleviate some of the bottlenecks the EA community faces, harness more talent and energy than we are currently able to do, and make it easier for us to make investments in smaller, more uncertain projects with high potential upside.

As discussed in a previous post, What to do with people, I see creating new network structures and extending existing ones as one possible way to scale. Currently, effective altruists use different approaches to get feedback on project proposals depending on where they are situated in the network: there is no ready-made solution that works for them all.

For effective altruists in the core of the network, the best method is often just to share a Google doc with a few relevant people. Outside the core, the situation is quite different, and it may be difficult to get informative and honest feedback. For example, since applications outnumber available budget slots, by design most grant applications for new projects are rejected; practical and legal constraints mean that these rejections usually come without much feedback, which can make it difficult to improve the proposals. (See also: EA is vetting-constrained.)

For all of these reasons, I want to start an EA projects evaluation platform. For people with a project idea, the platform will provide independent feedback on the idea, and an estimate of the resources needed to start the project. In a separate process, the platform would also provide feedback on projects later in their life, evaluating the fit between team and idea. For funders, it can provide an independent source of analysis.

What follows is a proposal for such a platform. I’m interested in feedback and suggestions for improvement: the plan is to launch a cheap experimental run of the evaluation process in approximately two weeks. I’m also looking for volunteer evaluators.

Evaluation process

Project ideas will get evaluated in a multi-step process:

1a. Screening for infohazards, proposals outside of the scope of effective altruism, or otherwise obviously unsuitable proposals (ca. 15 min / project)

1b. Peer review in a debate framework. Two referees will write evaluations: one focusing on the possible negatives, costs, and problems of the proposal, and the other on the benefits. Both referees will also suggest what kind of resources a team attempting the project should have. (2-5 h / analyst / project)

1c. Both the proposal and the reviews will be published anonymously on the EA Forum, gathering public feedback for about one week. This step will also allow back-and-forth communication with the project initiator.

1d. A panel will rate the proposal, using the information gathered in steps 1b and 1c and highlighting which parts of the analysis they consider particularly important. (90 min / project)

1e. In case of disagreement among the panel, the question will be escalated and discussed with some of the more senior people in the field.

1f. The results will be published, probably both on the EA projects platform website and on the Forum.

In a possible second stage, if a team forms around a project idea, it will go through a similar evaluation, focusing on the fit between the team and the idea, possibly with the additional step of a panel of forecasters predicting the success probability and expected impact of the project over several time horizons.
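To make the forecasting step concrete: one simple way to combine the panel’s predictions would be to average the forecasters’ success-probability estimates per time horizon. This is only a hypothetical sketch; the proposal does not specify an aggregation method, and the function name and data below are my own invention.

```python
from statistics import mean

def aggregate_forecasts(forecasts: dict[str, list[float]]) -> dict[str, float]:
    """Average the panel's success-probability estimates for each horizon.

    `forecasts` maps a time horizon (e.g. "1y") to the individual
    forecasters' probability estimates for that horizon.
    """
    return {horizon: mean(probs) for horizon, probs in forecasts.items()}

# Hypothetical panel of three forecasters over two time horizons:
panel = {"1y": [0.30, 0.40, 0.35], "3y": [0.50, 0.60, 0.55]}
summary = aggregate_forecasts(panel)
```

More sophisticated schemes (e.g. extremizing, or weighting forecasters by track record) would slot into the same interface.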

Currently, the plan is to run a limited test of the viability of the approach on a batch of 10 project ideas, going through steps 1a-1f.
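For concreteness, the stage-1 flow above can be sketched as a small state machine. This is purely an illustration: the stage names and the `advance` logic are my own framing, not part of any agreed specification.

```python
from dataclasses import dataclass

# Stage names mirror steps 1a-1f above (illustrative only).
STAGES = [
    "screening",       # 1a: infohazard / scope check
    "peer_review",     # 1b: two "opposing" referee reviews
    "public_comment",  # 1c: anonymous posting on the EA Forum
    "panel_rating",    # 1d: panel rates the proposal
    "escalation",      # 1e: entered only if the panel disagrees
    "publication",     # 1f: results published
]

@dataclass
class Proposal:
    title: str
    stage_index: int = 0
    rejected: bool = False

    @property
    def stage(self) -> str:
        return "rejected" if self.rejected else STAGES[self.stage_index]

    def advance(self, panel_agrees: bool = True) -> None:
        """Move to the next stage; escalation (1e) is skipped when the panel agrees."""
        if self.rejected or self.stage_index >= len(STAGES) - 1:
            return
        self.stage_index += 1
        if STAGES[self.stage_index] == "escalation" and panel_agrees:
            self.stage_index += 1
```

One design point this makes explicit is that step 1e is conditional: a proposal only visits the escalation stage when the panel disagrees.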

Why this particular evaluation process

The most bottlenecked resource for evaluations, apart from structure, is likely the time of experts. This process is designed to use experts’ time in a more leveraged way, to draw on input from the broader community, and to promote high-quality discussion on the EA Forum. (Currently, problematic project proposals posted on the Forum often attract downvotes, but rarely detailed feedback.)

Having two “opposing” reviews attempts to avoid the social costs of not being nice: with clear roles, everyone will understand that writing an analysis that tries to find flaws and problems was part of the job. It can also provoke higher-quality public discussion.

Splitting steps 1b, 1c, and 1d is motivated by the fact that mapping arguments is a different task from judging them.

Project ideas lie on a spectrum: some are relatively robust to the choice of team, while the impact of others may depend mostly on the quality of the team, including the sign of the impact. By splitting the evaluation of ideas from the evaluation of idea-plus-team, it should be possible to communicate opinions like “this is a good idea, but you are likely not the best team to try it” with more nuance.

Overall, the design space of possible evaluation processes is large, and I believe it may just be easier to run an experiment and iterate. Based on the results, it should be relatively easy to make some of steps 1a-1e simpler, omit them altogether, or make them more rigorous. The stage-2 process can also be designed based on the stage-1 results.


Volunteer analysts

I’m looking for 5-8 volunteer analysts, who will write the reviews for the peer-review step (1b) of the process. The role is suitable for people with skills similar to those of a generalist research analyst at OpenPhil.

Expected time commitment is about 15-20 hours for the first run of evaluations, and, if the project continues, about 15-20 hours per month. The work will mostly happen online, in a small team communicating on Slack. There isn’t any remuneration, but I hope there will be events like a dinner during EA Global, or similar opportunities to meet.

Good reasons to volunteer

  • you want to help alleviate an important bottleneck in the EA project ecosystem

  • the work experience should be useful if you are considering working as a grant evaluator, analyst, or in a similar role

Bad reasons to volunteer

  • you feel some specific project by you or your friends was undeservedly rejected by existing grant-making organizations, and you want to help the project

Strong reason not to volunteer

  • there is a high chance you will flake out of voluntary work even if you commit to doing it

If you want to join, please send your LinkedIn/CV and a short, paragraph-long description of your involvement with effective altruism to eaproject


Proposing project ideas

In the first trial, I’d like to test the viability of the process on about 10 project ideas. You may want to propose a project idea either where you would be interested in running the project yourself, or where you would want someone else to lead it, with you helping, e.g. via advice or funding. At present, it probably isn’t very useful to propose projects you don’t plan to support in some significant way.

It is important to understand that the evaluations absolutely do not come with any promise of funding. I would expect the evaluations to help project ideas which come out of the process with positive feedback, because funders, EAs earning to give, potential volunteers, or co-founders may pick up the signal. Negative feedback may help with improving the projects, or with having realistic expectations about the necessary resources. There is also value in bad projects not happening, and negative feedback can help people move on from dead-end projects to more valuable things.

It should also be clear that the project evaluations will not constitute any official “seal of approval”: this is a test run of a volunteer project and has not been formally endorsed by any particular organization.

I’d like to thank Max Daniel, Rose Hadshar, Ozzie Gooen, Max Dalton, Owen Cotton-Barratt, Oliver Habryka, Harri Besceli, Ryan Carey, Jah Ying Chung and others for helpful comments and discussions on the topic.