2017 Donor Lottery Report

I am the winner of the 2017 donor lottery. This write-up documents my decision process. The primary intended audience is other donors: several of the organisations I decided to donate to still have substantial funding gaps. I also expect this to be of interest to individuals considering working for one of the organisations reviewed.

To recap, in a donor lottery many individuals make small contributions. The accumulated sum is then distributed to a randomly selected participant. Your probability of winning is proportional to the amount donated, such that the expected amount of donations you control is the same as the amount you contribute. This is advantageous since the winner (given the extra work, arguably the “loser”) of the lottery can justify spending substantially more time evaluating organisations than if he or she were controlling only their smaller personal donations.
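This expected-value property can be checked with a quick simulation (a minimal sketch; the donor names and contribution amounts are hypothetical, not the actual 2017 lottery figures):

```python
import random

def run_lottery(contributions, trials=100_000, seed=0):
    """Simulate a donor lottery: in each trial, one participant wins the
    whole pot with probability proportional to their contribution."""
    rng = random.Random(seed)
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    pot = sum(weights)
    winnings = {d: 0.0 for d in donors}
    for _ in range(trials):
        winner = rng.choices(donors, weights=weights, k=1)[0]
        winnings[winner] += pot
    # Average pot controlled per trial approaches each donor's contribution.
    return {d: winnings[d] / trials for d in donors}

# Hypothetical contributions summing to a $10,000 pot.
expected = run_lottery({"alice": 1_000, "bob": 4_000, "carol": 5_000})
```

Here `expected["alice"]` converges to roughly her $1,000 contribution: she controls the full $10,000 pot 10% of the time and nothing otherwise, so in expectation she is no worse off than donating directly.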

In 2017, the Centre for Effective Altruism ran a donor lottery, and I won one of the two blocks of $100,000. After careful deliberation, I recommended that CEA make the following regrants:

In the remainder of this document, I describe the selection process I used, and then provide detailed evaluations of each of these organisations.

Selection Process

I am a CS PhD student at UC Berkeley, working to develop reliable artificial intelligence. Prior to starting my PhD, I worked in quantitative finance. This document is independent work and is not endorsed by CEA, the organisations evaluated, or by my current or previous employers.

I assign comparable value to future and present lives, place significant weight on animal welfare (with high uncertainty), and am risk neutral. I have some moral uncertainty but would endorse these statements with >90% probability. Moreover, I largely endorse the standard arguments regarding the overwhelming importance of the far future.

Since I am mostly in agreement with major donors, notably Open Philanthropy, I tried to focus on areas that are the comparative advantage of smaller donors. In particular, I focused my investigation on small organisations with a significant funding gap.

To generate an initial list of possible organisations, I (a) wrote down organisations that immediately came to mind, (b) solicited recommendations from trusted individuals in my network, and (c) reviewed the list of 2017 EA grant recipients. I shortlisted four organisations from a superficial review of the longlist. Ultimately, all the organisations on my shortlist were also organisations that immediately came to mind in (a). This either indicates that I already had a good understanding of the space, or that I am poor at updating my opinion.

I then conducted a detailed review of each of the shortlisted organisations. This included reading a representative sample of their published work, soliciting comments from individuals working in related areas, and discussing with staff at each organisation until I felt I had a good understanding of their strategy. In the next section, I summarise my current views on the shortlisted organisations.

The organisations evaluated were provided with a draft of this document and given 14 days to respond prior to publication. I have corrected any mistakes brought to my attention, and have also included a statement from ALLFED; the other organisations were offered the option to include a statement but chose not to do so. Some confidential details have been withheld, either at the request of the organisation or of the individual who provided the information.

Summary of conclusions

I ranked ALLFED above GCRI as I view their research agenda as having a clearer direct path to impact and greater room for growth. GCRI’s work intentionally spans a wide range of catastrophic risks. I was most impressed by their work on modelling nuclear risk. However, while having better models is useful for cause prioritisation, I am sceptical of their ability to directly influence policy makers’ decisions, especially those of elected officials. By contrast, I find it plausible that ALLFED will make significant progress on developing alternative foods, mitigating a significant fraction of the negative effects of a nuclear war. The areas I am most uncertain of in this evaluation are the possible downside risks of ALLFED, including outreach mishaps and moral hazards.

I ranked GCRI above AI Impacts as AI Impacts’ core staff are adequately funded, and I am sceptical of their ability to recruit additional qualified staff members. I would favour AI Impacts over GCRI if they had qualified candidates they wanted to hire but were bottlenecked on funding. However, my hunch is that in such a situation they would be able to readily raise funding, although it may be that having an adequate funding reserve would substantially simplify recruitment.

I rank AI Impacts above WASR because I take a long-term future outlook. I find WASR slightly more compelling by the lights of a near-term, animal-welfare-centric outlook than I find any of the other organisations under a long-term future outlook, but the difference is not substantial enough to be relevant given my fairly low levels of moral uncertainty.

If I had an additional $100k to donate, I would first check AI Impacts’ current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there. Otherwise, I would split it equally between ALLFED and GCRI. In particular, I recommend a proportionally greater allocation to GCRI than I made. My donation to ALLFED increased their 2018 revenue by 50%: although they have capacity to utilise additional funds, I expect there to be some diminishing returns. I am also happy to discuss my thoughts further with other donors considering supporting these organisations.



Alliance to Feed the Earth in Disasters (ALLFED)

ALLFED is a non-profit conducting research and outreach on alternative food sources to be used in the event of a mass agricultural catastrophe. Their primary goal is to develop alternative food sources to allow a large fraction of humanity to survive a nuclear winter scenario. Even in the worst case of a full-scale US-Russia nuclear interchange, the majority of the human population would be outside the blast radius of nuclear detonations, living in remote rural areas or in non-combatant countries. However, there would be many indirect negative effects that impact everyone, including destruction of industrial capacity, government disruption and agricultural failures from nuclear winter. ALLFED is attempting to provide technical solutions to the last of these problems, by developing food sources that can be grown even with limited sunlight.

They have around 3.5 full-time-equivalent staff, consisting of several part-time employees and a diverse set of volunteers. Their co-founder and primary researcher, Prof. David Denkenberger, splits his time between ALLFED and teaching at the University of Alaska Fairbanks. Sonia Cassidy is in charge of operations, with a background in social enterprise and business continuity.

They have a limited budget, with a projected revenue of $215,000 in 2018 (including my donation). The most likely use of additional funding would be to hire additional junior employees/contractors, or to provide research grants to academics to work on related topics. They would like to hire several of their current volunteers as part-time contractors. David could also buy himself out of teaching, and so be able to dedicate almost all his time to ALLFED.

My perspective

ALLFED is a young organisation in an untested cause area. There are three main areas of uncertainty: (a) the likelihood of nuclear exchange and consequent nuclear winter; (b) the tractability of alternative foods; (c) the managerial capacity for ALLFED to scale as an organisation.

There are few publicly available quantitative models of the risk of nuclear war. Barrett et al (2013) is the most comprehensive study I’m aware of, although its estimates should likely be updated downwards due to the historical absence of nuclear war. It seems likely that a full-scale US-Russia nuclear exchange would trigger a nuclear winter, although this has not been adequately studied. Overall, I think the chance of nuclear winter happening is low but still high enough to be one of the top five global catastrophic risks. Although nuclear security as a whole attracts significant government attention and investment, nuclear winter mitigation is highly neglected, and I am not aware of any organisation besides ALLFED working on alternative foods.
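As a toy illustration of why the historical record pushes estimates downwards (this is not Barrett et al’s actual model, and the numbers are purely illustrative), Laplace’s rule of succession converts an observed track record into a posterior annual probability:

```python
def rule_of_succession(events, opportunities):
    """Laplace's rule: posterior mean probability of an event under a
    uniform prior, after observing `events` occurrences in
    `opportunities` independent chances for it to happen."""
    return (events + 1) / (opportunities + 2)

# Illustrative: ~73 years (1945-2018) with no full-scale nuclear exchange.
p_annual = rule_of_succession(events=0, opportunities=73)
```

This gives an annual probability of roughly 1.3%; each additional war-free year shrinks the estimate further. The independence and uniform-prior assumptions are both dubious here, which is why it serves only as a sanity check on more detailed models, not a substitute for them.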

Although it is hard for me to assess as an outsider, there appear to be a number of tractable research pathways to developing alternative foods. One of these is methane (natural gas) digesting bacteria, a technology being developed at an industrial scale as a feed source for animal agriculture by startups including Unibio and Calysta. ALLFED are currently experimenting with growing methane-digesting bacteria at the household scale. This would be most influential in a no-sun, no-industry scenario (note that gas will continue to bleed out of wells), but is also a hedge against a high cost of retrofitting existing chemical plants.

The outcome of these experiments will give considerable insight into ALLFED’s ability as an organisation to conduct novel research in this space. While I have been impressed by David’s other research output, he is neither an agronomist nor a biologist, and has a limited track record in this area. Finan Adamson, a recent hire who will be working on the methane-digestion experiment, does have a background in biology but has little research experience. Accordingly, I find it likely that ALLFED will initially fail to make progress in this area.

ALLFED is small, with the majority of their research still being conducted by David. As such, there are serious questions regarding their ability to recruit and manage new hires. Furthermore, many of the skills needed by ALLFED (such as generalist researchers and operations capacity) are in high demand at other EA organisations, raising a question as to the opportunity cost of any hires they make. ALLFED are, however, optimistic about being able to recruit from outside the EA talent pool.

I am of the opinion that there is in fact a large amount of untapped talent, both within the effective altruism community and amongst other researchers who would be interested in this space. Effective altruism is not so much talent constrained as organisation constrained: there need to be more environments where junior hires can develop their skills. David is happy to spend his time mentoring new researchers in this area, and so ALLFED is well placed to help fill this gap.

Possible downsides

ALLFED has a “two-pronged strategy”, performing R&D to improve long-term resilience, but also performing outreach now in case a disaster happens soon. Examples of their outreach include:

  • Talking to senior civil service officials.

  • Individual discussions with senior agronomics researchers, including exploring collaboration opportunities.

  • News coverage, e.g. this interview in Science.

I tend to think of outreach to government officials as being premature: at this time, ALLFED has little concrete advice to offer government officials or the broader public. ALLFED disagree with this, arguing that governments having any response plan to a 10% or 100% agricultural shortfall would substantially improve the outcome. In particular, ALLFED feel that at the 10% level there are a number of shovel-ready interventions, such as pre-commitments to trade between nations and utilising agricultural food residues and domestic food waste (via municipal collection) as food for ruminant digesters. Moreover, ALLFED hope that by raising awareness of alternative foods amongst policymakers, they may spur governments into funding research or conducting their own investigations into alternative foods. However, ALLFED agree there have been no tangible results from government outreach so far, although they feel the response of officials has been broadly positive.

I am also concerned about media coverage, as in general I think it is challenging to have a high-quality public discussion about low-probability, high-impact events such as global catastrophic risks, with coverage tending to be either alarmist or dismissive. In the case of alternative foods, I would be concerned about it becoming perceived by relevant fields as an agenda being pushed by outsiders. There is also a risk of becoming associated with fringe communities such as survivalists. Both of these could harm the long-run development of alternative foods, and possibly even hinder action on other global catastrophic risks. However, ALLFED has recently shown encouraging signs of integrating with the agronomy community: they now have a publication in Agriculture, and a researcher in sustainable agriculture, Dr Shackelford, has joined their board.

An additional risk is that development of alternative foods might cause moral hazard amongst nuclear decision makers. The threat of a nuclear winter had a large influence on the perception of nuclear war amongst the public and policy makers. Indeed, Gorbachev stated in a 2000 interview that “models made by Russian and American scientists showed that a nuclear war would result in a nuclear winter that would be extremely destructive to all life on Earth; the knowledge of that was a great stimulus to us, to people of honor and morality, to act in that situation.” This feels to me like a second-order consideration: even with alternative foods, a nuclear winter would still mean the destruction of the nations involved in the nuclear interchange, and the death of hundreds of millions. But it does provide an additional reason to clearly communicate both the capabilities and limitations of alternative foods to the public and policy makers.

Success metrics

I view ALLFED as being an exciting seed funding opportunity. However, I would like to see substantially more evidence of progress before considerable funding (beyond $1 million) is provided. In particular, I would want to see improvement on some (but not all) of the following metrics:

  • Progress on concrete alternative foods research: e.g. the methane-digesting bacteria project above. This work needn’t have a positive outcome: I’d be almost as excited about negative results if valuable lessons are learned from them.

  • Development of a concrete research agenda, reflecting what they believe are the most impactful and tractable experiments to conduct in the near future. This is in comparison to their existing work, which has focused on high-level cost-effectiveness analyses for alternative foods and long, superficially developed lists of possible interventions.

    I recommend this in particular since, despite reading their public material and having several discussions with ALLFED, I am still uncertain what their immediate next steps are. Creating a concrete research proposal would at the least make recruitment and fundraising easier, and might also help set an internal direction for the organisation. Of course, I would expect and encourage deviations from this proposal as ALLFED learns more about the area.

  • Recruitment or setting up collaborations with researchers from relevant backgrounds, e.g. in biology or agriculture.

I would also want to check that the following mistakes do not occur. To clarify, I do not expect ALLFED to make these mistakes; I merely believe these are the most salient risks for this cause area:

  • Low-quality output that could hamper development of the field, e.g. cost-effectiveness analyses with serious errors or research output that is viewed as low-quality by those in the relevant field.

  • Outreach (whether to policymakers, academics or the media) that leaves a bad impression of alternative foods or global catastrophic risks.

  • Recruitment or support of individuals who have a poor track record in the above areas.


I am somewhat more excited about ALLFED than GCRI since their research agenda seems more directly impactful and there is a clearer pathway for growth. However, I see more downside risks to ALLFED, and in particular would expect GCRI to be in a better position to work productively with governments. ALLFED has a large team of volunteers, which increases reputational risks. I view support for ALLFED at this stage as mostly a test of the tractability of R&D in this area, and a way to enable them to continue to build relevant collaborations.

Statement from ALLFED

David Denkenberger, the co-founder of ALLFED, provided the following comments on a draft of this report:

Thank you, Adam, for both the grant and your careful consideration (and our thanks also to everyone who took part in the EA Lottery in the first place). We appreciate the feedback and the process, as it has been useful to work on this over the course of several months. The discussions we have had have helped to crystallise ideas.

As “a young organisation in an untested cause area,” which we are, we also appreciate this opportunity to highlight the cause area itself as much as our particular organisation. It is our belief that the area of alternative foods development has not yet received nearly as much attention as it deserves, especially given its cost-effectiveness and its potential to contribute both to recovery from catastrophes and to many problems facing the world today.

The key challenges of researching, testing, and developing viable alternative food solutions, while also providing a solid organisational structure, are valid and something we remain conscious of. For this reason, from the beginning, we have been committed to steady, sustainable growth, which allows for more and more research as well as sufficient time to put a strong organisational framework in place (with a new internship programme the most recent addition to it).

We completely agree that media handling is a delicate matter at the best of times, but this is particularly so when it comes to sensitive and/or new subjects. At the same time, we consider it essential that any organisation, regardless of its field of work, be capable of handling such interest in an appropriate manner, especially if there is a high possibility of miscommunication and misinterpretation. As such, at ALLFED, we prefer to be able to engage with enquiries in a constructive manner, rather than not at all. We encourage others to contact us when in need of our assistance or expertise, particularly in the event of a catastrophe.

In terms of ALLFED’s research agenda, we have started scoping out the most promising experiments at various funding levels. On the lower end, there is approximately $50,000 for household-scale natural gas eating bacteria experiments. On the higher end, there is >$1 million for a flexible biorefinery design that could turn crop leaves into fuel or food, which would involve experts in biofuels, mushrooms, chickens, and ruminants. Projects such as these will be implemented according to the resources available. We have already established several potential partnerships around this, so people with transferable skills can be redirected towards this effort with more funding. We would welcome further enquiries and opportunities for collaboration.

Collaboration, hope, and care are at the heart of ALLFED’s vision and our work. We believe that with advanced planning, research, collaboration, and communications it is both plausible and possible to provide for all (or most) humans and to help preserve biodiversity in the event of a catastrophe. The EA Lottery grant is the next step towards this.

Global Catastrophic Risk Institute (GCRI)

GCRI is a research-oriented, decentralised think tank with 1.5 full-time-equivalent paid staff members and some voluntary research associates. Their only full-time employee is Seth Baum; there are two to three paid part-time employees, including their co-founder, Tony Barrett. GCRI’s work intentionally spans a wide range of catastrophic risks, but nuclear weapons and artificial intelligence are two recurring themes.

They currently have a very small budget, with a forecast revenue of $165k for 2018 (including my donation). This gives them limited runway: although there will be some money left over for 2019 (which can be extended by Seth taking a pay cut), it will not last more than a year. GCRI hope to raise $500k to cover salaries for their current employees for the next two years, with the remainder used to hire more support.

Many of the risks investigated by GCRI are neglected, especially within the effective altruism community. Although some risks (e.g. nuclear security) have attracted considerable attention from other communities, I believe it is still worthwhile engaging. For one, GCRI can provide a more long-term focus than other actors. Additionally, if GCRI does uncover promising interventions in this area, they are in a good position to communicate them to the effective altruism community, shaping future donations and work.

I have generally been impressed by Seth Baum’s research output. In particular, their model for the impact and probability of nuclear war is a substantial improvement on any publicly available models. The model itself is still fairly crude, but previous work had taken a qualitative approach and not moved far beyond case studies.

My main criticism is that GCRI’s output has in the past sometimes emphasised quantity over quality. In particular, I would have liked to see GCRI leadership provide greater mentorship and exercise more editorial control over the work of GCRI’s associates. Low-quality research can reflect badly both on GCRI and on the rest of the field. However, with the recent overhaul of GCRI’s affiliates program, I expect this aspect to improve. Seth indicated that their uncertain funding situation has pushed them towards keeping a steady level of research output, rather than letting ideas mature, so it is also possible that having a greater runway would alleviate this issue.

It is also unclear to me whether GCRI should continue to exist as an independent organisation, as opposed to Seth joining an organisation such as CSER or FHI pursuing a similar agenda. Joining a larger organisation could provide more opportunities for research collaboration, and greater operations support. However, Seth believes that relocating would hamper his interaction with US policy communities.

Overall, I am moderately excited about supporting the work of GCRI and in particular Seth Baum. I am pessimistic about room for growth, with recruitment being a major challenge, similar to that faced by AI Impacts. However, Seth is significantly more optimistic than me about the ease of recruitment, and believes that GCRI’s distributed nature will allow them to hire people who are unwilling to relocate to work at another organisation doing similar work. He notes that recruitment has never been a focus, since they have not had the funding to grow.

At their current budget level, additional funding is a factor in whether Seth continues to work at GCRI full-time. Accordingly, I would recommend donations sufficient to ensure Seth can continue his work. I would encourage donors to consider funding GCRI to scale beyond this, but to first obtain more information regarding their long-term plans and recruitment strategy.

AI Impacts

AI Impacts focuses on an important research area, AI forecasting, that could substantially influence the research agendas of technical AI safety researchers and the focus of AI policy. They currently have 2 FTE employees. They have a small budget, with reserves of $70k (as of July 2018) and a forecast revenue of $480k for 2018. Their forecast revenue for 2019 is considerably lower, as $350k of the revenue in 2018 was intended to be used over two years.

The area seems moderately tractable. I broadly agree with the assessment of their founder, Katja Grace, that there are a number of tractable low-level questions (e.g. what is the length of axons in the brain?), but it is unclear how much light they shed on medium-level (e.g. how important is communication vs compute?) and high-level (e.g. how much parallelism can distributed training exploit?) questions.

I have found Katja’s output in the past to be insightful, so I am excited about ensuring she remains funded. Tegan has less of a track record, but based on her output so far I believe she is also worth funding. However, I believe AI Impacts has adequate funding for both of their current employees. Additional contributions would therefore do some combination of increasing their runway and supporting new hires.

I am pessimistic about AI Impacts’ room for growth. This is primarily because I view recruitment in this area as difficult. The ideal candidate would be a cross between an OpenPhil research analyst and a technical AI or strategy researcher. This is a rare skill set with high opportunity cost. Moreover, AI Impacts has had issues with employee retention, with many individuals who previously worked there leaving for other organisations.

I should note that Katja was substantially more optimistic than me about recruitment. My view (mentioned previously in the evaluation of ALLFED) that the space is more “organisation constrained” than “talent constrained” also suggests recruitment may be tractable, conditional on AI Impacts being willing to dedicate significant time to recruiting and mentoring employees.

Given their small size, it’s unclear to me whether AI Impacts should continue to exist as a separate organisation (similar to the consideration for GCRI). Their work would be a good fit for FHI, although location preferences are likely to rule out any such merger.

Overall, the thing holding me back from a stronger endorsement is recruitment. It’s worth checking in to see what hires they are currently considering: I’d want to support AI Impacts if there was someone they were excited about but lacked the funding to hire. It might also be worth funding them so they have cash on the sidelines in case the right person comes along.

Conflicts of interest: Katja Grace, the founder of AI Impacts, lives in the same house as me. She did not live there at the time the decision was made to recommend this grant.

Wild Animal Suffering Research (WASR)

Wild Animal Suffering Research (WASR) is a non-profit with 2 FTE (1 full-time and 2 part-time) employees, seeking to bootstrap research into wild animal suffering. Around 1.5 FTE is spent on in-house research, on both fundamental topics, such as welfare evaluation, and possible interventions. The remainder of the time is spent on management and on outreach to academics in related fields.

The primary audience for their research output is EAs and animal advocates, and it is written with the goal of better guiding their own next steps. I have found their research a useful resource for better understanding wild animal suffering. It tends towards distilling existing research, rather than conducting primary research. They may try to get some of their output published in peer-reviewed venues, but this is not a key goal.

WASR have conducted outreach to academics, and plan to soon offer grants to a small number of them. They have created a database of faculty members working in relevant fields, and e-mailed a shortlist. When I spoke to them (in early June), they had contacted 34 academics and heard back from 12. WASR will need substantially more funding to be able to provide grants to more than one or two academics, with $50,000 being the minimum grant size most academics would find useful. They have also recently run a small grants competition aimed at independent researchers.

WASR have clearly spent significant time considering strategy, and have been commendably transparent, publishing their strategic and research plans. I broadly support their strategy, although I would favour two modifications. First, I believe more care should be taken during outreach at this early stage. It is hard to know which framing will best engage academics and other stakeholders. If a bad first impression is made, it may be hard to recover, potentially hampering field-building efforts for WASR and other organisations in the future. WASR are aware of this and are confident they have left a positive or neutral impression. In particular, they have not received any direct negative feedback from individuals they have contacted. However, we do not know what impression the majority of academics who did not respond formed.

My second recommendation would be to prioritise supporting individuals who can champion this field of research from within existing academic or other research institutes. It is challenging to seed a field as an outsider, and in cases such as cryonics and nanotechnology this has backfired. I am uncertain of this recommendation, however. In particular, philanthropists have a better track record of seeding fields, so if WASR were to become a major grant-maker in this space, this could be a viable (albeit expensive) method of field-building.

Additionally, WASR argued that PhD students in relevant fields (such as zoology or bioethics) have limited autonomy, so a junior researcher might have little ability to work on relevant topics. In particular, they believe that merely providing funding for PhD students is unlikely to be sufficient, unless there is buy-in from senior researchers. Accordingly, they favour a more long-term, relationship-building approach. This seems plausible to me, but my hunch is that a PhD student inside academia might still be better placed to build these relationships than an external organisation. My intuition here is heavily influenced by the norms in my own research field, computer science, where PhD students can have substantial academic freedom if they have an external fellowship and an open-minded adviser. This may be less true in other fields.

Overall, I think WASR is a well-run organisation with a clear strategy and a short but encouraging track record. I would encourage those with a near-term, animal-welfare-centric worldview to support them. Under my own worldview, I did not find them competitive with the other organisations, and so recommended a small grant of $5,000.