Effective Altruism Foundation: Plans for 2020

Summary

  • Our mission. We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks).

  • Our plans for 2020

    • Research. We aim to investigate the questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence” and other areas.

    • Research community. We plan to host research workshops, make grants to support work relevant to our priorities, present our work to other research groups, and advise people who are interested in reducing s-risks in their careers and research priorities.

    • Rebranding. We plan to rebrand from “Effective Altruism Foundation” to a name that better fits our new strategy.

  • 2019 review

    • Research. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems.

    • Research workshops. We ran research workshops on s-risks from AI in Berlin, the San Francisco Bay Area, and near London. The participants gave positive feedback.

    • Location. We moved to London (Primrose Hill) to better attract and retain staff and to collaborate with other researchers in London and Oxford.

  • Fundraising target. We aim to raise $185,000 (stretch goal: $700,000). If you prioritize reducing s-risks, there is a strong case for supporting us. Make a donation.

About us

We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks). (Read more about us and our values.)

We are a London-based nonprofit. Previously, we were located in Switzerland (Basel) and Germany (Berlin). Before shifting our focus to s-risks from artificial intelligence (AI), we implemented projects in global health and development, farm animal welfare, wild animal welfare, and effective altruism (EA) community building and fundraising.

Background on our strategy

For an overview of our strategic thinking, see the following pieces:

The best work on reducing s-risks cuts across a broad range of academic disciplines and interventions. Our recent research agenda, for instance, draws from computer science, economics, political science, and philosophy. That means we must (a) work in many different disciplines and (b) find people who can bridge disciplinary boundaries. The longtermism community brings together people with diverse backgrounds who understand our prioritization and share it to some extent. For this reason, we focus on making reducing s-risks a well-established priority in that community.

Strategic goals

Inspired by GiveWell’s self-evaluations, we are tracking our progress with a set of deliberately vague performance questions:

  1. Building long-term capacity. Have we made progress towards becoming a research group that will have an outsized impact on the research landscape and relevant actors shaping the future?

  2. Research progress. Has our work resulted in research progress that helps reduce s-risks (both in-house and elsewhere)?

  3. Research dissemination. Have we communicated our research to our target audience, and has the target audience engaged with our ideas?

  4. Organizational health. Are we a healthy organization with an effective board, staff in appropriate roles, appropriate evaluation of our work, reliable policies and procedures, adequate financial reserves and reporting, and so forth?

Our team will answer these questions at the end of 2020.

Plans for 2020

Research

Note: We currently carry out some of our research as part of the Foundational Research Institute (FRI). We plan to consolidate our activities related to s-risks under one brand and website in early 2020.

We aim to investigate research questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence.” We explain our focus on cooperation and conflict in the preface:

“S-risks might arise by malevolence, by accident, or in the course of conflict. (…) We believe that s-risks arising from conflict are among the most important, tractable, and neglected of these. In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering. Strategic threats have historically been a source of significant danger to civilization (the Cold War being a prime example). And the potential downsides from such threats, including those involving large amounts of suffering, may increase significantly with the emergence of transformative AI systems.”

Topics covered by our research agenda include:

  • AI strategy and governance. What does the strategic landscape at the time of transformative AI (TAI) development look like? For example, will it be unipolar or multipolar, and how will offensive and defensive capabilities scale? What does this imply for cooperation failures? How can we shape the governance of AI to reduce the chances of catastrophic cooperation failures?

  • Credibility. What might the nature of credible commitment among TAI systems look like, and what are the implications for improving cooperation? Can we develop new theories (e.g., program equilibrium) to account for relevant features of AI?

  • Peaceful bargaining mechanisms. Can we further develop bargaining mechanisms that do not lead to destructive conflict (e.g., by implementing surrogate goals)?

  • Contemporary AI architectures. How can we make progress on reducing cooperation failures using contemporary AI tools (e.g., learning to solve social dilemmas among deep reinforcement learners)? A minimal illustrative sketch of such a social dilemma follows this list.

  • Humans in the loop. How do we expect human overseers or operators of AI systems to behave in interactions between humans and AI systems?

  • Foundations of rational agency, including bounded decision-making and acausal reasoning.
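
To make the “contemporary AI architectures” question above more concrete, the sketch below sets up the simplest kind of social dilemma: a repeated prisoner’s dilemma played by two independent, stateless learners. It is purely illustrative and not drawn from our research agenda or any codebase of ours; the payoff matrix, the epsilon-greedy learning rule, and all parameter values are standard textbook choices assumed here only for exposition. Independent learners in this setting typically converge on mutual defection, which is exactly the kind of cooperation failure the agenda asks how to prevent.

```python
import random

# Illustrative only: a two-player prisoner's dilemma, the simplest "social dilemma".
# Payoffs are (row player, column player); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]


class IndependentLearner:
    """A stateless epsilon-greedy learner that tracks an action-value estimate."""

    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration rate

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Incremental update toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])


def run(episodes=5000, seed=0):
    random.seed(seed)
    p1, p2 = IndependentLearner(), IndependentLearner()
    outcomes = {"CC": 0, "CD": 0, "DC": 0, "DD": 0}
    for _ in range(episodes):
        a1, a2 = p1.act(), p2.act()
        r1, r2 = PAYOFFS[(a1, a2)]
        p1.update(a1, r1)
        p2.update(a2, r2)
        outcomes[a1 + a2] += 1
    return outcomes


if __name__ == "__main__":
    # Independent learners usually end up mostly in mutual defection (DD),
    # even though mutual cooperation (CC) would give both a higher payoff.
    print(run())
```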

We did not list some topics in the research agenda because they did not fit its scope, but we consider them very important:

  • macrostrategy research on questions related to s-risk,

  • nontechnical work on strategic threats,

  • reducing the likelihood of s-risks from hatred, sadism, and other kinds of malevolence,

  • research on whether and how we should advocate rights for (sentient) digital minds,

  • reducing potential risks from genetic enhancement (especially in the context of TAI development),

  • AI strategy topics not captured by the research agenda (e.g., near misses),

  • AI governance topics not captured by the research agenda (e.g., the governance of digital minds),

  • foundational questions relevant to s-risk (e.g., metaethics, population ethics, and the feasibility and moral relevance of artificial consciousness), and

  • other potentially relevant areas (e.g., great power conflict, space governance, or promoting cooperation).

In practice, our publications and grants will be determined to a large extent by the ideas and motivation of the researchers. We understand the above list of topics as a menu for researchers to choose from, and we expect that our actual work will only cover a small portion of the relevant issues. We hope to collaborate with other AI safety research groups on some of these topics.

We are looking to grow our research team, so we would be excited to hear from you if you think you might be a good fit! We are also considering running a hiring round based on our research agenda as well as a summer research fellowship.

Research community

We aim to develop a global research community, promoting regular exchange and coordination between researchers whose work contributes to reducing s-risks.

  • Research workshops. Our previous workshops were attended by researchers from major AI labs and academic research groups. They resulted in several researchers becoming more involved with research relevant to s-risks. We plan to continue to host research workshops near London and in the San Francisco Bay Area. In addition, we might host seminars at other research groups and explore the idea of hosting a retreat on moral reflection.

  • Research agenda dissemination. We plan to reach out proactively to researchers who may be interested in working on our agenda. We plan to present the agenda at several research organizations, on podcasts, and at EA Global San Francisco. We may also publish a complementary overview of research questions focused on macrostrategy and s-risks from causes other than conflict involving AI systems.

  • Grantmaking. We will continue to support work relevant to reducing s-risks through the EAF Fund. We plan to run at least one open grant application round. If we have sufficient capacity, we plan to explore more active forms of grantmaking, such as reaching out to academic researchers, laying the groundwork for setting up an academic research institute, or working closely with individuals who could launch valuable projects.

  • Community coordination. We see substantial benefits from bringing the existential-risk-oriented (x-risk-oriented) and s-risk-oriented parts of the longtermism community closer together. We believe that concern for s-risks should be a core component of longtermist EA, so we will continue to encourage x-risk-oriented groups and authors to consider s-risks in their key content and thinking. We will also continue to suggest to suffering-focused EAs that they consider potential risks to people with other value systems in their publications (see below). We plan to reassess at the end of 2020 to what extent EAF should continue to have a coordinating role in the longtermist EA community.

  • Advising and in-person exchange. In the past, in-person exchange has been an important step for helping community members better understand our priorities and become more involved with our work. We will continue to advise people who are interested in reducing s-risks in their careers and research priorities. Next year, we might experiment with regular meetups and co-working at our offices.

Other activities

  • Raising for Effective Giving (REG). We will continue to fundraise from professional poker players for EA charities, including a significant percentage for longtermist organizations. Because fundraising for others does not directly contribute to our main priorities, and it is difficult to scale REG further, we plan to maintain REG but not expand it.

  • Regranting. We currently enable German, Swiss, and Dutch donors to deduct their donations from their taxes when giving to EA charities around the world, leading to around $400,000 in additional counterfactual donations per year. Because this project does not further our main strategic goals, we are exploring ways of handing it over to a successor who can further improve our current service.

Organizational opportunities and challenges

  • Rebranding. We will likely rebrand the Foundational Research Institute (FRI) and stop using the Effective Altruism Foundation (EAF) brand (except as the name of our legal entities). We expect to announce our new brand in January. We are making this change for the following reasons:

    • we perceive the FRI brand as too grandiose and confusing given the scope and nature of our research, and have received unprompted negative feedback to this effect;

    • we do not want to use the EAF brand because it does not describe our activities well and is easily confused with the Centre for Effective Altruism (CEA), especially after our move to the UK.

  • Research office. We expect some of our remote researchers to join us at our offices in London sometime next year. We also hope to hire more researchers.

  • Lead researcher. Our research team currently lacks a lead researcher with academic experience and management skills. We hope that Jesse Clifton will take on this role in mid-2020.

Review of 2019

Research

S-risks from conflict. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems:

We also circulated nine internal articles and working papers with the participants of our research workshops.

Foundational work on decision theory. This work might be relevant in the context of acausal interactions (see the last section of the research agenda):

Miscellaneous publications:

Research community

  • Research workshops. We ran three research workshops on s-risks from AI. They improved our prioritization, helped us develop our research agenda, and informed the future work of some participants:

    • “S-risk research workshop,” Berlin, 2 days, March 2019, with junior researchers.

    • “Preventing disvalue from AI,” San Francisco Bay Area, 2.5 days, May 2019, with 21 AI safety and AI strategy researchers from leading institutes and AI labs (including DeepMind, OpenAI, MIRI, FHI). Participants rated the content at 4.3 out of 5 and the logistics at 4.5 out of 5 (weighted average). They said attending the event was about 4x as valuable as what they would have been doing otherwise (weighted geometric mean; see the aggregation sketch after this list).

    • “S-risk research workshop,” near London, 3 days, November 2019, with a mixture of junior and more experienced researchers.

    • We have developed the capacity to host research workshops of consistently good quality.

  • Grantmaking through the EAF Fund. We ran our first application round and made six grants worth $221,306 in total. Another $600,000 remains in the fund that we have not yet been able to disburse (in part because we had planned to hire a Research Analyst for our grantmaking but were unable to fill the position).

  • Community coordination. We worked to bring the x-risk-oriented and s-risk-oriented parts of the longtermism community closer together. We believe this will result in synergies in AI safety and AI governance research and policy and perhaps also in macrostrategy research and broad longtermist interventions.

    • Background. Until 2018, there had been little collaboration between the x-risk-oriented and s-risk-oriented parts of the longtermism community, despite the overlap in philosophical views and cause areas (especially AI risk). For this reason, our work on s-risks received less engagement than it could have. Over the past four years, we worked hard to bridge this divide. For instance, we repeatedly sought feedback from other community members. In response to that feedback, we decided to focus less on public moral advocacy and more on research on reducing s-risks (which we consider more pressing anyway) and encouraged other s-risk-oriented community members to do so as well. We also visited other research groups to increase their engagement with our work.

    • Communication guidelines. This year, we further expanded these efforts. We worked with Nick Beckstead, then Program Officer for effective altruism at the Open Philanthropy Project, to develop a set of communication guidelines for discussing astronomical stakes:

      • Nick’s guidelines recommend highlighting beliefs and priorities that are important to the s-risk-oriented community. We are excited about these guidelines because we expect them to result in more contributions by outside experts to our research (at our workshops and on an ongoing basis) and a better representation of s-risks in the most popular EA content (see, e.g., the 80,000 Hours job board and previous edits to “The Long-Term Future”).

      • EAF’s guidelines recommend communicating in a more nuanced manner about pessimistic views of the long-term future: considering highlighting moral cooperation and uncertainty, focusing more on practical questions where possible, and anticipating potential misunderstandings and misrepresentations. We see it as our responsibility to ensure that those who come to prioritize s-risks based on our writings also share our cooperative approach and commitment against violence. We expect the guidelines to reduce the risk that they do not, and to result in increased interest in s-risks from major funders (including the Open Philanthropy Project grant; see below). We expect both sets of guidelines to contribute to a more balanced discussion about the long-term future.

      • Nick put in a substantial effort to ensure his guidelines are read and endorsed by large parts of the community. Similarly, we reached out to the most active authors and sent our guidelines to them. Some community members suggested that these guidelines should be transparent to the community; we agree and are therefore sharing them publicly.

    • Longer-term plans. We believe that these activities are only the beginning of longer and deeper collaborations. We plan to reassess the costs and benefits at the end of 2020.

  • Research community.

    • We advised 13 potential researchers and professionals interested in s-risks on their careers.

    • We sent out our first research newsletter to about 70 researchers.

    • We started providing scholarships and more systematic operations support for researchers.

    • We improved our online communication platform for researchers (a Slack workspace with several channels) and have received positive feedback on the discussion quality.

  • Research management. We published a report on disruptive research groups. The main lessons for us were: (1) we should seriously consider how to address our lack of research leadership, and (2) we should improve the physical proximity of our research staff.
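
The workshop feedback above reports a “weighted average” of ratings and a “weighted geometric mean” of how much more valuable participants found the event than their counterfactual activity. For readers unfamiliar with the latter, here is a minimal sketch of how such aggregates can be computed. The responses and weights below are hypothetical placeholders, not our actual survey data, and the weighting scheme is only an assumption for illustration.

```python
import math

# Hypothetical example responses; NOT the actual workshop survey data.
# Each tuple is (response, weight); a weight might reflect, e.g., how much
# of the workshop a participant attended.
content_ratings = [(5, 1.0), (4, 0.5), (4, 1.0)]         # ratings out of 5
value_multipliers = [(10, 1.0), (2, 0.5), (3, 1.0)]      # "N times as valuable"


def weighted_average(pairs):
    total_weight = sum(w for _, w in pairs)
    return sum(x * w for x, w in pairs) / total_weight


def weighted_geometric_mean(pairs):
    # exp of the weighted average of logs; appropriate for multiplicative
    # quantities such as "N times as valuable as the counterfactual".
    total_weight = sum(w for _, w in pairs)
    return math.exp(sum(w * math.log(x) for x, w in pairs) / total_weight)


print(round(weighted_average(content_ratings), 1))            # 4.4
print(round(weighted_geometric_mean(value_multipliers), 1))   # 4.5
```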

Organizational updates

  • We moved to London. We relocated our headquarters from Berlin to London because this allows us to better attract and retain staff and to collaborate with other researchers and EA organizations in London and Oxford. Our team of six will work from our offices in Primrose Hill, London.

  • Hiring. We have hired Jesse Clifton to join our research team part-time. Jesse is pursuing a PhD in statistics at NCSU and is the primary author of our technical research agenda.

  • Open Philanthropy Project grant. The Open Philanthropy Project awarded us a $1 million grant over two years to support our research, general operations, and grantmaking.

  • Strategic clarity. At the end of 2018, we were still substantially uncertain about the strategic goals of our organization. We have since refined our mission and strategy and have overhauled our website accordingly.

Other activities

  • We doubled Zurich’s development cooperation and made it more effective. Thanks to a ballot initiative launched by EAF, the city of Zurich’s development cooperation budget is increasing from $3 million to $8 million per year and will be allocated “based on the available scientific research on effectiveness and cost-effectiveness.” This appears to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements. See EA Forum article: EAF’s ballot initiative doubled Zurich’s development aid.

  • Fundraising from professional poker players (Raising for Effective Giving). In 2018, we raised $5,160,435 for high-impact charities to which the poker players would otherwise not have donated (mainly thanks to our fundraising efforts in previous years). After subtracting expenses and opportunity costs, the net impact was $4,941,930. About 34% of the total went to longtermist charities. We expect almost equally good results in 2019. We dropped previous plans to reach out to wealthy individuals and provide them with philanthropic advice.

  • Tax deductibility for German, Swiss, and Dutch donors. We regranted $2,494,210 in tax-deductible donations to other high-impact charities, leading to an estimated $400,000 in contributions that the donors would not have made otherwise. Accounting for expenses and opportunity costs, the net impact was small ($57,851), though this ignores benefits from getting donors involved with EA. We expect similar results in 2019. We are exploring ways of handing this project over to a successor.

  • In January, Wild-Animal Suffering Research merged with Utility Farm to form the Wild Animal Initiative. As part of this process, this project became fully independent from us. We wish them all the best with their efforts!

  • Swiss ballot initiative for a ban on factory farming. Sentience Politics, a spin-off of ours, successfully collected the 100,000 signatures required to launch a binding ballot initiative in Switzerland. The initiative demands a ban on the production and import of animal products that do not meet current organic meat production standards. We expect the initiative to come to the ballot in 2023. Surveys suggest that the initiative has a nonnegligible chance (perhaps 1–10%) of passing. Much of the groundwork for the initiative was laid at a time when Sentience Politics was still part of EAF.

Mistakes and lessons learned

  • Research output. While we were satisfied with our internal drafts, we fell short of our goals to produce written research output (for publication, or at least for sharing with peers).

  • Handing over community building in Germany. As planned, we handed off our community-building work in the German-speaking area to CEA and EA local groups. In August, we realized that we could have done more to ensure a smooth transition for the national-level coordination of the community in Germany. As a result, we dedicated some additional resources to this in the second half of this year and improved our general heuristics for handing over projects to successors.

  • Feedback and transparency for our communication guidelines. We did not seek feedback on the guidelines as systematically as we now think we should have. As a result, some people in our network were dissatisfied with the outcome. Moreover, while we were planning to give a general update on our efforts in our end-of-year update, we now believe it would have been worth the time to publish the full guidelines sooner.

  • Hiring. We planned to hire a Research Analyst for grantmaking and an Operations Analyst and made two job offers. One offer was not accepted; the other did not work out during the first few months of employment. In hindsight, it might have been better to hire even more slowly and to develop a better understanding of the roles we were hiring for. Doing so would have allowed us to make a more convincing case for the positions and hire from a larger pool of candidates.

  • Anticipating implications of strategic changes. When we decided to shift our strategic focus towards research on s-risks, we were insufficiently aware of how this would change everyone’s daily work and responsibilities. We now think we could have anticipated these changes more proactively and taken measures to make the transition easier for our staff.

  • Strategic planning procedure. Due to repeated organizational changes over the past years, we had not developed a reliable annual strategic planning routine. This year, we did not recognize early enough how important building such a process is. We plan to prioritize this in 2020.

  • Communicating our move to London. We did not communicate our decision to relocate from Berlin to London very carefully in some instances. As a result, we received some negative feedback from people who did not support our decision and were under the impression we had not thought carefully about it. We invested some time to provide more background on our reasoning.

Financials

  • Budget 2020: $994,000 (7.4 expected full-time equivalent employees). Our per-staff expenses have increased compared with 2019 because we no longer have access to free office space and the cost of living in London is significantly higher than in Berlin.

  • EAF reserves as of early November: $1,305,000 (corresponds to 15 months of expenses; excluding the EAF Fund balance).

  • EAF Fund balance as of mid-December: $600,000.

  • Room for more funding: $185,000 (to attain 18 months of reserves); stretch goal: $700,000 (to attain 24 months of reserves). A rough reconstruction of these targets is sketched after this list.

  • We invest funds that we are unlikely to deploy soon in the global stock market, as per our investment policy.
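
As a rough cross-check on the fundraising targets above, the sketch below reconstructs them from the 2020 budget and current reserves. It assumes that monthly expenses are simply the 2020 budget divided by twelve and that the published targets are rounded; it is an approximation for readers, not our internal financial model.

```python
# Rough reconstruction of the fundraising targets from the figures above.
# Assumes monthly expenses = 2020 budget / 12; an approximation only.
budget_2020 = 994_000
reserves = 1_305_000  # early November, excluding the EAF Fund balance

monthly_expenses = budget_2020 / 12                  # ~82,800 per month
print(round(reserves / monthly_expenses, 1))         # ~15.8 months at this assumed burn rate

target_18_months = 18 * monthly_expenses - reserves
target_24_months = 24 * monthly_expenses - reserves
print(round(target_18_months))                       # ~186,000 -> the $185,000 goal
print(round(target_24_months))                       # ~683,000 -> the $700,000 stretch goal
```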

How to contribute

  • Stay up to date. Subscribe to our supporter updates and follow our Facebook page.

  • Work with us. We are always hiring researchers and might also hire for new positions in research operations and management. If you are interested, we would be very excited to hear from you!

  • Get career advice. If you are interested in our priorities, we are happy to discuss your career plans with you. Schedule a call now.

  • Engage with our research. If you are interested in discussing our research with our team and giving feedback on internal drafts, please reach out to Stefan Torges.

  • Make a donation. We aim to raise $185,000 (stretch goal: $700,000) for EAF. (We can set up a donor-advised fund (DAF) for value-aligned donors who give at least $100,000 over two years.)

Recommendation for donors

We think it makes sense for donors to support us if:

  1. you believe we should prioritize interventions that affect the long-term future positively,

  2. (a) you assign significant credence to some form of suffering-focused ethics, (b) you think s-risks are not unlikely compared to very positive future scenarios, and/or (c) you think work on s-risks is particularly neglected and reasonably tractable, and

  3. you assign significant credence to our prioritization and strategy being sound, i.e., you consider our work on AI and/or non-AI priorities sufficiently pressing (e.g., you assign a nontrivial probability (at least 5–10%) to the development of transformative AI within the next 20 years).

For donors who do not agree with these points, we recommend giving to the donor lottery (or the EA Funds). We recommend that donors who are interested in the EAF Fund support EAF instead because the EAF Fund has a limited capacity to absorb further funding.

Would you like to support us? Make a donation.

We are interested in your feedback

If you have any questions or comments, we look forward to hearing from you; you can also send us feedback anonymously. We greatly appreciate any thoughts that could help us improve our work. Thank you!

Acknowledgments

I would like to thank Tobias Baumann, Max Daniel, Ruairi Donnelly, Lukas Gloor, Chi Nguyen, and Stefan Torges for giving feedback on this article.