Effective Altruism Foundation: Plans for 2019

By Stefan Torges and Jonas Vollmer

Summary

  • Research: We plan to continue our research in the areas of AI-related decision theory and bargaining, fail-safe measures, and macrostrategy.

  • Research coordination: We plan to host a research workshop focused on preventing disvalue from AI, publish an updated research agenda, and continue our support and coaching of independent researchers and organizations.

  • Grantmaking: We plan to grow our grantmaking capacity by expanding our team with a dedicated grantmaking researcher.

  • Fundraising for other charities: We will continue to fundraise several million dollars per year for effective charities, but expanding these activities will not be a priority for us next year.

  • Handing off community building: We will transfer most of our community-building work in the German-speaking area to CEA, LEAN, and EA local groups.

  • Fundraising target: We aim to raise $400,000 by the end of 2018. If you prioritize reducing s-risks, there is a strong case for supporting us. Make a donation.

Table of contents

  • About the Effective Altruism Foundation (EAF)

  • Plans for 2019

    • Research (Foundational Research Institute – FRI)

    • Research coordination

    • Grantmaking

    • Other activities

  • Financials

  • When does it make sense to support our work?

  • Brief review of 2018

    • Organizational updates

    • Achievements

    • Mistakes

    • We are interested in your feedback

About the Effective Altruism Foundation (EAF)

We conduct and coordinate research on how to do the most good in terms of reducing suffering, and support work that contributes towards this goal. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization. We currently focus on efforts to reduce the worst risks of astronomical suffering (s-risks) from advanced artificial intelligence. (More about our mission and priorities.)

Plans for 2019

Research (Foundational Research Institute – FRI)

We plan to continue our research in the areas of AI-related decision theory and bargaining (e.g., implied decision theories of different AI architectures), fail-safe measures (e.g., surrogate goals), and macrostrategy. We would like to make progress in the following areas in particular:

  • Preventing conflicts involving AI systems. We want to learn more about how decision theory research and AI governance can contribute to more cooperative AI outcomes.

  • AI alignment approaches. We want to better understand various AI alignment approaches, how they affect the likelihood and scope of different failure modes, and what we can do to make them more robust.

We are looking to grow our research team in 2019, so we would be excited to hear from you if you think you might be a good fit!

Research coordination

Academic institutions, the AI industry, and other EA organizations frequently provide excellent environments for research in the areas mentioned above. Since EAF currently cannot provide such an environment, we aim to act as a global research network, promoting the regular exchange and coordination between researchers whose work contributes to reducing s-risks.

  • EAF research retreat: preventing disvalue from AI. As a first step, we will host an AI alignment research workshop with a focus on s-risks from AI and seek out feedback on our research from domain experts.

  • Research agenda. We plan to publish a research agenda to facilitate more independent research.

  • Advice. We will continue to offer support and advice, in particular with career planning, to individuals dedicated to contributing to our research priorities, whether at EAF or at organizations and institutions working in high-priority areas.

  • Operations support. We will experiment with providing operational support to individual researchers. This may be a particularly cost-effective way to increase research capacity.

The value of these activities depends to some extent on how many independent researchers are qualified and motivated to work on questions we would like to see progress on. We will reassess after 6 and 12 months.

Grantmaking

The EAF Fund will support individuals (students, academics, and independent researchers) and organizations (research institutes and charities) to carry out research in the areas of decision theory and bargaining, AI alignment and fail-safe architectures, macrostrategy research, and AI governance. To identify promising funding opportunities, we will expand our research team with a dedicated grantmaking researcher, invest more research hours from existing staff, and try various funding mechanisms (e.g., requests for proposals, prizes, teaching buy-outs, and scholarships).

We plan to grow the amount of available funding by providing high-fidelity philanthropic advice, i.e., formats which allow for sustained engagement (e.g., 1-on-1 advice, workshops), and investing more time into making our research accessible to non-experts.

We are uncertain how many opportunities there are, outside of our own organization, for enabling the kind of work we would like to see. Depending on what we find, we will decide whether to expand our efforts in this area further.

Other activities

  • Fundraising for EA organizations. Through Raising for Effective Giving, we will continue to fundraise several million dollars per year for EA charities. We will also continue to enable German, Swiss, and Dutch donors to deduct their donations from their taxes when giving to EA charities around the world, enabling $600,000–$1.4 million in tax savings for EA donors in 2019. However, expanding these activities will not be a priority for us next year.

  • Community building. In 2018 we published a guide on how to run local groups and hosted a local group retreat. We now want to focus more on our core priorities, which is why we are in the process of transferring our community-building activities to CEA, LEAN, and EA local groups.

  • 1% initiative. We will conclude our ballot initiative in Zurich, which calls for a percentage of the city's budget to be allocated to effective global development charities. We expect a favorable outcome.

Financials

  • Budget 2019: $925,000 (12 expected full-time equivalent employees)

  • Current reserves: $1,400,000 (18 months)

  • Room for more funding: $400,000 (to attain 24 months of reserves; see the rough arithmetic after this list)

  • Additional donations would allow us to further build up our grantmaking capacity, our in-house research team, and operations support for individual researchers.

  • If reserves exceed 24 months at the end of a quarter, we will very strongly consider allocating the surplus to the EAF Fund.
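For readers who want to check how these figures fit together, here is a rough reconstruction, assuming the 2019 budget approximates our ongoing annual spending:

\[
\frac{\$925{,}000}{12} \approx \$77{,}000\ \text{per month}, \qquad \frac{\$1{,}400{,}000}{\$77{,}000/\text{month}} \approx 18\ \text{months of reserves}
\]

At that spending level, 24 months of reserves correspond to roughly $1.85 million, i.e., about $450,000 on top of current reserves; the stated $400,000 room for more funding is in the same ballpark, with the difference reflecting rounding and income we already expect.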

When does it make sense to support our work?

Our funding situation has improved a lot compared to previous years. For donors who are on the fence about which cause or organization to support, this is a reason to donate elsewhere this year. However, we rely on a very small number of donors for 80% of our funding, so we are looking to diversify our support base.

If you subscribe to some form of suffering-focused ethics and want to focus on ways to improve the long-term future, we think supporting our work is the best bet for achieving that, as we outline in our donation recommendations.

It may also make sense to support our work if (1) you think suffering risks are particularly neglected in EA given their expected tractability, or (2) you are unusually pessimistic about the quality of the future. We think (1) is the stronger reason.

Note: It is no longer possible to earmark donations for specific projects or purposes within EAF (e.g., REG or FRI). All donations will by default contribute to the entire body of work we have outlined in this post. We might make individual exceptions for large donations.

Would you like to support us? Make a donation.

Brief review of 2018

Organizational updates

  • We hired Anni Leskelä to join our research team.

  • We ran a hiring round for two positions, one of which we have already been able to fill.

  • Stefan Torges was named Co-Executive Director, alongside Jonas Vollmer.

  • We moved into shared offices with the Benckiser Stiftung Zukunft, a major German foundation, in Berlin.

  • We instituted an academic advisory board.

Achievements

Mistakes

  • Research. We did not sufficiently prioritize consolidating internal research and publishing a research agenda. Doing so would likely have led to better coordination of both internal and external research. We plan to address these problems in 2019.

  • Strategy. We realized that our previous fundraising approach was ill-suited to financing the opportunities we now consider most impactful (e.g., individual scholars, research groups, fairly unknown organizations), but we were too slow to change our strategy in response and invested insufficient resources in strategic planning. As a result, we allocated staff time suboptimally.

  • Fundraising. We had several promising initial conversations with major philanthropists. However, with a few exceptions, we have so far been unable to capitalize on these opportunities. While we do not think we made significant mistakes, it seems plausible that we could have taken better care of these relationships.

  • Organizational health, diversity, and inclusion. We did not sufficiently prioritize diversity and professional development. We have now appointed Alfredo Parra as Equal Opportunity & Organizational Health Officer and implemented best practices during our hiring round as well as for the organization in general. We plan to make further progress in these domains in 2019.

  • Operations. We were too slow to address a lack of operations staff capacity, which put excessive pressure on existing staff.

We are interested in your feedback

If you have any questions or comments, we look forward to hearing from you; you can also send us feedback anonymously. We greatly appreciate any critical thoughts that could help us improve our work.