Launching the EAF Fund

The Effective Altruism Foundation is launching the EAF Fund, a new fund focused on reducing s-risks. In this post, we outline its mission, likely priority areas, and fund management structure, and explain when it makes sense to donate to this fund.

Summary

  • The fund’s mission is to address the worst s-risks from artificial intelligence.

  • Priority areas for grants will likely be decision theory and bargaining, AI alignment and fail-safe architectures, macrostrategy research, and AI governance. We may also make grants related to social science research on conflicts and to moral circle expansion.

  • Fund managers Lukas Gloor, Brian Tomasik, and Jonas Vollmer will decide on grants by a simple majority vote.

  • The current balance is $68,638 (as of November 27), and we expect to be able to allocate $400k–$1.5M during the first year. We will likely try different mechanisms for proactively enabling the kind of research we’d like to see, e.g. requests for proposals, prizes, teaching buy-outs, and scholarships.

  • You should give to this fund if you prioritize improving the quality of the long-term future, especially with regard to reducing s-risks from AI. You can donate to this fund via the Effective Altruism Foundation (donors from Germany, Switzerland, and the Netherlands) or the EA Funds Platform (donors from the US or the UK).

Mission

The fund’s focus is on improving the quality of the long-term future by supporting efforts to reduce the worst s-risks from advanced artificial intelligence. (edited for clarity; see comment section)

Priority areas

Based on this mission, we have identified the following priority areas, which may shift as we learn more.

Tier 1

  • Decision theory. It’s plausible that the outcomes of multipolar AI scenarios are to some degree shaped by the decision theories of the AI systems involved. We want to increase the likelihood of cooperative outcomes, since conflicts are a plausible contender for creating large amounts of disvalue.

  • AI alignment and fail-safe architectures. Some AI failure modes are worse than others. We aim to differentially support alignment approaches where the risks are lowest. Work that ensures comparatively benign outcomes in the case of failure is particularly valuable from our perspective. Surrogate goals are one such example.

  • Macrostrategy research. There are many unresolved questions about how to improve the quality of the long-term future. Additional research could unearth new crucial considerations which would change our prioritization.

  • AI governance. The norms and rules governing the development of AI systems will shape the strategic and technical outcome. Establishing cooperative and prudential norms in the relevant research communities could be a way to avoid bad outcomes.

Tier 2

  • Theory and history of conflict. By using historical examples or game-theoretic analysis, we could gain a better understanding of the fundamental dynamics of conflicts, which might in turn lead to insights that are also applicable to conflicts involving AI systems (a toy illustration follows after this list).

  • Moral circle expansion. Making sure that all sentient beings are afforded moral consideration is another fairly broad lever to improve the quality of the long-term future.
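To make the game-theoretic framing above a bit more concrete, here is a minimal, purely illustrative sketch in Python: a textbook one-shot prisoner’s dilemma in which two agents that each best-respond to an expected defection lock in the outcome that is worst for both. The payoff values and function names are our own toy choices, not drawn from any EAF project or grantee’s work; research in the areas above asks, among other things, how agents with better decision theories or bargaining mechanisms could avoid such dynamics.

```python
# Illustrative only: a one-shot prisoner's dilemma showing how the decision
# procedures agents use determine whether a cooperative or mutually harmful
# outcome obtains. Payoff values are standard textbook numbers.

# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(expected_opponent_action: str) -> str:
    """Pick the action that maximizes the row player's payoff against a
    fixed expectation about the opponent (naive, non-cooperative reasoning)."""
    return max(
        ("cooperate", "defect"),
        key=lambda action: PAYOFFS[(action, expected_opponent_action)][0],
    )

if __name__ == "__main__":
    # Both agents expect defection and best-respond to it, so both defect:
    # the joint outcome (1, 1) is worse for both than mutual cooperation (3, 3).
    outcome = (best_response("defect"), best_response("defect"))
    print(outcome, PAYOFFS[outcome])  # ('defect', 'defect') (1, 1)
```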

Past grants

  • $26,000 to Rethink Priorities: We funded their surveys on descriptive population ethics because learning more about these values and attitudes may inform people’s prioritization and potential moral trades. We also made a smaller grant for a survey investigating attitudes toward reducing wild animal suffering.

  • $27,450 to Daniel Kokotajlo: Daniel will collaborate with Caspar Oesterheld and Johannes Treutlein to write his dissertation at the intersection of AI and decision theory. His project will explore acausal trade in particular and coordination mechanisms between AI systems more generally. This work is relevant to AI safety, AI policy, and cause prioritization. This grant will buy him out of teaching duties during his PhD to allow him to focus on this work full-time.

Fund management

As we learn more, we might make changes to this initial setup.

Fund managers

We chose the fund managers based on their familiarity with the fund’s mission and prioritization, the amount of time they can dedicate to this work, and relevant research expertise. They were approved by the board of EAF.

  • Lukas Gloor is responsible for prioritization at the Effective Altruism Foundation, and coordinates our research with other organizations. He conceptualized worst-case AI safety, and helped coin and establish the term s-risks. Currently, his main research focus is on better understanding how different AI alignment approaches affect worst-case outcomes.

  • Brian Tomasik has written prolifically and comprehensively about ethics, animal welfare, artificial intelligence, and the long-term future from a suffering-focused perspective. His ideas have been very influential in the effective altruism movement, and he helped found the Foundational Research Institute, a project of the Effective Altruism Foundation, which he still advises. He graduated from Swarthmore College in 2009, where he studied computer science, mathematics, statistics, and economics.

  • Jonas Vollmer is the Co-Executive Director of the Effective Altruism Foundation, where he is responsible for setting the strategic direction, communications with the effective altruism community, and general management. He holds degrees in medicine and economics with a focus on health economics and development economics. He previously served on the boards of several charities, and is an advisor to the EA Long-term Future Fund.

Grantmaking

The current balance of the fund is $68,638 (as of November 27), and we expect to be able to allocate $400k–$1.5M during the first year. We will likely try different mechanisms for proactively enabling the kind of research we’d like to see, e.g. requests for proposals, prizes, teaching buy-outs, and scholarships.

Given the current state of academic research on s-risks, it’s impossible to find senior academic scholars who could judge the merit of a proposal based on its expected impact. However, we will consult domain experts where we think their judgment adds value to the evaluation. We also ran a hiring round for a research analyst, whom we expect to support the fund managers. They may also take on more grantmaking responsibilities over time.

Grant recipients may be charitable organizations, academic institutions, or individuals. However, we expect to fund individual researchers and small groups more often than large organizations or institutes. Grants are approved by a simple majority of the fund managers. We expect grants to be made at least every six months.

We will experiment with different formats for publishing our reasoning behind individual grant decisions and for evaluating past grants (e.g. trying to use predictions). This will likely depend on the number and size of grants.

When should you give to this fund?

CEA has already written up reasons for giving to funds in general, so we won’t repeat them here. When does it make sense to give to this fund in particular?

  • You think long-termism, broadly construed, should guide your decisions.

  • You think there is a significant chance of AI profoundly shaping the long-term future.

When does it make sense to give to this fund instead of the EA Long-term Future Fund?

  • You are interested in improving the quality of the long-term future, addressing s-risks from AI in particular. This might be the result of your normative views (e.g. a strong focus on suffering), of pessimistic empirical beliefs about the long-term future, or of thinking that s-risks are currently neglected.

  • You trust the judgments of the fund managers or the Effective Altruism Foundation.

How to donate to this fund

You can donate to this fund via the Effective Altruism Foundation (donors from Germany, Switzerland, and the Netherlands) or the EA Funds Platform (donors from the US or the UK).

Note: Until December 29, donations to the EAF Fund can be matched 1:1 as part of a matching challenge. (For the matching challenge, we’re still using the former name “REG Fund”.)