Long Term Future Fund: November grant decisions

Hey everyone. The Long-Term Future Fund published its latest grant decisions a few days ago, and cross-posting it here seemed like a good idea. Happy to answer any questions you have.

November 2018 - Long-Term Future Fund Grants

Fund: Long-Term Future Fund

Payout date: November 29, 2018

Payout amount: $95,500.00

Grant author(s): Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Grant recipients:

- Jan Kulveit (AI safety summer school): USD 21,000
- Ozzie Gooen (online forecasting community): USD 20,000
- Orpheus Lummis and Vaughn DiMarco (AI Safety Unconference): CAD 6,000
- Machine Intelligence Research Institute: USD 40,000
- Ought: USD 10,000

Grant rationale:

The Long-Term Future Fund has decided on a grant round of approximately USD 95,500, to a mix of newer and more established projects (details below).

In order to close a grant round before the start of Giving Season, we ran a very short application process and made decisions on a shorter timeline than we plan to in the future. This short timeline meant that there were many applications we saw as promising but did not have time to evaluate thoroughly enough to fund, so we did not end up granting all of the available funds in this round (approximately USD 120,000). In future grant rounds, we anticipate having more time, and therefore being more likely to spend down the entirety of the fund. We may explicitly reach out to some applicants to suggest they re-submit their applications for future rounds.

Funding to new or smaller projects

AI summer school (Jan Kulveit): USD 21,000

This grant is to fund the second year of a summer school on AI safety, aiming to familiarize potential researchers with interesting technical problems in the field. Last year’s iteration of this event appears to have gone well, based both on public materials and on private knowledge some of us have about participants and their experiences. We believe that well-run education efforts of this kind are valuable (where “well-run” refers to the quality of the intellectual content, the participants, and the logistics of the event), and feel confident enough that this particular effort will be well-run that we decided to support it. This grant fully funds Jan’s request.

Online forecasting community (Ozzie Gooen): USD 20,000

Ozzie sought funding to build an online community of EA forecasters, researchers, and data scientists to predict variables of interest to the EA community. Ozzie proposed using the platform to answer a range of questions, including examples like “How many Google searches will there be for reinforcement learning in 2020?” or “How many plan changes will 80,000 Hours cause in 2020?”, and using the results to help EA organizations and individuals prioritize. We decided to make this grant based on Ozzie’s experience designing and building Guesstimate, our belief that a successful project along these lines could be very valuable, and some team members’ more detailed discussions with Ozzie about the project. This grant funds the project’s basic setup and initial testing.

AI Safety Unconference (Orpheus Lummis and Vaughn DiMarco): CAD 6,000 (approx. USD 4,500)

Orpheus Lummis and Vaughn DiMarco are organizing an unconference on AI Alignment on the last day of the NeurIPS conference, with the goal of facilitating networking and research on AI Alignment among a diverse audience of AI researchers with and without safety backgrounds.

We evaluated this grant on similar grounds to the AI summer school grant above: based on direct interactions we’ve had with some of the organizers, and the calibre of some of the participating established AI Alignment organizations, we feel that the project deserves funding. Our understanding is that the organizers are still in the process of finalizing whether or not to go ahead with the unconference, so this funding is conditional on them deciding to proceed. This grant would fully fund Orpheus’ request.

Funding to established organizations

Machine Intelligence Research Institute: USD 40,000

MIRI is seeking funding to pursue the research directions outlined in its recent update. We believe that this research represents one promising approach to AI alignment research. According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year.

Ought: USD 10,000

Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications. We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant. Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future.

Future funding

In total, we received over 50 submissions for funding from smaller projects. Of those submissions, we would have been interested in granting about USD 250,000 (not counting grants to larger or more established organizations), which is more than we expected given the very short application period. This leaves us optimistic about being able to recommend grants of similar quality in the future, for larger funding rounds.

It’s difficult to estimate how much total room for regranting we have, but our rough estimate is that, at least in the near term, we can expect a similar level of applications every 3 months, resulting in a total of ~USD 800,000 per year for smaller projects we would be interested in funding. Depending on the funding needs of major organizations, and assuming we judge a 40:60 balance between smaller projects and established organizations to be the best use of resources, we estimate that we would be comfortable regranting about USD 2 million over the calendar year.
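As a sanity check, the back-of-the-envelope arithmetic behind that figure can be sketched in a few lines. The USD 800,000 and 40:60 inputs are the estimates stated above; the script merely combines them:

```python
# Back-of-envelope sketch of the regranting estimate above.
# Inputs are the post's own figures; only the division is new.

small_projects_per_year = 800_000  # ~USD per year for smaller projects
small_share = 0.40                 # assumed 40:60 small:established split

# If smaller projects make up 40% of total regranting, the implied total is:
total_regranting = small_projects_per_year / small_share
print(f"Implied annual regranting capacity: USD {total_regranting:,.0f}")
# → Implied annual regranting capacity: USD 2,000,000
```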