Long Term Future Fund: November grant decisions
Hey everyone. The Long Term Future Fund published its latest grant decisions a few days ago, and cross-posting it here seemed like a good idea. Happy to answer any questions you have.
November 2018 - Long-Term Future Fund Grants
Fund: Long-Term Future Fund
Payout date: November 29, 2018
Payout amount: $95,500.00
Grant author(s): Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka
Grant recipients:
Grant rationale:
The Long-Term Future Fund has decided on a grant round of approximately USD 95,500, to a mix of newer and more established projects (details below).
In order to close a grant round before the start of Giving Season, we ran a very short application process, and made decisions on a shorter timeline than we plan to in the future. This short timeline meant that there were many applications we saw as promising but did not have time to evaluate sufficiently to decide to fund, so we did not end up granting all of the available funds (approximately USD 120,000) in this round. In future grant rounds, we anticipate having more time, and therefore being more likely to spend down the entirety of the fund. We may explicitly reach out to some applicants to suggest they re-submit their applications for future rounds.
Funding to new or smaller projects
AI summer school (Jan Kulveit): USD 21,000
This grant is to fund the second year of a summer school on AI safety, aiming to familiarize potential researchers with interesting technical problems in the field. Last year’s iteration of this event appears to have gone well, based both on public materials and on private knowledge some of us have about participants and their experiences. We believe that well-run education efforts of this kind are valuable (where “well-run” refers to the quality of the intellectual content, the participants, and the logistics of the event), and feel confident enough that this particular effort will be well-run that we decided to support it. This grant fully funds Jan’s request.
Online forecasting community (Ozzie Gooen): USD 20,000
Ozzie sought funding to build an online community of EA forecasters, researchers, and data scientists to predict variables of interest to the EA community. Ozzie proposed using the platform to answer a range of questions, including examples like “How many Google searches will there be for reinforcement learning in 2020?” or “How many plan changes will 80,000 Hours cause in 2020?”, and using the results to help EA organizations and individuals to prioritize. We decided to make this grant based on Ozzie’s experience designing and building Guesstimate, our belief that a successful project along these lines could be very valuable, and some team members’ discussions with Ozzie about this project in more detail. This grant funds the project’s basic setup and initial testing.
AI Safety Unconference (Orpheus Lummis and Vaughn DiMarco): CAD 6,000 (approx USD 4,500)
Orpheus Lummis and Vaughn DiMarco are organizing an unconference on AI Alignment on the last day of the NeurIPS conference, with the goal of facilitating networking and research on AI Alignment among a diverse audience of AI researchers with and without safety backgrounds.
We evaluated this grant on similar grounds to the AI summer school grant above; based on direct interactions we’ve had with some of the organizers, and on the calibre of some of the participating established AI Alignment organizations, we feel that the project deserves funding. Our understanding is that the organizers are still in the process of finalizing whether or not to go ahead with the unconference, so this funding is conditional on them deciding to proceed. This grant would fully fund Orpheus’ request.
Funding to established organizations
Machine Intelligence Research Institute: USD 40,000
MIRI is seeking funding to pursue the research directions outlined in its recent update. We believe that this research represents one promising approach to AI alignment research. According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year.
Ought: USD 10,000
Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications. We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant. Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future.
Future funding
In total, we received over 50 submissions for funding from smaller projects. Of those submissions, we would have been interested in granting about USD 250,000 (not counting grants to larger or more established organizations), which is more than we expected given the very short application period. This leaves us optimistic about being able to recommend grants of similar quality in the future, for larger funding rounds.
It’s difficult to estimate how much total room for regranting we have, but our rough estimate is that, at least in the near term, we can get a similar level of applications every 3 months, resulting in a total of ~USD 800,000 per year for smaller projects we would be interested in funding. Depending on the funding needs of major organizations, and assuming we judge a 40:60 balance between smaller projects and established organizations to be the best use of resources, we estimate that we would be comfortable regranting about USD 2 million over the calendar year.
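(For concreteness, the ~USD 2 million figure follows from treating the ~USD 800,000 per year for smaller projects as the 40% share of the 40:60 split:

$$
0.40 \times T \approx \text{USD } 800{,}000 \;\;\Rightarrow\;\; T \approx \frac{800{,}000}{0.40} = \text{USD } 2{,}000{,}000 \text{ per year.}
$$

)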
For what it’s worth, my estimate of the total current funding gap in the sector of “small and new projects motivated by the long term”, counting only what has robustly positive EV by my estimate, is >$1M.
In general, I think the ecosystem has suffered from the spread of several over-simplified memes. One of them is “the field is talent-constrained”; another is “now with the OpenPhil money...”.
One way to think about it* is to project the space along two axes: “project size” and “risk/establishedness”. A relative abundance of funding in the [“medium to large”, “low risk / established”] sector does not imply much about the funding situation in the [“small”, “not established”] sector, or the [“medium size”, “unproven/risky”] sector.
The [“medium to large”, “low risk / established”] sector is often constrained by a mix of structural limits on how fast organizations can grow without large negative side effects, bottlenecks in hiring, and yes, sometimes, very specific talent needs. It is much less constrained by funding.
On the opposite side, the [“small”, “not established”] sector is probably funding-constrained, and further constrained by a lack of advisors and similar support, and by inadequacies in the trust network structure.
The Long-Term Future Fund moving to fill part of this funding gap seems like great news.
(*I take this from a non-public analysis of how an x-risk funding organization could work, by Karl Koch & the strategy team at AI Safety Camp 1.)
I basically agree with this. My part of the estimate above (~$1M) of what we would be comfortable distributing was based more on the relatively simple process we used to get the grant requests. I think that, using this method and with the current structure of this fund, I would be comfortable giving around ~$1M to small projects; beyond that, I think I would prefer to diversify funders further and would prefer other groups to start making grants, or for us to change our structure to allow us to give away more resources.
Smaller organisations also probably pay a larger relative cost for failed grant attempts: their main talent has to spend a significant amount of time writing grant proposals (or write shorter proposals of lower quality).
Good points.
Perhaps funding organizations would like better ways of figuring out the risks of supporting new projects? I think valuable work could be done here.
Justin Shovelain came up with that. (Justin and I were both on the strategy team of AISC 1.)
Thanks for sharing your thinking.
You must have got some interesting applications, and individual EAs might want to fund or help fund them.
Could there be a way, now or in the future, to facilitate the exchange of info to make this possible? (With politeness?)
I imagine this has come up before. What were the unsolved blockers?
[Declaring an interest: I submitted and didn’t get funded.]
I personally don’t see a major problem with this, but I do think that individualized applications were very valuable to us in making the grant decisions (e.g. we asked specifically how a project will affect the far future, a question that might not be asked in a standardized application).
Obviously we would never want to share applications without asking the applicants first, but it seems pretty plausible to me to add a checkbox to the application which, if checked, would publish the application publicly so that other funders can also take a look.