Recently, Elon Musk donated $10M to fund research on making AI more robust and beneficial, motivated in part by Nick Bostrom’s book Superintelligence and by AI’s links to existential risk.
Many EAs I know are interested in the relationship between artificial intelligence and existential risk, and there has been some discussion here of long-term AI safety as a topic for long-run-focused EA. Given this, I thought it’d make sense to post the request for proposals for research projects to be funded by Musk’s donation. I’d be very happy to see some applications come out of the broader EA community, so do think about it yourself and pass it along to friends!
If you have questions, feel free to ask them in the comments or to contact me!
Here’s the email FLI has been sending around:
Initial proposals (300–1000 words) due March 1, 2015
The Future of Life Institute, based in Cambridge, MA and headed by Max Tegmark (MIT), is seeking proposals for research projects aimed at maximizing the future societal benefit of artificial intelligence while avoiding potential hazards. Projects may fall in the fields of computer science, AI, machine learning, public policy, law, ethics, economics, or education and outreach. This 2015 grants competition will award funds totaling $6M USD.
This funding call is limited to research that explicitly focuses not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial; for example, research could focus on making machine learning systems more interpretable, on making high-confidence assertions about AI systems’ behavior, or on ensuring that autonomous systems fail gracefully. Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly exceed current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems whose solutions are likely to be important first steps toward long-term solutions.
Please do forward this email to any colleagues and mailing lists that you think would be appropriate.
Proposals
Before applying, please read the complete RFP and list of example topics, which can be found online along with the application form [1].
As explained there, most of the funding is for $100K–$500K project grants, each of which will support a small group of collaborators on a focused research project of up to three years’ duration. For a list of suggested topics, see the complete RFP [1] and the Research Priorities document [2]. Initial proposals, which are intended to require only a modest amount of preparation time, must be received on our website [1] on or before March 1, 2015.
Initial proposals should include a brief project summary, a draft budget, the principal investigator’s CV, and brief biographies of the co-investigators. After initial proposals are reviewed, some projects will advance to the next round and be invited to submit a Full Proposal by May 17, 2015. Public award recommendations will be made on or about July 1, 2015, and successful proposals will begin receiving funding in September 2015.