Long-Term Future Fund: November 2019 short grant writeups
Since we’ve been dealing with a larger-than-usual set of commitments for the Long-Term Future Fund, including some internal restructuring, discussion of fund scope, and coordination of fundraising initiatives, we did not end up having enough time to produce a set of writeups with as much detail as those written for past rounds.
As a result, the following report consists of a relatively straightforward list of the grants we made, with short explanations of the reasoning behind them. I (Oliver Habryka) am planning to follow this up in a few weeks with more detailed explanations of my reasoning, and other fund members might do the same. I will still be available to respond to comments and questions in the comment section.
All the writeups here were written by me (Oliver Habryka), but in some cases they represent the fund team's consensus more than usual.
Grants Made By the Long-Term Future Fund
Each grant recipient is followed by the size of the grant and their one-sentence description of their project. All of these grants have been made.
Damon Pourtahmaseb-Sasi ($40,000): Subsidized therapy/coaching/mediation for those working on the future of humanity.
Tegan McCaslin ($40,000): Conducting independent research into AI forecasting and strategy questions.
Vojtěch Kovařík ($43,000): Research funding for a year, to enable a transition to AI safety work.
Jaspreet Pannu ($18,000): Surveying the neglectedness of broad-spectrum antiviral development.
John Wentworth ($30,000): Building a theory of abstraction for embedded agency using real-world systems for a tight feedback loop.
Elizabeth E. Van Nostrand ($19,000): Creating a toolkit to bootstrap from zero to competence in ambiguous fields.
Daniel Demski ($30,000): Independent research on agent foundations.
Sam Hilton ($62,000): Supporting the rights of future generations in UK policy and politics.
Topos Institute ($52,000): A summit for the world’s leading applied category theorists to engage with human flourishing experts.
Jason Crawford ($25,000): Telling the story of human progress to the world, and promoting progress as a moral imperative.
Kyle Fish ($30,000): Identifying white space opportunities for technical projects to improve biosecurity.
AI Safety Camp Toronto ($29,000): AISC Toronto brings together aspiring researchers to work on concrete problems in AI safety.
Miranda Dixon-Luinenburg ($20,000): Writing fiction to convey EA and rationality-related topics.
Roam Research ($20,000): A note-taking tool for networked thought, actively used by many EA researchers.
Joe Collman ($10,000): Investigation of AI Safety Via Debate and ML training.
Total distributed: $471,000
Writeups by Oliver Habryka
Damon Pourtahmaseb-Sasi ($40,000)
Subsidized therapy/coaching/mediation for those working on the future of humanity.
We are aware of a significant number of people (including many full-time employees) within the EA and longtermist communities who struggle with depression, anxiety, and other mental health problems. Therapy and coaching seem relatively effective at helping with these problems (the exact effect sizes are highly disputed, but on net they seem to help a good amount), so it makes sense to provide them to members of those communities. A further benefit: some EAs are unwilling to see therapists whom they expect not to understand their values or beliefs, and may be more willing to pursue therapy, and make more progress, with someone familiar with those values and beliefs.
Damon is a licensed therapist who has been offering services to people working in high-impact areas for the past year, and this grant is to allow him to spend a larger fraction of his time over the next year helping people working on high-impact projects, as well as to relocate to California to be able to offer his services to a larger number of people (he is currently located in Florida, where none of his current clients are located).
We received a large number of overwhelmingly positive testimonials from his current clients, sent to us via an independent channel (i.e. Damon did not filter out negative testimonials). This was one of the key pieces of evidence that led me to recommend this grant.
Tegan McCaslin ($40,000)
Conducting independent research into AI forecasting and strategy questions.
This is in significant part a continuation of our previous grant to Tegan for research into AI forecasting and strategy questions. Since then, Tegan has worked with other researchers I trust, and has received sufficient positive testimonials to make me comfortable with this grant. She also sent us some early drafts of research on comparing evolutionary optimization processes with current deep learning systems, which she is planning to publish soon, and which I think is promising enough to be worth funding. She also sent us some early draft work on long-term technological forecasting (10+ years into the future) that I also thought was promising.
Vojtěch Kovařík ($43,000)
Research funding for a year, to enable a transition to AI safety work.
Vojtěch previously did research in mathematics and game theory. He just finished an internship at FHI and is now interested in exploring a full-time career in AI Safety. To do so, he plans to spend a year doing research visits at various organizations and exploring some research directions he is excited about.
According to an FHI researcher we spoke to, Vojtěch seems to have performed well during his time at FHI, so it seemed good to allow him to try transitioning into a full-time AI safety role.
Jaspreet Pannu ($18,000)
Surveying the neglectedness of broad-spectrum antiviral development.
Jaspreet just finished her FHI summer fellowship. She’s now interested in translating an internal report on broad-spectrum antivirals (which she wrote during the fellowship) into two peer-reviewed publications.
She received positive testimonials from the people she worked with at FHI, and the development of broad-spectrum antivirals seems like a promising direction for reducing the chance of bioengineering-related catastrophes.
John Wentworth ($30,000)
Building a theory of abstraction for embedded agency using real-world systems for a tight feedback loop.
John participated in the recent MIRI Summer Fellows Program, where he proposed research directions that other MIRI researchers were excited about. In addition to receiving multiple strong testimonials from AI alignment researchers, he has been very actively posting his ideas to the AI Alignment Forum, where he has received substantive engagement and positive comments from several top researchers; this is one of the main reasons for this grant.
Elizabeth E. Van Nostrand ($19,000)
Creating a toolkit to bootstrap from zero to competence in ambiguous fields.
Elizabeth has a long track record of writing online about various aspects of effective altruism, rationality and cause prioritization, and also has a track record of doing high-quality independent research for a variety of clients.
Elizabeth is planning to more fully understand how people can come to quickly orient themselves in complicated fields like history and other social sciences, particularly in domains that are relevant to the long-term future (like the structure of the Scientific and Industrial Revolutions, as well as the factors behind civilizational collapse).
Daniel Demski ($30,000)
Independent research on agent foundations.
Daniel attended the MIRI Summer Fellows Program in 2017 and 2018, as well as the AI Summer Fellows program in 2018. During those periods, he developed some research directions that multiple researchers I contacted were excited about, and he received positive testimonials from the people he worked with at MIRI.
From his application:
My main focus for the first few months will be completing a collaborative paper on foundations of decision theory which began as discussions at MSFP 2018. The working title is “Helpful Foundations”, and a very rough working draft can be seen here. The overall strategy is to first assume an agent given a specific scenario (world) would have some preferences over its actions. We then use VNM axioms to represent its preferences in each possible world as utilities. Pareto improvements are used to aggregate preferences across possible worlds, and a version of the Complete Class Theorem is used to derive a prior and utilities. However, because of the weight pulled by the CCT, it looks like we will be able to remove one or more VNM axioms and still arrive at our result.
Sam Hilton ($62,000)
Supporting the rights of future generations in UK policy and politics.
Sam Hilton runs the All-Party Parliamentary Group (APPG) for Future Generations in the British Parliament, and seems to have found significant traction with this project; many members of Parliament have engaged with the APPG and found their inputs valuable. This funding will support staff and other costs of the APPG’s secretariat, enabling the group to work more effectively.
Topos Institute ($52,000)
A summit for the world’s leading applied category theorists to engage with human flourishing experts.
David Spivak and Brendan Fong, the two co-founders of the Topos Institute, are applying category theory to various problems in AI alignment and other areas I think are important, and are organizing a conference to facilitate exchange between the category theory community and people currently working on various technical problems around the long-term future.
More recently, several AI alignment researchers I have talked to have found aspects of category theory quite useful when trying to solve certain technical problems, and David Spivak has a strong track record of academic and educational contributions.
Jason Crawford ($25,000)
Telling the story of human progress to the world, and promoting progress as a moral imperative.
All of Jason’s work is in the domain of Progress Studies. He works on understanding what the primary causes of historical technological progress were, and what the broad effects of different types of technological progress have been. Since I consider most catastrophic risks to be the result of badly controlled emerging technologies, understanding the historical causes of technological progress, and humanity’s track record in controlling those technologies, is an essential part of thinking about global catastrophic risk and the long-term future.
I also consider this grant to be valuable because Jason seems to be a very capable researcher who has attracted the attention of multiple people whose thinking I respect a lot (he also received a grant from Tyler Cowen’s Emergent Ventures). I think there is a good chance he could become a highly influential public writer, and having him collaborate with researchers and thinkers working on global catastrophic risks could be very valuable in the long-run.
I also think that his current research will be directly relevant in worlds where catastrophic risks turn out not to be the most important type of issue, and where human flourishing through technological progress may be the most important cause area. (This reasoning is similar to that behind last round’s grant toward improving reproducibility in science.)
Kyle Fish ($30,000)
Identifying white space opportunities for technical projects to improve biosecurity.
From his application:
I plan to produce a technical report on opportunities for science and engineering projects to improve biosecurity and pandemic preparedness. Biosecurity is an established cause area in the EA community, and a variety of resources provide high-level overviews of potential paths to impact (careers in policy, direct work in synthetic biology, public health, etc.). However, there is a need for a clearer and deeper understanding of how technical expertise in relevant science and engineering disciplines can best be leveraged in this space. The report will cover three core topics: 1) a technical analysis of relevant science and engineering subfields (e.g. vaccine development and vaccine alternatives, novel pathogen detection systems, emerging synthetic biology techniques); 2) the current landscape of organizations, academic labs, companies, and individuals working on technical problems in biosecurity, with summaries of the projects already underway; and 3) an analysis of the white space opportunities where additional science and engineering innovation ought to be prioritized to mitigate biorisks.
I hope this project will ultimately reduce the risk of catastrophic damage from natural or engineered pathogens. This impact will likely be realized through a variety of different uses of the report:
+ As a guide for scientists and engineers interested in working on biosecurity, by providing a clear summary of the current state of the space and the technical project types they should consider pursuing
+ As a resource for the current biosecurity community to better understand the landscape of technical projects already underway
+ As a resource for grantmakers to inform funding decisions and prioritization
+ As a means of deepening my own understanding of opportunities in the biorisk space as I consider a more substantive shift toward a biosecurity-focused career trajectory
Given the difficulty of assessing biosecurity threats, it is unlikely that direct connections between this report and quantifiable reductions in biorisk will be possible. There are, however, a variety of proxy metrics that can be used to measure impact. Potential metrics include the number of individuals who use this report to inform a partial or complete career change or shift in technical focus, relative impact estimates for such changes, number of technical projects launched that align with the white spaces identified, and dollar amounts of funding allocated to such projects. Subjective evaluations of potential impact by current experts in the biosecurity space may also be useful. The best measurement strategy will depend in large part on the manner in which this report is ultimately distributed.
We also reached out to a variety of researchers we trust in the domain of biosecurity, who gave strong positive feedback about Kyle’s project and his skills. He has also spoken at EA events about biotech initiatives in clean meat, and has been working as a clean-meat researcher for the last few years, which provides him with much of the relevant biotech background to work in this space.
AI Safety Camp #5 ($29,000)
Bringing together aspiring researchers to work on concrete problems in AI safety.
This is our third grant to the AI Safety Camp, so the basic reasoning from past rounds is mostly the same. This round, I reached out to more past participants and received responses that were, overall, quite positive. I have also come to think that projects in the AI Safety Camp’s reference class are more important than I originally thought.
Miranda Dixon-Luinenburg ($20,000)
Writing fiction to convey EA and rationality-related topics.
This is a continuation of a grant we made last round, so our reasoning remains effectively the same. Miranda sent us some drafts and documents that seem promising enough to be worth further funding. That said, we think she should likely seek independent funding after this round: we hope her book will be far enough along by then for outside funders to evaluate the project and potentially support it.
Roam Research ($20,000)
A note-taking tool for networked thought, actively used by many EA researchers.
We have previously made a grant to Roam Research. Since then, a large number of researchers and other employees at organizations working in priority areas have started using Roam and seem to have benefited a lot from it. We received a large number of positive testimonials, and I’ve also found the product to be well-designed.
Despite that, our general sense is that Roam should try to attract external funding after this round, and we are not planning to recommend future grants to Roam (mostly because it is well-suited to raising funding from more conventional sources).
Joe Collman ($10,000)
Investigation of AI Safety Via Debate and ML training.
From the application:
I aim to work on a solo project with the guidance of David Krueger, with two main purposes:
The first is to learn and upskill in AI safety related areas.
The second is to explore AI safety questions focused on AI safety via debate (https://arxiv.org/abs/1805.00899), and connected ideas.
I think that David Krueger is doing good work in the space of AI alignment, and funding Joe to work on things David considers important seems worth the small amount of requested funding. We recommended this grant mostly on the basis of referrals and testimonials; much of my trust comes from David having collaborated over the past few years with many people I trust quite a bit (at FHI, DeepMind, CHAI, and 80,000 Hours).