Scalable longtermist projects: Speedrun series – Introduction
This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to identify whether the project could be a top candidate for our team to try to bring about.
This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.
Who is this sequence for?
We imagine the speedruns might be interesting to:
Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.
Potential funders of entrepreneurial longtermist projects (for the same reason).
Researchers who might be interested in looking further into any of the project ideas.
(Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like.
Stakeholders and members of the public interested in Rethink Priorities’ General Longtermism team (such as potential funders and job applicants) who might want to use the speedruns as an insight into how we work.
Things to keep in mind when using the speedruns
The speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same).
The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.
The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.
Opinions on whether a given project is worth pursuing will differ for people in a different position from ours. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question.
We have not updated the conclusions of the speedruns to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.
Overview of the sequence
So far, we are planning for this series to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.
The speedruns we’re planning to include in this sequence (so far) are:
Speedrun: Develop an affordable super PPE
Speedrun: AI Alignment Prizes
Speedrun: Demonstrate the ability to rapidly scale food production in the case of nuclear winter
These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were close to a publicly legible format at the start of 2023 (more on how we chose which speedruns to publish, in the Appendix). We might continue to add to this sequence with more speedruns in the future.
Appendix: Further details on why and how we produced the speedruns
What are speedruns?
The speedruns were short (10-15h) research projects conducted over a few days.
The overarching aim was to help our team assess whether we wanted to invest significant further time in exploring a given project. The speedruns did this by:
Scoping out the project to define it and its different variations more clearly,
Doing a shallow analysis of cost-effectiveness, downside risk, marginal value of additional projects in this space, and other relevant factors.
The idea was to get an initial sense of the promisingness of a project and the degree to which further work might change our assessment. We explicitly prioritized speed over rigor in order to get an okay sense of the promisingness of several projects, rather than a good sense of the promisingness of a few, so that we could prioritize further research.
All of the speedruns were conducted by junior researchers (fellows or research assistants who were in their first couple of months on the team) without expertise in the specific area of the speedrun. We thought that, in addition to their direct information value, the speedruns would be useful practice for junior researchers to build research skills, and that this would have important talent development benefits.
Why did we produce the speedruns?
This section is a summary of the relevant parts of our 2022 summary post; see that post for a full explanation of how speedruns fit into our strategy.
Broadly, we produced the speedruns in order to identify top candidate projects for our team to try to help incubate.
The primary aim of our team in 2022 was to facilitate the creation of faster and better longtermist “megaprojects”, i.e., projects that we believe have a decent shot at reducing existential risk at scale (spending hundreds of millions of dollars per year). (However, in practice we focused more on somewhat scalable projects that could be piloted much more cheaply; hence the title of this post.)
We tested several different approaches to making this happen, one of which was to identify a small set of project ideas we thought looked unusually promising, then attempt to recruit founders to work on those projects (elsewhere called the “project-first approach”).
To identify the set of project ideas, we made a rough weighted factor model to prioritize which ideas to research further, then conducted speedruns on ideas that scored highly on this model. The plan was then to pick the ideas that looked most promising after the speedruns, analyze them further, and eventually circulate these ideas and try to incubate projects related to them.
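For readers unfamiliar with weighted factor models: the basic idea is to score each candidate project on a handful of factors, multiply each score by a weight reflecting that factor’s importance, and sum to get an overall priority score. The sketch below is a minimal hypothetical illustration of this technique; the factor names, weights, and scores are invented for the example and are not our actual internal model.

```python
# Hypothetical illustration of a weighted factor model (not our actual model).
# Each project idea is scored on several factors; the weighted sum gives a
# rough priority score used to decide which ideas to research further.

# Hypothetical weights reflecting each factor's relative importance.
WEIGHTS = {
    "impact_if_successful": 0.4,
    "tractability": 0.3,
    "team_fit": 0.2,
    "marginal_value": 0.1,
}

def priority_score(factor_scores: dict) -> float:
    """Weighted sum of factor scores (each on, say, a 0-10 scale)."""
    return sum(WEIGHTS[factor] * score for factor, score in factor_scores.items())

# Example: score two hypothetical project ideas and rank them.
ideas = {
    "Project A": {"impact_if_successful": 8, "tractability": 4,
                  "team_fit": 6, "marginal_value": 5},
    "Project B": {"impact_if_successful": 6, "tractability": 7,
                  "team_fit": 7, "marginal_value": 6},
}
for name, scores in sorted(ideas.items(), key=lambda kv: -priority_score(kv[1])):
    print(f"{name}: {priority_score(scores):.1f}")
```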
How did we choose the topics of the speedruns and which ones to publish?
We conducted a total of 13 speedruns (we had initially planned to conduct >20, but we paused to reassess as a result of the FTX collapse). The topics were chosen based on a combination of the following factors:
Score on the internal weighted factor model used to prioritize between the projects in our projects database.
Researcher interest.
Degree to which there was existing work on this project (that we knew about).
Urgency for internal decision-making.
We’re only publishing 3 speedruns. Where we’re not publishing a speedrun, it’s for one of the following reasons:
Internally focused: The speedrun seemed mostly relevant to internal decision-making and did not seem like it would be useful to people outside of RP.
Developed into a larger project: The speedrun kicked off a larger research project (in some cases leading to other publishable outputs).
Time-consuming: The speedrun would have been particularly time-consuming to publish (e.g., because it contained large amounts of potentially infohazardous information or claims that could easily be misunderstood, or was just unusually early-stage).
Non-public work: The speedrun cannot be made public at this time for strategic or confidentiality reasons.
Here is an overview of all of the speedrun topics (roughly sorted by cause area), whether we’re publishing them, and why:
| Speedrun topic | Are we publishing this? If not, why not? |
| --- | --- |
| Develop an affordable super PPE | Publishing |
| Mass deployment of air sterilization techniques | Developed into a larger project |
| Rethink Priorities runs coordination activities for the biosecurity community | Internally focused |
| A quick ranking of all of the AI-related projects on our list | Time-consuming |
| Create AI alignment prizes | Publishing |
| Establish an AI ethics organization | Time-consuming |
| Establish an AI auditing organization | Non-public work |
| Establish an AI whistleblowing organization | Non-public work |
| Infrastructure to support independent researchers | Developed into a larger project |
| Research fit-testing/upskilling programs such as xERIs | Internally focused + Time-consuming |
| Demonstrate the ability to rapidly scale food production in the case of nuclear winter | Publishing |
| Create and distribute civilisation restart manuals | Time-consuming |
| Consolidate an AI technical safety or AI governance hub | Time-consuming |
If you are doing (or considering doing) work related to one of the speedruns we did not publish, feel free to reach out to me (at marie at rethinkpriorities dot org) and we can talk about it in more detail.
Acknowledgements
This research is a project of Rethink Priorities. It was written by Marie Davidsen Buhl. Thanks to my colleagues Bill Anderson-Samways, Joe O’Brien, Linch Zhang, Max Räuker, Onni Aarne, Patrick Levermore, Peter Wildeford, and Renan Araujo for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
This is a cool idea, thanks for doing this and publishing the results.
Do you know of more writing on the possibility of AI auditing organizations? I am mostly interested in writing on the amount of x-risk reduction such an organization might provide as well as downsides to starting such an organization from the perspective of x-risk.