What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation
Summary
Rethink Priorities’ General Longtermism team, led by me, has existed for just under a year. In this post, I summarize our work so far.
Our initial theory of change centered around:
Primarily, facilitating the creation of faster and better longtermist “megaprojects” (though in practice, we focused more on somewhat scalable longtermist projects).
Secondarily, improving strategic clarity about which “intermediate goals” longtermists should pursue (though in practice, we focused more opportunistically on miscellaneous high-impact research questions). (more)
We had ~5 full-time equivalents (FTEs) on average (a total of 54.5 FTE-months of research work up until the end of November) and spent ~$721,000. (more)
(Shareable[1]) Outputs and outcomes of our work this year include:
I encouraged the creation of, and supported, Rethink Priorities’ Special Projects team, which provides fiscal sponsorship to external entrepreneurial projects.
Marie and Renan (based on prior work by Renan, Max, and me) made a simple model for prioritizing between longtermist projects and identifying ones that seemed especially promising to research further. (more)
Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed 13 research “speedruns” (~10h shallow dives) into specific longtermist projects. (more)
Our fellows and research assistant (Emma, Jam, Joe, Max, Marie) completed further research on several longtermist projects, including air sterilization techniques, whistleblowing, the AI safety recruiting pipeline, and infrastructure to support independent researchers. (more)
Renan cofounded and ran Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes while also building the longtermist field and supporting EA community building in the country. (more)
Ben cofounded and ran Pathfinder, a project to help mid-career professionals find the highest-impact work. (more)
I initiated a founder search for multiple promising projects, including:
Shelters and other civilizational resilience work (resulting in recommending grants to Tereza Fildrova, who helped organize the SHELTER weekend, and someone else who is exploring his fit for work in this area). (more)
An early warning forecasting center (resulting in working with Alex D to explore founding a project in this area). (more)
Ben researched nanotechnology strategy and made a database of resources relevant to this area. (more)
Separately from my work at Rethink Priorities, I was a guest fund manager for EA Funds’ Long-Term Future Fund, and a regranter for the Future Fund. (more)
The recent changes to the EA funding situation have significantly affected our team’s strategy, in that megaprojects now seem less relevant, and in that new research questions might have become especially important in light of the FTX crash. (more) However, I still think it’s very plausible that we continue to focus on entrepreneurial longtermist projects as our main research area. We’re currently in the process of reorienting and setting our strategy for 2023. (more)
You can help our team by contributing ideas for highly impactful research projects, funding us, expressing interest in working with us, and giving feedback on the work and plans outlined in this post. (more)
Preamble
I’ve led the new General Longtermism team at Rethink Priorities for slightly under a year. Recent changes in the EA funding landscape have had a large impact on our work. I’ve decided that now is a good time to write a detailed summary of our work so far, as well as how recent events have affected our work.
The primary purpose of this article is to be informative to our potential future employees, funders, and collaborators. Secondarily, I believe somewhat in the value of transparency, and hope that this article can be useful to some set of the (EA Forum-reading) public. Tertiarily, I expect this article to be internally helpful for myself and the rest of the team to keep track of what we did, especially for future reviews.
Within this article, I mostly chose to take a “descriptive” rather than “evaluative” approach. That is, I attempted to be factual and describe what we did and sometimes how or why we did it, but not whether we did well or poorly, leaving the reader to judge for themselves. However, I think my opinions are sometimes helpful for understanding what we did, so I included them when appropriate. I do think public self-evaluations are often helpful (though they have a lot of potential for bias), but I nonetheless think a descriptive approach is more appropriate and useful here.
Both to maintain accountability for my decisions about the team and to maintain a single authorial voice for this article, I tried my best to frame decision points using “I” instead of “we.”
“I” in this article refers to me, Linch Zhang. Unless explicitly stated otherwise, all opinions in this article are mine and mine alone. They should not be expected to represent the opinions of the rest of the General Longtermism team, or the rest of Rethink Priorities overall, or those of any of our funders or collaborators.
Our Initial Theory of Change
Here’s what we wrote in our expression of interest form:
Our General Longtermism team is dedicated primarily to doing the research and other work necessary to facilitate faster and better creation of “longtermist megaprojects” – projects that we believe have a decent shot of reducing existential risk at scale (spending hundreds of millions of dollars per year). Secondarily, our team also works on improving strategic clarity about which “intermediate goals” for people aiming to improve the long-term future should vs. should not be pursued.
The ways we aim to positively impact the world include:
Generating, prioritizing among, and doing further research on specific “visions of the future” for longtermist megaprojects and other intermediate goals we might want the world to achieve 5-20 years from now.
Tracing backwards from such visions to see what actions we can take today to cause such longtermist megaprojects or intermediate goals to happen, for example, via recommending grants, leading further research, and (especially) incubating projects and organizations that can grow to become the longtermist megaprojects of the future.
Identifying people who may excel in work related to longtermism (including roles in research, policy, grantmaking, etc.), helping them test their fit for such roles, and helping them build career capital for such roles.
In practice, the first point is operationalized as something like the following three approaches:
Project-first approach: Identify ideas that look good in the abstract assuming an ideal founder is there. Do feasibility analyses to narrow it down to ~5-10 ideas. Attempt founder recruitment and pick the best founder-idea matches. (Main focus)
Founder-first approach: Make a list of great founders we know and match them to promising projects. Compare these matches to above. (Secondary focus)
Incubation approach: Find organizations that others have already created and provide them with operations support. (Secondary focus)
There are two ways in which this claimed theory of change is now outdated:
In light of the recent drop in the pool of EA funding (mainly due to the FTX crash but also due to tech stocks dropping significantly in value), I think the appetite for longtermist megaprojects has gone down immensely. Thus, to the extent we want to focus on longtermist entrepreneurial projects, the focus should more explicitly shift to smaller, highly cost-effective (rather than very scalable) projects.
Note that we had already de facto shifted to focusing on smaller projects before (and independently of) the changes to the funding situation, partly because we found that it was significantly easier to find cost-effective-seeming small projects than large projects and partly because of idiosyncratic factors like team member preferences.
However, in light of current events, we’re reconsidering whether we should continue to focus on longtermist entrepreneurship projects at all. We are currently exploring other areas we might want to focus on (see “Next Steps” section).
Our explicit theory of change listed “longtermist megaprojects” as our primary focus and “strategic clarity around intermediate goals” as our secondary focus. However, it is more accurate to say that our de facto primary focus was “somewhat scalable entrepreneurial longtermist projects” and our de facto secondary focus was “opportunistically advancing projects and research questions that seemed particularly high-impact.”
For more concrete details of what I mean, see “Outputs and Outcomes of What We Did So Far”.
Inputs and Costs
From Jan 1, 2022 to Nov 30, 2022, we’ve had ~54.5 FTE-months of research work and have spent approximately $721,000 (~$542,000 direct costs + ~$179,000 operations overhead). Most of our costs are payroll. We have enough runway (from non-FTX sources) until the end of 2023.
In terms of human capital costs, we had an average of ~5 FTEs this year (4.96 in our internal accounting[2]), with substantial month-to-month variance. We currently have seven people (four permanent research staff and three temporary research fellows).
I, Linch Zhang, (Research Manager) have been on the General Longtermism team since January 2022 (and at RP since October 2020). Renan Araujo (Researcher) and Ben Snodin (Senior Researcher) joined in February 2022. Max Räuker (Research Fellow) joined in February 2022 and left in May 2022 to work on AI governance. Marie Buhl (Research Assistant) joined in August 2022. Emma Williamson, Jam Kraprayoon, and Joe O’Brien (Research Fellows) joined in September 2022.
I can provide an org chart or graphical timeline of hires if readers find them useful, though my guess is that they won’t be.
Outputs and Outcomes of What We Did So Far
Below are outputs and outcomes of the most salient work we did this year (up to roughly the end of October 2022). Note that I frame things as “outputs and outcomes” rather than “impact,” as I believe this is a more accurate representation of the underlying reality. If we’re positively impactful, approximately all of the actual impact of our work (measured in lives saved or fractional dooms averted) will a) hit after 2022, and b) require substantial further work (from ourselves or others) to come to fruition.
This list is not exhaustive, and systematically does not include some of the private work (e.g., consulting for EA organizations, or research on semi-sensitive topics) we’ve done. However, I do not believe the private work we’ve done is unusually impactful or very time-consuming relative to our public work, so the non-private summary here is probably a reasonable summary of both our work so far and our time expenditures.
Starting the Special Projects team and evaluating applications for fiscal sponsorship
I worked to encourage Rethink Priorities to start a Special Projects (SP) team. The basic idea is that there are many wonderful, highly impactful ideas and projects (both existing and potential) in (longtermist) EA. However, many of them are operationally bottlenecked, and RP can potentially help remove such bottlenecks. My main contribution was just saying that such a team should be created. The actual structure, setup, and onboarding came from Abraham, Rethink’s exec team, and of course the wonderful members of the Special Projects team themselves!
To quote the Special Projects program’s announcement post:
The team’s Acting Director Carolyn Footitt is leading this work with Associates María De la Lama and Cristina Schmidt Ibáñez. Their projects generally fall under two areas:
Incubated (internal)–In addition to our research agenda, RP is incubating direct work and other projects that advance our mission. The SP Team works closely with RP staff to launch these initiatives. Once the incubation period for each respective initiative ends, the project will either conclude or spin-off to become an independent organization.
Fiscally sponsored (external)–The SP Team is also providing fee-based fiscal sponsorship and support to projects that are managed by individuals outside of RP. Within this model, the project’s founders maintain autonomy and decision-making authority while we provide them with operational and fiduciary oversight.
The theory of change for the incubated internal projects is mostly, but not entirely, tied to the research work of my team. My team also helps with evaluating which projects ought to be fiscally sponsored.
If you have a project or organization that is highly impactful for the long-term future that you think can benefit from RP ops, please feel free to apply here!
Megaprojects Prioritization Model
We’ve worked on an initial rough prioritization model for potential longtermist megaprojects. The idea is that there are many proposed ideas for megaprojects (and many more ideas in “idea space” that aren’t proposed), but we don’t have enough time to carefully evaluate each of them, so it’d be good to have a model to help with triaging further evaluations. This project turned out to be pretty hard to do well, with initial work led by me, then Max and Renan, and finally Marie and Renan, after Marie joined in August. Now, I think we’re close to having a model that’s serviceable enough to be externally relevant, but unfortunately external conditions have changed enough to decrease its broad usefulness. However, we will likely still publish some version of the model in either January or February.
The actual approach we took was pretty iterative and messy, but roughly, a clean retelling of our process (led by Marie) looked like:
Idea generation: Find pre-existing lists of ideas, have interviews/conversations with experts for idea elicitation, sometimes generate our own ideas. We probably got a few hundred ideas total from this process.
Cleanup/deduping: Removing duplicates, clearly low-impact ideas, ideas that are too vague or incoherent, ideas that aren’t very scalable, etc. We ended up with a list of slightly more than 100 ideas.
Creating a very simple weighted factor model: We came up with a list of factors to consider (impact at scale, tractability, downside risk, our team’s comparative advantage, cost) and relative weightings (a minimal code sketch of this kind of model appears after this list).
Scoring: Marie and Renan, and to a lesser extent other teammates, quickly scored megaproject ideas based on this model.
Further prioritization, research and iteration: We did research speedruns, other deep dives, etc., based on which projects in the model scored highly, had high variance in the scores given by different people, or otherwise appeared interesting.
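To make the scoring and triage steps above concrete, here’s a minimal sketch in Python of the kind of simple weighted factor model we used. The factor names come from this post; the weights, example ideas, ratings, scorer counts, and flagging thresholds are all illustrative placeholders rather than our actual numbers.

```python
# Illustrative weighted factor model for triaging longtermist project ideas.
from statistics import mean, pstdev

# Factors named in the post; the weights here are placeholder assumptions.
WEIGHTS = {
    "impact_at_scale": 0.35,
    "tractability": 0.25,
    "downside_risk": 0.15,  # scored so that higher = less risky
    "team_fit": 0.15,       # our team's comparative advantage
    "cost": 0.10,           # scored so that higher = cheaper
}

def weighted_score(ratings: dict) -> float:
    """Weighted sum of one scorer's 0-10 factor ratings for an idea."""
    return sum(WEIGHTS[factor] * score for factor, score in ratings.items())

# Hypothetical data: each idea is scored independently by multiple people.
ideas = {
    "air sterilization deployment": [
        {"impact_at_scale": 8, "tractability": 6, "downside_risk": 7, "team_fit": 5, "cost": 4},
        {"impact_at_scale": 7, "tractability": 7, "downside_risk": 6, "team_fit": 6, "cost": 5},
    ],
    "early warning forecasting center": [
        {"impact_at_scale": 7, "tractability": 4, "downside_risk": 8, "team_fit": 7, "cost": 6},
        {"impact_at_scale": 9, "tractability": 7, "downside_risk": 5, "team_fit": 7, "cost": 5},
    ],
}

for idea, per_scorer in ideas.items():
    scores = [weighted_score(r) for r in per_scorer]
    avg, spread = mean(scores), pstdev(scores)
    # Both a high average and high disagreement between scorers flag an
    # idea for a ~10h research speedrun (per the triage criteria above).
    flagged = avg >= 6.0 or spread >= 1.0
    print(f"{idea}: mean={avg:.2f}, disagreement={spread:.2f}, speedrun={flagged}")
```

The design intent is cheapness and transparency rather than precision: a weighted sum is quick for the whole team to score, and disagreement between scorers doubles as a signal for where a ~10h speedrun would be most informative.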
Research Speedruns
A large factor in our prioritization process was “research speedruns”: shallow dives into specific megaproject ideas, each consisting of ~10 hours of focused work. This research and subsequent discussions (both internally and with external experts) helped provide clarity into which ideas we should focus subsequent research and other work on.
We originally planned to write 20 research speedruns before revisiting this. We ended up with 13 before the FTX crash. We currently plan to release several of them over the next few months on the Forum, prioritizing research speedruns that are positively impactful to publish. Our primary metrics for whether to publish are:
Direct impact: Do we expect publishing this research speedrun to cause good things to happen? E.g., inspire further research in an important area, or connect founders and funders of promising projects. Notably, some of our speedruns were internally focused (e.g., which cause areas to focus on and how) and are thus less relevant for others to read.
Interestingness/Indirect Impact/Cultural effects: Do we expect our publications to contribute novel ideas to the conversation, or present underrated ideas or ways of thinking among EAs?
Costs and downside risks: Would translating some speedruns into an EA Forum format cost an unusual amount of our time? Do we envision the publication of some speedruns to be net negative (for reasons other than opportunity cost, e.g., because they’re substantially more rushed and more likely to contain mistakes than is usual for RP research reports)?
Deep Dives
Research Fellows at RP often follow up on their research speedruns with more in-depth investigations and planning/setup for potential pilots of different projects that we think are good for reducing existential risk or otherwise making the future go well. Note that the initial pitch for each of these ideas, and a large fraction of the details and content, are not original to us.
We may publish more research or other results for some of these deep dives in the coming months.
Note that not all deep dives are included below; two other deep dives are not currently ready for summarization on the EA Forum, but I’m tentatively optimistic that we’ll make considerable progress on them early next year.
Mass deployment of air sterilization technologies to reduce indoor transmission of airborne pathogens
Research Fellow Jam Kraprayoon investigated the case for mass deployment of air sterilization technologies, particularly ultraviolet-C (UVC). He found it reasonably convincing, and is investigating how best to accelerate research, development, and deployment of such technologies. He worked with 1Day Sooner to release a report on indoor air quality and existential biorisk, with recommendations for funders interested in contributing to this area. Alongside this, they submitted a public comment in response to the EPA’s request for information (RFI) regarding indoor air quality. He also worked with RP’s Survey Team to run a series of surveys looking at US public attitudes towards UVC technologies for reducing illness and pandemic risk.
AI Safety Recruiting Pipeline
In April 2022, former GLT Research Fellow Max Räuker studied the current state of the AI safety recruiting pipeline and got feedback from several AI safety researchers and community builders about potential projects to strengthen the talent pipeline in AI safety. This resulted in two scalable ideas for AI safety talent recruitment that tentatively seem like they could be tractably piloted. Max decided to focus on AI governance/strategy research projects after his fellowship was over, and we do not currently have people championing any pilots here. However, I continue to be fairly excited about projects in this direction, and may be interested in either a) reprioritizing this in 2023 or b) having someone external champion pilots here. While these materials are now slightly outdated, please feel free to message me if you’d be interested in us sharing the non-public reports, drafts, or notes with you.
Infrastructure to Support Independent Researchers
Research Assistant Marie Buhl investigated various potential projects to support independent researchers. She considered and evaluated several different models, including (a) an open-ended fellowship to enable early-career exploration or mid-career pivots, and (b) an organization providing institutional affiliation and ongoing support to people on independent research grants. Overall, she thought this was a promising space for longtermist intervention. The investigation is ongoing, and Marie is currently talking to various stakeholders. She’s keen to be contacted by independent researchers, potential project founders, and other people with an interest in the area, at marie [at] rethinkpriorities [dot] org.
Internally Incubated Projects
Condor Camp
Researcher Renan Araujo’s main project has been to lead Condor Camp, a project that aims to find and engage world-class talent in Brazil for longtermist causes, while also building the longtermist field and supporting EA community building in the country.
Condor Camp ran a pilot 10-day workshop for 13 highly talented Brazilian university students in Cusco, Peru, between July 29 and August 3, 2022. Overall, the pilot seems to have gone well in several ways. The team managed to establish a solid brand among top Brazilian students, consolidate a competent team, mostly satisfy camp participants, RP staff, and guest speakers, and ensure participants learned core concepts. Additionally, many camp participants have since counterfactually engaged in impactful activities: they founded the first EA university group in Brazil at the University of São Paulo (recently accepted into UGAP) and the country’s first AI safety university group (and got actively involved with beginner-level AIS research projects), and several were final-round applicants to EA-related opportunities.
Condor Camp is currently seeking funding. Plans for 2023 include AIS hackathons, the first EA intro virtual program in Portuguese, sessions at EAGx Latin America, workshops with students of international STEM olympiads, and another iteration of the Condor Camp workshop. If you’re interested in supporting the project or learning more about it, feel free to reach out directly to Renan at renan [at] rethinkpriorities [dot] org.
Pathfinder
Together with Claire Boine, Ben co-founded a new project called EA Pathfinder, which aims to help professionals motivated by reason and evidence to find the highest-impact work. This occupied most of Ben’s time from April to September 2022, after which Ben stepped back from the project for personal reasons.
EA Pathfinder is currently piloting different programs for professionals with five or more years of experience who want to increase their EA impact. They are monitoring the impact of each program.
EA Pathfinder’s current programs are:
1) Career advising: up to five sessions per person
2) Coaching to address certain issues like imposter syndrome
3) Research and project mentorship for individuals who want to prove themselves in an EA cause area
4) Peer-support cohort matching
5) Matchmaking with potential employers
EA Pathfinder has served or is currently serving a total of 110 participants.
EA Pathfinder is also organizing a community called the High Impact Talent Ecosystem (HITE), which aims to improve coordination between the people and organizations working in recruiting and meta services supporting talent in EA.
In 2023, EA Pathfinder is aiming to serve 166 more individuals, produce online resources, create an intergenerational mentorship program, and investigate other potential ideas such as a two-year residence program.
EA Pathfinder is actively looking for funding. If you are interested in supporting them, please contact claire [at] eapathfinder [dot] org.
Founder search
I think one of the most important determinants of the success of longtermist projects is having great founders and a great founder-project fit. This is even more important for projects where our team (and RP more broadly) lacks some of the key skills or expertise needed for the project to go well, such that our ability to run our own pilots or advise on the project is limited. To that end, we (primarily me) experimented with finding founders for a number of high-impact projects.
Shelters and Other Civilizational Resilience work
For the last few years, I’ve been excited about the potential of civilizational shelters (aka refuges, aka bunkers) to help humanity survive and navigate a certain subset of global catastrophic risks (GCRs). This year, I tried a variety of directions to accelerate such work. Broadly, I think noticeably less progress has been made on shelters or other civilizational resilience work this year than I hoped or expected. However, there has been more work and success than is apparent from public outputs.
From my end, I wrote some private initial scoping document(s) for civilizational shelters and resilience, reviewed docs from others, and worked on founder search and coordination for a potential project lead.
I also recommended several grants. I recommended a grant to Tereza Fildrova, an architect-in-training, to explore potential careers in EA as well as how to advance shelters and biosafety or civilizational resilience. This culminated in SHELTER Weekend. She also has some cool research that I hope will be published soon.
I also recommended a grant to an anonymous potential founder, who has considerable engineering experience, to explore his fit for pandemic preparedness shelters or civilizational resilience work.
I believe there’s a handful of other civilizational resilience projects in various stages of completion (though all fairly early-stage in absolute terms).
Unfortunately, I do not believe I’ve found teams who are stellar fits for leading or co-leading the full shelters project. I am uncertain whether this is due to a lack of people with relevant experience and skill sets in our community, a lack of sufficient prioritization or time on my end, or just because I overestimate the difficulty of such a project.
In some ways this is fortunate: My current guess is that while it continues to be useful to do the preliminary research and other work to make shelters a shovel-ready project, doing full shelters construction is currently below longtermist EA’s funding bar. This comes from two updates: 1) relatively fast AI timelines, and 2) longtermist EA having significantly less funding (~2 doublings less, i.e., roughly 4x?) than I previously thought at the end of last year.[3]
I believe that there are a number of non-shelter civilizational resilience projects that are cheaper and potentially worth the costs. However, I feel like I’ve done a mediocre-to-poor job of coordinating/ushering in their existence in the past, and I will probably continue to be poor at making them happen in the future. Fortunately, a) I do not believe most of this work is reliant on me, and b) I’m hopeful that other people can pick up the slack here.
I know some subset of EAs think of me as one of the primary people thinking about and coordinating work around civilizational resilience for worst-case non-AI catastrophes. To the extent that this belief is true, please note that I currently think I am unlikely to do much work on this front going forwards.
I hope more work on civilizational resilience will be published in the coming months (this is not currently a major priority for me).
Early Warning Forecasting Center: Farpoint
For some time, I’ve been excited about the idea of an Early Warning Forecasting Center, a focused forecasting agency for detecting future pivotal moments and helping to give EAs (and others) additional warning for such pivotal moments so we can respond to them quickly. I wrote a forum post about this idea in March this year. Alex D responded to it with a lot of enthusiasm.
I think in many ways he’s the ideal founder for such a project (plugged into the biosecurity space, experience with epidemic and cyber threat intelligence, forecasting experience, epidemiology, generally experienced including in management, well-read, etc.). We’ve been meeting weekly for the last several months, fleshing out this idea and what it would mean for Alex to found a new project focused on early warning for pivotal moments that are potentially relevant to existential security.
We’re currently excited about this next project (tentatively named “Farpoint”), with Alex as head and me in a light advising capacity.
We’re currently seeking funding.
I consider this tentatively a positive example in founder search for longtermist entrepreneurial projects, though a) things are still in very early stages, and b) the replicability is of course unclear.
Generalized System for Founder Search
We (I) also attempted to create a more generalized system for founder search and founder-project matching. Nothing very interesting has been done here yet, but if we continue focusing on research on entrepreneurial longtermist projects into 2023, we’ll hopefully have more interesting work to show on that front.
Nanotechnology strategy
Senior Researcher Ben Snodin’s first project with us after joining in February 2022 was finishing up work he’d done at FHI scoping out nanotechnology strategy as a cause area. He wrote up his thoughts on advanced nanotechnology strategy and risks here.
He also worked with Marie Buhl in setting up a database of resources relevant to further work in nanotechnology strategy.
Grants
I had significant grantmaking responsibilities throughout 2022. While this was not officially a part of my RP work, my research at RP was helpful for me to form models relevant to making good grants, and questions that arose during grantmaking were also moderately helpful in informing which specific research directions or questions we felt were important and action-relevant to pursue.
In particular, I’ve been serving as a guest fund manager for EA Funds’ Long-Term Future Fund since January 2022. I’ve devoted significant time to making good grants there. You can read some of my reflections here.
I also worked as a regranter for the FTX Future Fund. We were not supposed to “out” ourselves, originally for status/CoI reasons, but of course this is now much less relevant. I took the regranting responsibilities quite seriously, and recommended 16 different grants to 10 different individuals and groups. I was pretty proud of the grants, and think that they’ve helped unlock quite impactful work. Unfortunately, in light of current events, I think the moral and legal status of those grants is very uncertain, and I’ve unintentionally caused some large work/life changes and probably saddled my regrantees with nontrivial hassle. I feel rather sorry about this.
I’ve tried reaching out to all of my regrantees, and tried to help them emotionally and logistically whenever possible (sidenote: if you’re in this group and haven’t heard from me or haven’t heard enough, please reach out!). I am truly hopeful that things will work out for them and that the projects that are making a large positive impact will continue to do so. I will do my best to help them maximize their impact.
Miscellaneous
We also worked on a number of miscellaneous projects. Some examples: we (especially I) worked on various coordination activities with other effective altruism organizations. Ben Snodin and Renan Araujo led around ten talks and workshops at EA Global and EAGx events, as well as a few more local events. Renan and I helped out on some scattershot high-leverage community-building activities when relevant, including for Boston, Colombia, and Mexico. Marie and I also worked on an initial decomposition of important questions in longtermist nuclear strategy which we sent to a funder.
How Recent Changes in the Funding Landscape Affected Our Team
Recent changes in the EA funding landscape should probably have large ripple effects on our team’s strategy. I’m primarily thinking about a) the large drop in tech stocks across the board since the beginning of this year and b) the FTX crash. While I have not evaluated this rigorously, between the two events, I expect total EA funds since the beginning of this year to have roughly halved twice (a 4x decline). Relative to my previous estimate when I decided on team strategy roughly in Jan-Feb of this year, this obviously means that megaproject implementations should be a lower priority for EA (except maybe for a few of the most robust or highest impact projects). In turn, this puts a relative damper on excitement for further research into megaprojects, as well as capital-intensive pilots.
In recent months, we also received some negative feedback from other sources, for unrelated reasons, on the “megaproject” framing, and we had already internally shifted somewhat away from focusing on researching capital-intensive projects before the FTX crash.
Another way these events could affect our team’s strategy is that there might be specific new research areas or other needs that are especially important and pressing in light of the FTX crash, both in terms of a) new holes in the ecosystem, and b) structural weaknesses it revealed that we may wish to find good ways to fix.
A specific subpoint here is grantmaking: While it was never a central focus in our team strategy, grantmaking played a nontrivial role in our general thinking and theory of change. I expect my grantmaking responsibilities to significantly change in the next year (most likely towards less grantmaking than in 2022, but possibly towards significantly more).
Finally, a reason these events may influence our decision-making is that they provide an opportunity for us to evaluate our progress and reconsider our future plan. I hope this post and resultant discussion can serve as one of the steps in this process of reassessing our strategy.
We have not decided on an updated team strategy (see the “Next Steps” section for our process for updating this). I think it is very plausible that we continue to focus on entrepreneurial longtermist projects as our team’s main research area. I think it is also plausible, though currently unlikely, that we continue to do research into megaprojects, albeit in an updated form. But it is also plausible that we pivot to work in another area.
Next Steps
Reorienting
The team in general, and I in particular, are trying to reorient towards the current situation.[4] We are trying to understand what happened and how it’s relevant to us. I consider this post among the first steps we’re taking in this foray. We’re also meeting and talking to stakeholders to understand how we fit into the current ecosystem, recording our own thoughts, etc. At some point, reorienting will shift from mostly factual notes (like this post) to evaluative questions: are we happy with what we did in 2022? Why or why not? What are our biggest successes? How exactly have we fallen short?
Strategy/Figure out What to Do in 2023
Next, we are going to figure out what to do in 2023. We are exploring some ideas now, both internally and via talking to advisors. We may also canvass the community later. I expect the entire team to be involved in this decision, but I’ll likely make the final call.
Currently, the two most plausible strategies to beat are: 1) continue to do research into new entrepreneurial longtermist projects and 2) fold into RP’s AI Governance and Strategy team and focus on strategy research that can reduce the probability of AI doom.
Publication
Since we are currently in the slower, more deliberative “meta”/“strategizing” phase, it seems like a pretty good time for us to publish a reasonably large fraction of our work, in particular pieces that are interesting and/or hopefully positively impactful for the community to read.
I expect a number of pieces to be published on this forum in early 2023. Stay tuned!
How You Can Help
Here are some ways you can contribute, if you’re interested:
Ideas/other intellectual contributions: If you have great ideas for highly impactful research projects or directions for us to pursue next, I’d be happy to receive pitches! As we publish more work, comments/red-teaming/technical corrections would also be highly appreciated.
Funding: We can productively use funding to increase our runway and do useful research. Unfortunately, as we have not decided on a research strategy, it’s hard to say exactly what your money will buy. You may wish to bet on us given our track record and reasoning process, as displayed in this post. However, I will also understand if you’d rather wait for a more definitive research strategy before offering funding.
Work with us, maybe? If this post or past work by us is interesting to you, please consider filling out our Expression of Interest form! We are not currently hiring, but if our research strategy ends up being usefully scalable and we have sufficient funding, we’d potentially love to see you on board!
I would also appreciate feedback on this post; in particular, let us know if there are aspects of our work you’d be especially keen to hear more about or see more of.
[1] This was a roughly comprehensive list of the outputs and outcomes that seem easy enough to concisely explain and can be mentioned publicly. That said, some of the listed work isn’t yet public or won’t be made public; in those cases, let us know if you’d be interested in more info or in seeing documents. We also did some private or harder-to-explain work that we do not list here.
[2] Note that our internal accounting for “research work” includes fractions of executive team time, including research, management, and communications work that they do.
[3] Note that this is just my own light independent impression that is not backed by talking to large funders or any detailed modeling.
[4] One simplified model for thinking about how to plan well is Observe-Orient-Decide-Act, where I think of the “reorienting” step as akin to “observe + orient,” the strategy step as “decide,” and what we end up doing after the strategizing step as “act” (not listed here).
Thanks for writing this! I find it a really great resource and am glad to see the stuff you/RP are working on. I’d make a minor suggestion to make the topics of all the speedruns public even if you don’t end up releasing them (or default making the topics public and then give a reason for the ones you don’t). I think it’s of general interest (or like at least I’m curious) and also I’d imagine you could run into a sort of sampling problem if the subject of the unpromising or negative ones were just never revealed.
Thanks for your kind words! Re: the last point, this makes sense to me. I’ll talk to the team but I think we’ll default to sharing the topics even if we don’t release them, barring very good reasons not to, like you mentioned.
I attended an AI Safety Field Building conference where we spent like an hour mapping out the space. It would have been so much quicker and saved a lot of people’s time if a map had already existed. Of course the problem is that it’s very hard to keep such maps up to date. I would be enthusiastic if Rethink Priorities were to take responsibility for this and other high-level overview tasks including:
Maintaining an up-to-date list of the various alignment research organisations, their research directions and key assumptions about the problem.
Maintaining a list of demand for various kinds of roles: research leads, researchers, research engineers, ops, etc.
Mapping out the pipeline, such as how many opportunities there are for programs at different levels
Aggregating people’s opinions about the current state of the field and/or major debates
This seems helpful, though I’d guess another team that’s in more frequent contact with AI safety orgs could do this for significantly lower cost, since they’ll be starting off with more of the needed info and contacts.
Agreed! Other groups will be better placed. But I’m not categorically ruling this out: if nobody else appears to be on track for doing this when we’re next in prioritization mode, we might revisit this issue and see whether it makes sense to prioritize it anyway.
FWIW, some resources I find useful in mapping just bullet (1):
AIS.world (along with the other AIS Support infrastructure like AIS.training and this thing)
What is everyone doing (mid-2022, but maybe the most ambitious in scope that I’ve seen)
I’ve found this a useful meta-list of orgs at some points, but it’s more than a year stale
I haven’t looked at AI Watch as much
I agree it’s still a messy space. Although I worry about this failure mode for anyone thinking about adding new standards.
There’s this project that a few people have been working on towards making an overview in Obsidian that seems to be in this vein as well (see the comment, too): aisi.ai/?idea=75
I think this is not exactly what you had in mind, but seemed tangentially relevant so I am sharing in case people have not come across it: https://futureoflife.org/landscape/
This is, from my understanding, mostly about research, and the nodes link to various papers rather than to organizations working on the various topics.
Really grateful for your transparency here, Linch!