How do you decide which research areas to focus on and, relatedly, how do you decide how to allocate money to them?
We broadly aim to maximize the cost-effectiveness of our research work, and so we focus on allocating money to the opportunities we think are most cost-effective on the margin.
Given that, it may be surprising that we work in multiple cause areas, but we face some interesting constraints and considerations:
There is significant uncertainty about which priority area is most impactful. RP’s general approach has been that we can scale up multiple high-quality research teams across a variety of cause areas more easily than we can figure out which single cause area we ought to prioritize. That said, we recently hired a Worldview Investigations Team to work much more on the broader question of how to allocate an EA portfolio, and we are also investing more in our own impact assessment. Together, we hope these will give us more insight into how to allocate our work going forward.
There may be diminishing returns to RP focusing on any one priority area.
A large share of our resources is not fungible across these different areas. The marginal opportunity cost of taking restricted funding is pretty low, since we could not easily reallocate that money to other areas even if we were convinced they were higher impact.
Work on any single area can benefit from our working on multiple areas, since each team has much greater access to centralized resources, staff, funding, and productive oversight than it would have if it existed independently and focused solely on that priority. Relationships built in one area could also potentially be useful for work in another.
Working across different priorities allows the organization to build capacity, reputation, and relationships, and to maintain option value for the future.
Thanks for this. I notice that all of these reasons are points in favor of working on multiple causes and seem to neglect considerations that would go in the other direction. And clearly you take those considerations seriously too (e.g., scale and urgency), as you recently decided to focus exclusively on AI within the longtermism team.
Most organizations within EA are relatively small (fewer than 20 people). Why do you think that’s the case, and why is RP different?
I’m not exactly sure, and I think you’d have to ask some of the smaller organizations themselves. My best guess is that scaling organizations is genuinely hard and risky, and I can understand that other organizations may feel they work best, and are most comfortable, staying small. I think RP has been different in a few ways:
Working in multiple cause areas lets us tap into multiple funding sources, increasing the amount of money we can take in. It has also increased the amount of work we want to do and the number of people we want to hire.
By being 100% remote from the beginning, we had a much larger talent pool to tap into. I think we’ve also been more willing to take chances on more junior researchers, which has further broadened our talent pool. This has allowed us to hire more people.
I think we have also just had a general willingness and aspiration to be a big research organization and to take on this risk, rather than intentionally going slow.
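This makes sense. Do you have any explicit intentions for how big you want to get?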
How do you decide whether something belongs to the longtermism department (i.e., whether it’ll affect the long-term future)?
We haven’t had to make too many fine-grained decisions, so this hasn’t come up often enough to merit a clear decision procedure. I think the trickiest decision was what to do with research aimed at understanding and mitigating the negative effects of climate change. The main considerations were questions like “how do our stakeholders classify this work?” and “what is the probability of this issue leading to human extinction within the century?”, and both of those considerations led to climate change work falling into our “global health and development” portfolio.
This year we’ve made an intentional decision to focus nearly all of our longtermist work on AI, due to our assessment of AI risk as unusually large and urgent, even relative to other existential risks. We will revisit this decision in future years, and to be clear, this does not mean we think other people shouldn’t work on non-AI x-risks or on non-x-risk longtermism.
What do you focus on within civilizational resilience?
As noted above, this year we’ve made an intentional decision to focus nearly all of our longtermist work on AI, due to our assessment of AI risk as unusually large and urgent, even relative to other existential risks. We will revisit this decision in future years, and it does not mean we think other people shouldn’t work on non-AI x-risks or on longtermist work not oriented towards existential risk reduction. But it does mean we don’t have any current work on civilizational resilience.
That being said, we have done some work in this area in the past:
Linch did a decent amount of research and coordination work around exploring civilizational refuges, but RP is no longer working on this project.
Jam has previously done work on far-UVC, for example by contributing to “Air Safety to Combat Global Catastrophic Biorisks”.
We co-supported Luisa in writing “What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?” while she was a researcher at both Rethink Priorities and Forethought Foundation.