Longtermist EA needs more Phase 2 work
I think there is an issue with the community-wide allocation of effort. A very large proportion of our effort goes into preparation work, setting the community up for future successes; and very little goes into external-focused actions which would be good even if the community disappeared. I’ll talk about why I think this is a problem (even though I love preparation work), and what types of things I hope to see more of.
Phase 1 and Phase 2
A general strategy for doing good things:
Phase 1: acquire resources and work out what to do
Phase 2: spend down the resources to do things
Note that Phase 2 is the bit where things actually happen. Phase 1 is a necessary step, but on its own it has no impact: it is just preparation for Phase 2.
To understand if something is Phase 2 for the longtermist EA community, we could ask “if the entire community disappeared, would the effects still be good for the world?”. For things which are about acquiring resources — raising money, recruiting people, or gaining influence — the answer is no. For much of the research that the community does, the path to impact is either by using the research to gain more influence, or having the research inform future longtermist EA work — so the answer is again no. However, writing an AI alignment textbook would be useful to the world even absent our communities, so would be Phase 2. (Some activities live in a grey area — for example, increasing scope sensitivity or concern for existential risk across broad parts of society.)
It makes sense to frontload our Phase 1 activities, but we do want to also do Phase 2 in parallel for several reasons:
Doing enough Phase 2 work helps to ground our Phase 1 work by ensuring that it’s targeted at making the Phase 2 stuff go well
Moreover, we can benefit from the better feedback loops that Phase 2 work usually has
We can’t pivot (orgs, careers) instantly between different activities
A certain amount of Phase 2 helps bring in people who are attracted by demonstrated wins
We don’t know when the deadline for crucial work is (and some of the best opportunities may only be available early) so we want a portfolio across time
So the picture should look something like this:
I’m worried that it does not. We aren’t actually doing much Phase 2 work, and we also aren’t spending that much time thinking about what Phase 2 work to be doing. (Although I think we’re slowly improving on these dimensions.)
Problem: We are not doing Phase 2 work
When we look at the current longtermist portfolio, there’s very little Phase 2 work[1]. A large majority of our effort is going into acquiring more resources (e.g. campus outreach, or writing books), or into working out what the-type-of-people-who-listen-to-us should do (e.g. global priorities research).
This is what we could call an inaction trap. As a community we’re preparing but not acting. (This is a relative of the meta-trap, though not the same thing, since we do have a whole bunch of object-level work, e.g. on AI.)
How does AI alignment work fit in?
Almost all AI alignment research is Phase 1 — on reasonable timescales it’s aiming to produce insights about what future alignment researchers should investigate (or to gain influence for the researchers), rather than producing things that would leave the world in a better position even if the community walked away.
But if AI alignment is the crucial work we need to be doing, and it’s almost all Phase 1, could this undermine the claim that we should increase our focus on Phase 2 work?
I think not, for two reasons:
Lots of people are not well suited to AI alignment work, but there are many other things they could productively be working on, even if AI is the major determinant of the future (see below)
Even within AI alignment, I think an increased focus on “how does this end up helping?” could make Phase 1 work more grounded and less likely to accidentally be useless
Problem: We don’t really know what Phase 2 work to do
This may be a surprising statement. Culturally, EA encourages a lot of attention on what actions are good to take, and individuals talk about this all the time. But I think a large majority of the discussion is about relatively marginal actions — what job should an individual take; what project should an org start. And these discussions often relate mostly to Phase 1 goals, e.g. How can we get more people involved? Which people? What questions do we need to understand better? Which path will be better for learning, or for career capital?
It’s still relatively rare to have discussions which directly assess different types of Phase 2 work that we could embark on (either today or later). And while there is a lot of research which has some bearing on assessing Phase 2 work, a large majority of that research is trying to be foundational, or to provide helpful background information.
(I do think this has improved somewhat in recent times. I especially liked this post on concrete biosecurity projects. And the Future Fund project list and project ideas competition contain a fair number of sketch ideas for Phase 2 work.)
Nonetheless, working out what to actually do is perhaps the central question of longtermism. I think what we could call Phase 1.5 work — developing concrete plans for Phase 2 work and debating their merits — deserves a good fraction of our top research talent. My sense is that we’re still significantly undershooting on this.[2]
Engaging in this will be hard, and we’ll make lots of mistakes. I certainly see the appeal of keeping to the foundational work where you don’t need to stick your neck out: it seems more robust to gradually build the edifice of knowledge we can have high confidence in, or grow the community of people trying to answer these questions. But I think that we’ll get to better answers faster if we keep on making serious attempts to actually answer the question.
Towards virtuous cycles?
Since Phase 1.5 and Phase 2 work are complements, when we’re underinvested in both, the marginal analysis can suggest that neither is that worthwhile — why implement ideas that suck? And why invest in coming up with better ideas if nobody will implement them? But as we get more analysis of what Phase 2 work is needed, it should be easier for people to actually try these ideas out. And as more people dive into implementation, more people should start thinking carefully about what Phase 2 work is actually helpful.
Plus, hopefully the Phase 2 work will actually just make the world better, which is kind of the whole thing we’re trying to do. And better yet, the more we do it, the more we can honestly convey to people that this is the thing we care about and are actually doing.
To be clear, I still think that Phase 1 activity is great. I think it’s correct that longtermist EA has made it a major focus for now — a much larger one than would be appropriate in many other domains. But I think we’re noticeably above the optimum at the moment.[3]
[1] When I first drafted this article 6 months ago I guessed <5%. I think it’s increased since then; it might still be <5% but I’d feel safer saying <10%, which I think is still too low.
[2] This is a lot of the motivation for the exercise of asking “what do we want the world to look like in 10 years?”; especially if one excludes dimensions that relate to the future success of the EA movement, it’s prompting for more Phase 1.5 thinking.
I’ve written this in the first person, but as is often the case my views are informed significantly by conversations with others, and many people have directly or indirectly contributed to this. I want to especially thank Anna Salamon and Nick Beckstead for helpful discussions; and Raymond Douglas for help in editing.