The timing of labour aimed at reducing existential risk
Crossposted from the Global Priorities Project
Work towards reducing existential risk is likely to happen over a timescale of decades. For many parts of this work, the benefit of that labour is greatly affected by when it happens. This has a large bearing on strategic thinking about what to do now in order to best help the overall existential risk reduction effort. I look at the effects of nearsightedness, course setting, self-improvement, growth, and serial depth, showing that there are competing considerations which make some kinds of labour particularly valuable earlier, while others are more valuable later on. We can thus improve our overall efforts by encouraging more meta-level work on course setting, self-improvement, and growth over the next decade, with more of a focus on object-level research on specific risks in the decades beyond that.
Nearsightedness
Suppose someone considers AI to be the largest source of existential risk, and so spends a decade working on approaches to make self-improving AI safer. It might later become clear that AI was not the most critical area to worry about, or that this part of AI was not the most critical part, or that this work was going to get done anyway by mainstream AI research, or that working on policy to regulate AI research was more important than working on AI itself. In any of these cases she wasted some of the value of her work by doing it now. She couldn't be faulted for lack of omniscience, but she could be faulted for leaving herself unnecessarily at the mercy of bad luck. She could have achieved more by doing her work later, when she had a better idea of what was the most important thing to do.

We are nearsighted with respect to time. The further away in time something is, the harder it is to perceive its shape: its form, its likelihood, the best ways to get purchase on it. This means that work done now on avoiding threats in the far future can be considerably less valuable than the same amount of work done later on. The extra information we have when the threat is up close lets us tailor our efforts to overcome it more accurately.
Other things being equal, this suggests that a given unit of labour directed at reducing existential risk is worth more the later in time it comes.
Course setting, self-improvement & growth
As it happens, other things are not equal. There are at least three major effects which can make earlier labour matter more.

The first of these is if it helps to change course. If we are moving steadily in the wrong direction, we would do well to change our course, and this has a larger benefit the earlier we do so. For example, perhaps effective altruists are building up large resources of specialist labour directed at combating a particular existential risk, when they should be focusing on more general-purpose labour. Switching to the superior course sooner matters more, so efforts to determine the better course and to switch onto it matter more the earlier they happen.
The second is if labour can be used for self-improvement. For example, if you are going to work towards a university degree, it makes sense to do this earlier in your career rather than later, as there is more time in which to use the additional skills. Education and training, both formal and informal, are major examples of self-improvement. Better time management is another, as is gaining political or other influence. However, this category only includes things that create a lasting improvement to your capacities and that require only a small upkeep. We can also think of self-improvement for an organisation. If there is benefit to be had from improved organisational efficiency, it is generally better to get it sooner. A particularly important form is lowering the risk of the organisation or movement collapsing, or of its potential to grow being cut off.
The third is if the labour can be used to increase the amount of labour we have later. There are many ways this could happen, several of which give exponential growth. A simple example is investment. An early hour of labour could be used to gain funds which are then invested. If they are invested in a bank or the stock market, one could expect a few percent real return, letting you buy twice as much labour two or three decades later. If they are invested in raising funds through other means (such as a fundraising campaign) then you might be able to achieve a faster rate of growth, though probably only over a limited number of years until you are using a significant fraction of the easy opportunities.
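To make the compounding concrete, here is a minimal sketch of how long invested funds take to double. The return rates are illustrative assumptions, not claims about actual markets:

```python
# Illustrative sketch: years for invested funds to double at an assumed real return.
def years_to_double(real_return: float) -> int:
    """Count the years of annual compounding needed to double an investment."""
    years, value = 0, 1.0
    while value < 2.0:
        value *= 1 + real_return
        years += 1
    return years

for rate in (0.02, 0.03, 0.05):
    print(f"{rate:.0%} real return: doubles in about {years_to_double(rate)} years")
# Prints roughly 36, 24, and 15 years respectively: a doubling over two or
# three decades at a few percent real return.
```

At a few percent real return, the "twice as much labour two or three decades later" figure falls out directly.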
A very important example of growth is movement building: encouraging other people to dedicate part of their own labour or resources to the common cause, part of which will involve more movement building. This will typically produce exponential improvement, with the potential for double-digit percentage growth until the most easily reached or naturally interested people have become part of the movement, at which point it will start to plateau. An extra hour of labour spent on movement building early on could very well produce a hundred extra hours of labour to be spent later. Note that there might be strong reasons not to build a movement as quickly as possible: rapid growth could involve lowering the signal-to-noise ratio in the movement, or changing its core values, or making it more likely to collapse, and this would have to be balanced against the benefits of growth sooner.
If the growth is exponential for a while but will then spend a long time stuck at a plateau, it might be better in the long term to think of it like self-improvement. An organisation might have been able to raise $10,000 of funds per year after costs before the improvement, and gains the power to raise $1,000,000 of funds per year afterwards. Only before it hits the plateau does it have the exponential structure characteristic of growth.
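A simple way to picture this pattern is logistic growth: near-exponential while the movement is small, flattening as it approaches the pool of easily reached people. A minimal sketch, with entirely hypothetical numbers:

```python
# Minimal sketch of logistic movement growth (all parameters hypothetical).
def one_year_of_growth(members: float, rate: float, capacity: float) -> float:
    """Logistic update: ~exponential while members << capacity, then a plateau."""
    return members + rate * members * (1 - members / capacity)

members = 100.0  # assumed starting size
for year in range(30):
    members = one_year_of_growth(members, rate=0.3, capacity=10_000)
print(f"after 30 years: ~{members:,.0f} members (close to the 10,000 plateau)")
```

Early on, each member recruits at roughly the full 30% rate; as the movement nears capacity, growth stalls, which is when the "self-improvement" framing becomes more apt than the "growth" one.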
Finally, there is the matter of serial depth. Some things require a long succession of stages, each of which must be complete before the next begins. If you are building a skyscraper, you need to build the structure for one storey before you can build the structure for the next. You will therefore want to allow enough time for each of these stages to be completed, and might need to have some people start building soon. Similarly, if a lot of novel and deep research needs to be done to avoid a risk, this might involve such a long pipeline that it could be worth starting it sooner, to avoid the diminishing marginal returns that can come from labour applied in parallel. This effect is fairly common in computation and labour dynamics (see The Mythical Man-Month), but it is the factor I am least certain of here. We obviously shouldn't hoard research labour (or other resources) until the last possible year, so there is a reason based on serial depth to do some of that research earlier. But it isn't clear how many years ahead of time it needs to start getting allocated (examples from the business literature seem to have a timescale of a couple of years at most) or how this compares to the downsides of accidentally working on the wrong problem.
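To see why serial depth caps how late work can start, consider a toy model in the spirit of The Mythical Man-Month. The stage counts and the parallelisation floor below are assumptions for illustration only:

```python
# Toy model of serial depth (all numbers assumed): stages must run in sequence,
# and extra workers can compress each stage only down to some minimum duration.
def completion_years(stages: int, effort_per_stage: float, workers: int,
                     min_years_per_stage: float = 1.0) -> float:
    """Total calendar time when each stage parallelises imperfectly."""
    years_per_stage = max(effort_per_stage / workers, min_years_per_stage)
    return stages * years_per_stage

for workers in (1, 3, 30):
    print(f"{workers} workers: {completion_years(10, 3.0, workers):.0f} years")
# 1 worker: 30 years; 3 workers: 10 years; 30 workers: still 10 years.
# Past a point, parallel labour cannot compress the pipeline, so some of the
# work must simply start early.
```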
Consequences
We have seen that nearsightedness can provide a reason to delay labour, while course setting, self-improvement, growth, and serial depth provide reasons to use labour sooner. In different cases, the relative weights of these reasons will change. The creation of general-purpose resources such as political influence, advocates for the cause, money, or earning potential is especially resistant to the nearsightedness problem, as these have the flexibility to be applied to whatever the most important final steps happen to be. Creating general-purpose resources, or doing course setting, self-improvement, or growth, are thus comparatively better to do earlier. Direct work on the cause is comparatively better to do later on (with a caveat about allowing enough time for the required serial depth).

In the case of existential risk, I think that many of the percentage points of total existential risk lie decades or more in the future. There is quite plausibly more existential risk in the 22nd century than in the 21st. For AI risk, in the recent FHI survey of 174 experts, the median estimate for when there would be a 50% chance of reaching roughly human-level AI was 2040; for the subgroup who are among the 'Top 100' researchers in AI, it was 2050. This gives something like 25 to 35 years before we think most of this risk will occur. That is a long time, and it produces a large nearsightedness problem for conducting specific research now, and a large potential benefit for course setting, self-improvement, and growth. Given a portfolio of labour to reduce risk over that time, it is particularly important to think about moving types of labour towards the times where they have a comparative advantage. If we are trying to convince others to use their careers to help reduce this risk, the best career advice might change over the coming decades: from helping with movement building or course setting, to accumulating more flexible resources, to doing specialist technical work.
The temporal location of a unit of labour can change its value by a great deal. It is quite plausible that, due to nearsightedness, doing specific research now could have less than a tenth the expected value of doing it later, since it could so easily be on the wrong risk, or the wrong way of addressing the risk, or would have been done anyway, or could have been done more easily using tools people later build. It is also quite plausible that using labour to produce growth now, or to point us in a better direction, could produce ten times as much value. It is thus pivotal to think carefully about when we want to have different kinds of labour.
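As a back-of-the-envelope illustration of how these multipliers arise (every number below is an assumption chosen for illustration, not an estimate from the post):

```python
# Toy expected-value comparison (all numbers assumed for illustration).
# Specific research done now pays off only if it targets the right risk,
# takes the right approach, and would not have been done anyway.
p_right_risk = 0.5
p_right_approach = 0.4
p_not_redundant = 0.5
ev_research_now = p_right_risk * p_right_approach * p_not_redundant  # 0.10 units

# One unit of labour spent on growth that yields ten units of later,
# better-aimed labour (assumed multiplier):
ev_growth_now = 10 * 1.0  # later labour largely avoids the nearsightedness discount

print(f"research now: {ev_research_now:.2f}, growth now: {ev_growth_now:.0f}")
# Under these assumptions the gap is a factor of a hundred, combining the
# "less than a tenth" and "ten times as much" figures from the text.
```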
I think that this overall picture is right and important. However, I should add some caveats. We might need to do some specialist research early on in order to gain information about whether the risk is credible, or which parts to focus on, to better help us with course setting. Or we might need to do research early in order to give work on risk reduction enough academic credibility to attract a wealth of mainstream academic attention, thereby achieving vast growth in the labour that will be spent on the research in the future. Some early object-level research will also help with early fundraising and movement building: if things remain too abstract for too long, it will be extremely difficult to maintain a movement. But in these examples, the overall picture is the same. If we want to do early object-level research, it is because of its instrumental effects on course setting, self-improvement, and growth.
The writing of this document and the thought that preceded it are an example of course setting: trying to significantly improve the value of the long-term effort in existential risk reduction by changing the direction we head in. I think there are considerable gains here and, as with other course-setting work, it is typically good to do it sooner. I've tried to outline the major systematic effects that make the value of our labour vary greatly with time, and to present them qualitatively. But perhaps there is a major effect I've missed, or perhaps there are big gains to be had from quantitative models. I think that more research on this would be very valuable.
These are useful considerations, Toby. :)
Other reasons to do (at least some) direct work sooner:
1. In order to build a movement, you have to have something to build the movement around. If you do actually interesting research, you can attract people who are interested in that research. If you just talk about doing research, you attract people who like to talk about research. I really think there's something to be said for just tackling something that looks important, trying to do a good job, and seeing who joins you and where it ends up, rather than thinking meta-meta about how best to go about it for a long time. That said, I also see high value in thinking hard for a long time, but I contend that you need both together, to bounce ideas off each other, rather than only sitting in an armchair for 10 years. This ties into the next point...
2. Doing concrete research can teach you things no amount of abstract theorizing would have. It’s like the philosophy behind agile development: Rather than making a grand plan, try some stuff, see how it works, get acquainted with the situation on the ground, and then figure out where to go next. I think it’s useful to get a little bit of deep knowledge of a topic in addition to more shallow knowledge, in order to calibrate your picture of things. It’s similar to the reason philosophy courses have you actually read Plato and Hobbes rather than just reading other people talk about them. You get a special kind of understanding by seeing things up close.
3. Lots of things could happen between now and later. Your movement might disband. You might lose interest. You might decide you want to spend time on something non-altruism related. And so on. It can be good to take advantage of what you have when you have it.
Finally, a point that can go either way depending on the circumstances:
4. Comparative advantage: If you’re an awesome AI researcher, you should probably do direct AI work, not movement-building, and the opposite if you’re an awesome evangelist.
Sorry, I see you already mentioned a few of these points in the piece.
Yes, I think part of the reason to get hands-on is instrumental, but I think the direct value of doing so is relevant too. Eventually someone has to do the work, and while I do think the value of an EA’s labor is often higher than that of other smart people, I don’t think it’s vastly higher. At some point, somebody needs to do the work. I think it’s often good to try some stuff now, see how the situation looks, and then keep working on the more promising areas. That investigation work is not wasted if it’s shared publicly. As long as you don’t get mired too long in a highly narrow focus, you should be ok.
I recently shared a link to this piece[1] in the EA Newsletter (in the “Timeless Classics” section). The post had come up in some conversations I was having about how to think about AI timelines, and I also happened to come across the newer Twitter thread about it.
Cross-posting my brief summary here in case someone is interested (or wants to point out how it might be wrong :)).
[1] Although I went with a non-Forum link.
“25 to 35 years before we think most of this risk will occur. That is a long time”
Is it really? Another reason for doing direct work sooner is that if the amount of AI safety work being performed is growing, then by working sooner, you will be able to do a larger fraction of the total.
E.g. if you think that AI risks might arrive in 10 or 50 years, and you think that a lot of AI safety research is going to happen after 20 years, then your relative contribution may be larger if AI arrives in 10 years, making it good to research soon.
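A toy version of this argument (the field totals below are assumptions purely for illustration):

```python
# Toy model (all totals assumed): a fixed unit of work done now is a larger
# share of the field in worlds where AI arrives early and the field is small.
total_safety_work_by_arrival = {10: 100.0, 50: 10_000.0}  # assumed person-years

my_work = 1.0  # one person-year contributed now
for arrival_years, field_total in total_safety_work_by_arrival.items():
    print(f"AI in {arrival_years} years: my share is {my_work / field_total:.2%}")
# 1.00% vs 0.01%: early work matters most in short-timeline worlds, which is
# one reason to front-load it as a hedge.
```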
“by working sooner, you will be able to do a larger fraction of the total.”
You mean because of the diminishing returns to this work? If that’s what you mean, I’d respond that by grabbing the low-hanging fruit you leave less low-hanging fruit to others. This makes their contributions less effective. These effects should cancel out.
A different case would be if the amount of AI safety work being done increases as a function of the work that has already been done (rather than as a function of time or of general AI progress). Then you would expect a logarithmic/exponential increase of AI safety work over time. In this case, grabbing the low-hanging fruit sooner would shift progress in the field forward more than if you contributed later, as you said.
I don’t think this is the case for AI safety research, though it could be the case for a technology like cultured meat, for example.
I didn’t quite understand your example though, so this might be a misunderstanding. I guess what you mean is that for a risk where we might be in a dangerous phase for longer (e.g. syn bio), the safety work should be done sooner, but it may mostly get done after the risk arrives?
That would be true. But the point would remain that work done decades before the risk appears has lower value.
If you thought that AI could arrive in 10 years and the safety work would only get done in 20 years, that's a reason to do the work more quickly, of course. But I don't think that's actually what you mean?