The timing of labour aimed at reducing existential risk

Crossposted from the Global Priorities Project

Work towards reducing existential risk is likely to happen over a timescale of decades. For many parts of this work, the benefits of that labour are greatly affected by when it happens. This has a large effect when it comes to strategic thinking about what to do now in order to best help the overall existential risk reduction effort. I look at the effects of nearsightedness, course setting, self-improvement, growth, and serial depth, showing that there are competing considerations which make some parts of labour particularly valuable earlier, while others are more valuable later on. We can thus improve our overall efforts by encouraging more meta-level work on course setting, self-improvement, and growth over the next decade, with more of a focus on the object-level research on specific risks to come in decades beyond that.


Suppose someone considers AI to be the largest source of existential risk, and so spends a decade working on approaches to make self-improving AI safer. It might later become clear that AI was not the most critical area to worry about, or that this part of AI was not the most critical part, or that this work was going to get done anyway by mainstream AI research, or that working on policy to regulate research on AI was more important than working on AI. In any of these cases she wasted some of the value of her work by doing it now. She couldn’t be faulted for lack of omniscience, but she could be faulted for making herself unnecessarily at the mercy of bad luck. She could have achieved more by doing her work later, when she had a better idea of what was the most important thing to do.

We are nearsighted with respect to time. The further away in time something is, the harder it is to perceive its shape: its form, its likelihood, the best ways to get purchase on it. This means that work done now on avoiding threats in the far future can be considerably less valuable than the same amount of work done later on. The extra information we have when the threat is up close lets us more accurately tailor our efforts to overcome it.

Other things being equal, this suggests that a given unit of labour directed at reducing existential risk is worth more the later in time it comes.

Course setting, self-improvement & growth

As it happens, other things are not equal. There are at least three major effects which can make earlier labour matter more.

The first of these is if it helps to change course. If we are moving steadily in the wrong direction, we would do well to change our course, and this has a larger benefit the earlier we do so. For example, perhaps effective altruists are building up large resources in terms of specialist labour directed at combatting a particular existential risk, when they should be focusing on more general purpose labour. Switching to the superior course sooner matters more, so efforts to determine the better course and to switch onto it matter more the earlier they happen.

The second is if labour can be used for self-improvement. For example, if you are going to work to get a university degree, it makes sense to do this earlier in your career rather than later, as there is more time to be using the additional skills. Education and training, both formal and informal, are major examples of self-improvement. Better time management is another, and so is gaining political or other influence. However, this category only includes things that create a lasting improvement to your capacities and that require only a small upkeep. We can also think of self-improvement for an organisation. If there is benefit to be had from improved organisational efficiency, it is generally better to get this sooner. A particularly important form is lowering the risk of the organisation or movement collapsing, or cutting off its potential to grow.

The third is if the labour can be used to increase the amount of labour we have later. There are many ways this could happen, several of which give exponential growth. A simple example is investment. An early hour of labour could be used to gain funds which are then invested. If they are invested in a bank or the stock market, one could expect a few percent real return, letting you buy twice as much labour two or three decades later. If they are invested in raising funds through other means (such as a fundraising campaign) then you might be able to achieve a faster rate of growth, though probably only over a limited number of years until you are using a significant fraction of the easy opportunities.
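The compounding claim above can be checked with a small sketch. The rates and horizons below are illustrative assumptions, not figures from the text:

```python
# Illustration of compounding: funds invested at a modest real return.
# The 3% and 5% rates and 20/30-year horizons are assumed for illustration.

def growth_factor(rate: float, years: int) -> float:
    """Multiple by which invested funds grow at a fixed annual real rate."""
    return (1.0 + rate) ** years

for rate in (0.03, 0.05):
    for years in (20, 30):
        print(f"{rate:.0%} over {years} years: x{growth_factor(rate, years):.2f}")
```

At a 3% real return the factor is about 1.8 after 20 years and about 2.4 after 30, which matches the claim that a few percent real return lets you buy roughly twice as much labour two or three decades later.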

A very important example of growth is movement building: encouraging other people to dedicate part of their own labour or resources to the common cause, part of which will involve more movement building. This will typically have an exponential improvement with the potential for double-digit percentage growth until the most easily reached or naturally interested people have become part of the movement, at which point it will start to plateau. An extra hour of labour spent on movement building early on could very well produce a hundred extra hours of labour to be spent later. Note that there might be strong reasons not to build a movement as quickly as possible: rapid growth could involve lowering the signal-to-noise ratio in the movement, or changing its core values, or making it more likely to collapse, and this would have to be balanced against the benefits of growth sooner.

If the growth is exponential for a while but will spend a lot of time stuck at a plateau, it might be better in the long term to think of it like self-improvement. An organisation might have been able to raise $10,000 of funds per year after costs before the improvement, and $1,000,000 per year afterwards; only before it hits the plateau does it have the exponential structure characteristic of growth.
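The exponential-then-plateau shape described above can be sketched with a simple logistic model. The starting size, growth rate, and capacity below are all assumed numbers, chosen only to show the shape:

```python
# A toy logistic model of movement growth: near-exponential while the
# movement is small, flattening as the pool of easily reached people is
# exhausted. All parameters are illustrative assumptions.

def logistic_step(size: float, rate: float, capacity: float) -> float:
    """Advance one year: growth slows as size approaches capacity."""
    return size + rate * size * (1.0 - size / capacity)

size = 100.0  # assumed initial membership
for year in range(1, 41):
    size = logistic_step(size, rate=0.2, capacity=10_000.0)
    if year % 10 == 0:
        print(f"year {year}: {size:,.0f} members")
```

Early on each year adds close to the full 20%, so an hour of movement building is highly leveraged; near the capacity the same effort adds almost no one, which is why the text suggests treating a plateaued improvement more like a one-off gain in capacity.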

Finally, there is a matter of serial depth. Some things require a long succession of stages, each of which must be complete before the next begins. If you are building a skyscraper, you will need to build the structure for one story before you can build the structure for the next. You will therefore want to allow enough time for each of these stages to be completed, and might need to have some people start building soon. Similarly, if a lot of novel and deep research needs to be done to avoid a risk, this might involve such a long pipeline that it could be worth starting it sooner to avoid the diminishing marginal returns that might come from labour applied in parallel. This effect is fairly common in computation and labour dynamics (see The Mythical Man-Month), but it is the factor that I am least certain of here. We obviously shouldn’t hoard research labour (or other resources) until the last possible year, and so there is a reason based on serial depth to do some of that research earlier. But it isn’t clear how many years ahead of time it needs to start getting allocated (examples from the business literature seem to have a time scale of a couple of years at most) or how this compares to the downsides of accidentally working on the wrong problem.


We have seen that nearsightedness can provide a reason to delay labour, while course setting, self-improvement, growth, and serial depth provide reasons to use labour sooner. In different cases, the relative weights of these reasons will change. The creation of general purpose resources such as political influence, advocates for the cause, money, or earning potential is especially resistant to the nearsightedness problem, as such resources have more flexibility to be applied to whatever the most important final steps happen to be. Creating general purpose resources, or doing course setting, self-improvement, or growth, is thus comparatively better to do in the earlier times. Direct work on the cause is comparatively better to do later on (with a caveat about allowing enough time for the required serial depth).

In the case of existential risk, I think that many of the percentage points of total existential risk lie decades or more in the future. There is quite plausibly more existential risk in the 22nd century than in the 21st. For AI risk, in the recent FHI survey of 174 experts, the median estimate for when there would be a 50% chance of reaching roughly human-level AI was 2040. For the subgroup of those who are part of the ‘Top 100’ researchers in AI, it was 2050. This gives something like 25 to 35 years before we think most of this risk will occur. That is a long time, and it will produce a large nearsightedness problem for conducting specific research now and a large potential benefit for course setting, self-improvement, and growth. Given a portfolio of labour to reduce risk over that time, it is particularly important to think about moving types of labour towards the times where they have a comparative advantage. If we are trying to convince others to use their careers to reduce this risk, the best career advice might change over the coming decades from helping with movement building or course setting, to accumulating more flexible resources, to doing specialist technical work.

The temporal location of a unit of labour can change its value by a great deal. It is quite plausible that due to nearsightedness, doing specific research now could have less than a tenth the expected value of doing it later, since it could so easily be on the wrong risk, or the wrong way of addressing the risk, or would have been done anyway, or could have been done more easily using tools that people later build, and so on. It is also quite plausible that using labour to produce growth now, or to point us in a better direction, could produce ten times as much value. It is thus pivotal to think carefully about when we want to have different kinds of labour.

I think that this overall picture is right and important. However, I should add some caveats. We might need to do some specialist research early on in order to gain information about whether the risk is credible or which parts to focus on, to better help us with course setting. Or we might need to do research early in order to give research on risk reduction enough academic credibility to attract a wealth of mainstream academic attention, thereby achieving vast growth in terms of the labour that will be spent on the research in the future. Some early object-level research will also help with early fundraising and movement building: if things remain too abstract for a long time, it would be extremely difficult to maintain a movement. But in these examples, the overall picture is the same. If we want to do early object-level research, it is because of its instrumental effects on course setting, self-improvement, and growth.

The writing of this document and the thought that preceded it are an example of course setting: trying to significantly improve the value of the long-term effort in existential risk reduction by changing the direction we head in. I think there are considerable gains here, and as with other course setting work, it is typically good to do it sooner. I’ve tried to outline the major systematic effects that make the value of our labour vary greatly with time, and to present them qualitatively. But perhaps there is a major effect I’ve missed, or perhaps there are big gains to be had from using quantitative models. I think that more research on this would be very valuable.