‘The Precipice’ Book Review

Epistemic status: I recently read Toby Ord's "The Precipice" as part of a local EA book club. I'm very convinced by EA arguments on global poverty and animal welfare, but less so on the long-term future. I decided to write my thoughts down as a book-review-style post. I'd like to improve my blog writing, possibly as a way to do EA outreach. I would appreciate any feedback on my writing, as well as on the points I've made.


Toby Ord rates the probability of human extinction in the next century as 1 in 6. This probability assumes radical collective action by humanity to avoid extinction; without such action, Ord places the risk at 1 in 3. These numbers are not picked out of thin air: half of The Precipice consists of appendices detailing analysis by some of the leading experts in the field of existential risk. Such numbers are shocking, especially when stated in such a calculated manner by a member of the mainstream academic establishment (Ord is a senior research fellow at Oxford University's Future of Humanity Institute). Ord's manner and background are a big change from the more radical groups who talk of human extinction – be that Extinction Rebellion activists, 1960s nuclear disarmament protesters or even cult leaders proselytising an upcoming apocalypse.

While Ord's numbers may be radical, his solutions are not. He warns specifically about what those who worry about existential risk should not do: "don't act unilaterally", "don't act without integrity", "don't go on about it". Probably his most controversial policy recommendation is his endorsement of a form of world government to coordinate the actions needed for humanity to avoid extinction. Such a recommendation will not be popular in the current political climate of nationalist protectionism and increasing scepticism of international institutions.

Ord's chief reason for worrying about human extinction is that it would wipe out our 'future potential'. This argument will resonate most deeply with people who see human existence as a positive, flourishing thing that must be preserved at all costs. But the end of humanity would also mean an end to a lot of suffering – both the suffering of humans and the suffering humans inflict on animals. For people who are most concerned with reducing suffering, Ord may have to work to get them on board with his vision of a positive 'future potential' that must be protected at all costs.

There is still much in Ord's predictions that should give such people cause for concern, because any of the risks Ord describes – from a future planet made unlivable by climate change to a killer engineered pandemic – would bring about suffering on an unimaginable scale. But for Ord this is not the main point. He goes to great lengths to emphasise that his focus is not on merely catastrophic risks but on existential risks – those that would completely wipe out the future of humanity. For Ord, an event that would wipe out 100% of the human population is much worse than one that would wipe out only 99% of humanity, because the former would destroy our 'future potential'.

A particular problem Ord and the 'future protectionists' face is that their vision of a positive future for humanity is poorly defined. This ambiguity about what a flourishing future humanity could look like is probably an intentional feature of Ord's writing, to avoid creating a polarising or one-sided vision of humanity's potential. Indeed, Ord writes of not wanting to lock our current values into the future, as we may have a better moral understanding later on. While this open-minded approach to the future is commendable, it does make it hard for the reader to imagine what we're fighting to save. How do we fight to preserve humanity's potential if we can't imagine what that potential might look like? Without a shared vision, it may be hard for Ord to redirect a reader's attention from the more emotionally salient problems of the current day, and to create the sense of urgency that Ord argues we need if we are to avoid our own downfall.

Another problem with the idea of 'protecting humanity's potential' is touched upon briefly, under a section titled 'Population Ethics'. The problem can be illustrated with the classic sci-fi time-travel paradox: the main character goes back in time, inadvertently interferes in his parents' budding romance, and prevents himself from coming into existence. A similar line of thought can be carried forward from the present: our actions now may define who does and doesn't come into existence. How do we think about ethics regarding people who don't even exist yet?

One solution to this problem is to say that an action can only have moral value if it affects someone – and since future generations don't exist (yet), we shouldn't worry about the effects of our actions on them. This is known as the 'person-affecting' view in population ethics. Even if the reader has not explicitly formed her views in this way, she might find it hard to empathise with humans who haven't even been born yet, let alone those who may or may not be born for thousands of years. Favouring the present in this way makes perfect sense when thinking about money: having £100 today is more useful than having £100 in a year's time, because you can invest the money you have today and earn interest on it. But Ord argues that this kind of 'temporal discounting' should not be applied to morality, and that all lives are equally valuable no matter where in time they stand. Getting a reader to overcome this sense of antipathy towards people who do not yet exist may be one of the biggest challenges Ord's philosophy faces. It's another reason why a reader may prioritise improving the well-being of current beings over fighting against the possible non-existence of future generations.

One of the biggest surprises of the book is the discrepancy between naturally occurring risks and human-caused risks. Ord rates natural risks as having a 1 in 10,000 chance of causing human extinction in the next century, while he puts human-created risks at between 1 in 6 and 1 in 3. Given this, you might think that Ord would advise applying the brakes on human technological progress. But Ord is reluctant to advocate slowing technological progress, which seems incredibly risky when contrasted with his own view on the dangers of advanced AI: he rates the risk of humanity being wiped out by a future superintelligent AI at 1 in 10. Ord rightly points out that halting or slowing human technological progress may be difficult, as it would require almost everyone in the world to refrain from developing technology. But he has perhaps underestimated here the strength of feeling in the recent popular movement against tech giants and social media companies. This is not a luddite movement, and you don't have to be a technophobe to think that advocating the slowing of technological development could be wise.

But Ord writes that it may not be advisable, for it would involve curtailing our future potential. Here again, the slippery concept of 'humanity's potential' appears. Humanity's potential, it seems, relies on technological innovation. And this is perhaps the central moral concern of The Precipice – the idea that humanity should reach its 'full potential'. Ord writes a clear outline of the risks that we as a species face, and begins to sketch some of the steps we can take to preserve our future. But instilling the urgency to do so may require another type of writing – that of science fiction, of more creative visionaries who are willing to paint in vivid detail a picture of what a flourishing human future could be.