My review of Tom Chivers’ review of Toby Ord’s The Precipice
I thought The Precipice was a fantastic book; I’d highly recommend it. And I agree with a lot about Chivers’ review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I’ve also heard good things about Chivers’ own book.)
But there are three parts of Chivers’ review that seem to me like they’re somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I’d guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don’t expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
I’ll now quote and comment on the specific parts of Chivers’ review that led to that view of mine.
An alleged nuclear close call
Firstly, in my view, there are three flaws with the opening passage of the review:
Humanity has come startlingly close to destroying itself in the 75 or so years in which it has had the technological power to do so. Some of the stories are less well known than others. One, buried in Appendix D of Toby Ord’s splendid The Precipice, I had not heard, despite having written a book on a similar topic myself. During the Cuban Missile Crisis, a USAF captain in Okinawa received orders to launch nuclear missiles; he refused to do so, reasoning that the move to DEFCON 1, a war state, would have arrived first.
Not only that: he sent two men down the corridor to the next launch control centre with orders to shoot the lieutenant in charge there if he moved to launch without confirmation. If he had not, I probably would not be writing this — unless with a charred stick on a rock.
First issue: Toby Ord makes it clear that “the incident I shall describe has been disputed, so we cannot yet be sure whether it occurred.” Ord notes that “others who claimed to have been present in the Okinawa missile bases at the time” have since challenged this account, although there is also “some circumstantial evidence” supporting the account. Ultimately, Ord concludes “In my view this alleged incident should be taken seriously, but until there is further confirmation, no one should rely on it in their thinking about close calls.” I therefore think Chivers should’ve made it clear that this is a disputed story.
Second issue: My impression from the book is that, even in the account of the person claiming the story is true, the two men sent down the corridor did not turn out to be necessary to avert the launch. (That said, the book isn’t explicit on the point, so I’m unsure.) Ord writes that Bassett “telephoned the Missile Operations Centre, asking the person who radioed the order to either give the DEFCON 1 order or issue a stand-down order. A stand-down order was quickly given and the danger was over.” That is the end of Ord’s retelling of the account itself (as opposed to his discussion of the evidence for and against it).
Third issue: I think it’s true that, if a nuclear launch had occurred in that scenario, a large-scale nuclear war probably would’ve occurred (though it’s not guaranteed, and it’s hard to say). And if that had happened, it seems technically true that Chivers probably wouldn’t have written this review. But I think that’s primarily because history would’ve just unfolded very, very differently. Chivers seems to imply it’s because civilization probably would’ve collapsed, and done so so severely that even technologies such as pencils would be lost, and that they’d still be lost all these decades on (such that, if he were writing this review, he’d do so with “a charred stick on a rock”).
This may seem like me taking a bit of throwaway rhetoric or hyperbole too seriously, and that may be so. But I think among the key takeaways of the book were vast uncertainties around whether certain events would actually lead to major catastrophes (e.g., would a launch lead to a full-scale nuclear war?), whether catastrophes would lead to civilizational collapse (e.g., how severe and long-lasting would the nuclear winter be, and how well would we adapt?), how severe collapses would be (e.g., to pre-industrial or pre-agricultural levels?), and how long-lasting collapses would be (from memory, Ord seems to think recovery is in fact fairly likely).
So I worry that a sentence like that one makes the book sound somewhat alarmist, doomsaying, and naive/simplistic, whereas in reality it seems to me quite nuanced and open about the arguments for why existential risk from certain sources may be “quite low”—and yet still extremely worth attending to, given the stakes.
To be fair, or to make things slightly stranger, Chivers does later say:
Perhaps surprisingly, [Ord] doesn’t think that nuclear war would have been an existential catastrophe. It might have been — a nuclear winter could have led to sufficiently dreadful collapse in agriculture to kill everyone — but it seems unlikely, given our understanding of physics and biology.
(Also, as an incredibly minor point, I think the relevant appendix was Appendix C rather than D. But maybe that was different in different editions or in an early version Chivers saw.)
“Numerically small”
Secondly, Chivers writes:
[Ord] points out that although the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small, the outcome of the latter scenario would be vastly worse, because it shuts down humanity’s future.
I don’t recall Ord ever saying anything like the claim that the death of 1 per cent of the population would be “numerically small”. Ord very repeatedly emphasises and reminds the reader that something really can count as deeply or even unprecedentedly awful, and well worth expending resources to avoid, even if it’s not an existential catastrophe. This seems to me a valuable thing to do; otherwise the x-risk community could easily be seen as coldly dismissive of any sub-existential catastrophes. (Plus, such catastrophes really are very bad and well worth expending resources to avoid—this is something I would’ve said anyway, but it seems especially pertinent in the current pandemic.)
I think saying “the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small” cuts against that goal, and again could paint Ord as more simplistic or extremist than he really is.
“Blowing ourselves up”
Finally (for the purpose of my critiques), Chivers writes:
We could live for a billion years on this planet, or billions more on millions of other planets, if we manage to avoid blowing ourselves up in the next century or so.
To me, “avoid blowing ourselves up” again sounds quite informal or naive or something like that. It doesn’t leave me with the impression that the book will be a rigorous and nuanced treatment of the topic. Plus, Ord isn’t primarily concerned with us “blowing ourselves up”—the specific risks he sees as the largest are unaligned AI, engineered pandemics, and “unforeseen anthropogenic risk”.
And even in the case of nuclear war, Ord is quite clear that it’s the nuclear winter that’s the largest source of existential risk, rather than the explosions themselves (though of course the explosions are necessary for causing such a winter). In fact, Ord writes: “While one often hears the claim that we have enough nuclear weapons to destroy the world many times over, this is loose talk.” (And he explains why this is loose talk.)
So again, this seems like a case where Ord actively separates his clear-headed analysis of the risks from various naive, simplistic, alarmist ideas that are somewhat common among some segments of the public, but where Chivers’ review makes it sound (at least to me) like the book will match those sorts of ideas.
All that said, I should again note that I thought the review did a lot right. In fact, I have no quibbles at all with anything from that last quote onwards.
This was an excellent meta-review! Thanks for sharing it.
I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don’t know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field’s progress.)
Agreed.
These often seem to be examples of hedge drift, and their potential consequences seem like examples of memetic downside risks.