The Precipice: Existential Risk and the Future of Humanity is a book by Toby Ord, published on 5 March 2020 by Bloomsbury Publishing.
The book is an extended argument for the conclusion that reducing existential risk is the fundamental challenge of our time. Its title refers to a period of heightened risk that humanity must navigate safely in order to realize its long-term potential. Ord dates the beginning of this period to the Trinity test on 16 July 1945, when the first atomic bomb was detonated.
Connection to longtermism
The Precipice characterizes longtermism as an ethic “especially concerned with the impacts of our actions upon the longterm future”, according to which “our most important role may be how we shape—or fail to shape—that story.”[1] This characterization may suggest that the book’s central thesis simply restates the longtermist thesis. However, while the two theses are related, they are distinct. First, according to Ord, the case for reducing existential risk does not presuppose longtermism, and can be made even if that view is rejected. As he writes, “[o]ne doesn’t have to approach existential risk from [a longtermist] direction” since “there is already a strong moral case just from the immediate effects.”[1] Second, while reducing existential risk is an obvious way to influence the long-term future, there may be other ways of exerting such a lasting influence.
Further reading
Aird, Michael (2020) List of things I’ve written or may write that are relevant to The Precipice, Effective Altruism Forum, April 6.
Alexander, Scott (2020) Book Review: The Precipice, Slate Star Codex, April 2.
Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.
Wiblin, Robert, Arden Koehler & Keiran Harris (2020) Toby Ord on The Precipice and humanity’s potential futures, 80,000 Hours, March 7.
External links
The Precipice. Official website.
Related entries
existential risk | existential security | long-term future | Toby Ord | Trinity | What We Owe the Future
[1] Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, p. 46.
I’m very much aligned with the version of utilitarianism that Bostrom and Ord generally put forth, but a question came up in a conversation regarding this philosophy and its view of sustainability. As a thought experiment: what would be consistent with this philosophy if we discovered that a very clear way to minimize existential risk from some threat X required the genocide of half the population, or of some other significant subset of it?
Hi Jose,
Bostrom and Ord do not put forth any version of utilitarianism. Bostrom isn’t even a consequentialist, let alone a utilitarian. Both authors take moral uncertainty seriously. (Ord defends a version of global consequentialism, but not in the context of arguing for prioritizing existential risk reduction.) Nor does concern for existential risk reduction presuppose a particular moral theory. See the ethics of existential risk.
Separately, the dilemma you raise isn’t specific to existential risk reduction. For example, one can also describe imaginary scenarios in which trillions and trillions of sentient beings exploited for human consumption could be spared lives filled with suffering only if we did something horrendous to innocent people. And all reasonable moral theories, not just utilitarianism, must grapple with such dilemmas.
Maybe it’d be good to link from here to this collection of relevant things I’ve written, since some are shortforms or are on LessWrong (and thus I can’t just give them The Precipice tag).
But I feel squeamish about unilaterally adding mention of my own stuff too much, so I’ll let someone else decide, or maybe return and do it later if it still seems like a good idea to me then.
Just saw this—added.