The Precipice: a risky review by a non-EA

At the request of Fernando Moreno of EA Brazil, Leandro Bellato, who is not from the EA movement, kindly took the time to read Ord’s latest book and write the following.

Leandro wrote it in Brazilian Portuguese. Fernando Moreno translated it and Gavin Taylor reviewed the translation before it was published on the Forum.

The original publication can be read here.


The Precipice: a risky review

Fiction, from literature to cinema, by way of the mythical eschatology (the field concerned with the final destiny of humankind) of various religions, often takes up the theme of the ever-closer end of humanity (or of all life on Earth). Several groups of people interested in this subject also frequently advocate the idea that time is running out and that we will soon face the unavoidable end of humanity.

We have plenty of reasons to consider this topic. Theological eschatologies offer us a huge collection of means and reasons why humanity would soon find its damnation, but most of us no longer take such things seriously, at least not at an institutional level. Secular eschatologies, however, offer us an equally rich (and somewhat more palpable) collection of means and motives by which humanity would soon find its end.

Here Toby Ord, an Australian ethicist who teaches at Oxford, writes an interesting book on how to weigh up (and calculate) the risks that surround humanity and that could interrupt humanity’s cosmic journey. The Precipice: Existential Risk and the Future of Humanity is daring and provocative, well written and, despite the title, full of bittersweet optimism.

Ord presents himself as a philosopher by education, specializing in ethics, and very much interested in preventing, in an efficient and reasonable way, the calamities that may devastate humanity. Although he briefly lists some of the important ills that make the lives of millions of people miserable, he quickly points out the progress already achieved and the expectation of eradicating hunger, misery and certain diseases in the not so distant future. He stresses that there is still much to be done and that much of the progress we have already achieved could quickly be undone by possible political or economic crises.

Years ago, however, Ord set aside his practical efforts to improve the present world. He now invests his efforts in understanding the risks that threaten the whole future of humanity and how they can be mitigated. The book presents the author’s view of why the subject matters and what is at stake, lists and comprehensively evaluates a vast catalogue of risks, and leaves several dozen pages to digress about the distant future of humankind.

After the introduction, in which Ord presents himself, discusses his personal motivations, and very briefly summarizes what the book will present and why, we are faced with a work divided into three parts, followed by a list of uninviting appendices that are much less interesting than the main text. The appendices lay out exhaustive and rigorous arguments that are taken as settled matters in the main text. He did well to move much of this discussion, which is of little value to most readers, especially the ethical foundations, into appendices at the end of the main body.

Part 1: what we have to lose (the stakes)

In this first part, Ord presents his central argument about the pivotal period we may be going through and the concept of existential risks, which is actually fairly intuitive. A quick and somewhat simplistic recapitulation of human history, from the distant Paleolithic onward, precedes a speculative discussion of the almost infinite possible futures of humanity. The first chapter closes with the argument that never before in history has so much weight rested in the hands of humanity, whose decisions over the next one or two hundred years could foreclose the cosmic destiny of human beings and everything about them that is intrinsically different from other earthly beings.

Some observations on the first chapter will be offered later, but the first great novelty of the book appears in the second chapter, which presents us with the concept of existential risks. The concept is interesting, simple and even intuitive, yet it is still an important and thought-provoking tool. Ord makes liberal use of metaphors and fictitious examples, so it seems fitting that a review of this book should use the same devices.

Every time we take individual risks to our safety, well-being and survival, we are not only risking the continuity of our own lives but potentially wasting all the achievements, experiences and joys of our own individual existences. We are also risking the enjoyment of experiences and accomplishments of those who live with us now (and those who will come to know us in the future). And we could prevent the experiences and achievements of our many thousands of descendants. Premature extinction would not only waste the rest of our human potential and our contribution to the human collective but also the human potential of all our descendants, who would not even come to exist.

From the melodramatic and grandiose individual case, whose future potential is not very convincing, we can in fact apply the same reasoning to past populations: had they become extinct, then a good part of current humanity, including us, would not exist. Moreover, if we think of ourselves as belonging to the human community, if we become extinct we will be condemning billions of future generations and trillions of future people to non-existence. From the point of view of humankind, this would be a disaster of a magnitude greater than any other. An occurrence that exterminates 100% of humanity is infinitely worse than one that eliminates 99% of it (and not just 1% worse!).

Returning to the discussion of the first chapter, Ord insistently explains how special humanity is. Given that there is no other species on the planet capable of similar cognitive feats, and that from these cognitive feats ethics and attitudes based on ethical reflection emerge, which enable fruitful experiences and the enjoyment of the good and the beautiful, the possible extinction of humanity would be a loss of value on a cosmic scale. On Earth, at least, there is no other species capable of enhancing the enjoyment of the right, the good and the beautiful, both for itself and for other species. Therefore, from the point of view of history on a cosmic scale, the extinction of humanity would be more painful than the, still regrettable, loss of other species, such as the sunflower or the panda.

Ord strives to prove the point, reinforced in the appendices, that humanity matters (and does not matter only to us because we are human) and that, therefore, the future of humanity is of great ethical importance to us. There is also an effort, resumed in the appendices, to demonstrate the value of future human lives: although they do not yet exist, they are human lives, and added together across time they make each living generation committed to the almost countless people of future generations and indebted to the billions of people of previous generations.

Since the extinction of humanity is infinitely worse than the sum of any other tragedies, special attention must be paid to prevent this from happening. The absence of living humans is not the only way to inflict irreparable harm on all humanity. There are macabre possibilities of humankind being permanently stuck at some stage of development that makes it impossible for it to improve (as if, for example, we never stopped being hunter-gatherers). There are even ways for the human being to become something less valuable, like the “batteries of machines”, as in the movie The Matrix (the author does not suggest this exact example, but he delivers the message in a conveniently concise way).

Let us call the extinction of humankind (or its permanent limitation or loss of value) an existential catastrophe. Once this occurs, it is game over. Either there is no more humanity or all human potential will have been lost forever, even if individuals remotely resembling humans remain. And as already established, this would be an irreparable loss on a cosmic scale.

The risk of an existential catastrophe happening is an existential risk. These are risks of absolute extinction of humanity or permanent limitation of human potential. This is where Ord raises the tone of gravity and seriousness in his work: it is not a question of discussing bad and lamentable occurrences to the human future, such as wars, genocides, famine, torture and slavery. These things all range between the lamentable and the terrible, but they are small in the face of an existential catastrophe: for all of these there are ways to recover or to overcome their harmful effects in the long run, but not so for an existential catastrophe.

There are enormous known and unknown uncertainties when dealing with the probabilities of occurrence of some existential catastrophic event, and Ord is wise to have no pretence of precision in his evaluations. However, the author himself states the inescapable need to try to measure those risks, quantitatively and conservatively, so that the order of magnitude for the risks is, at least, known.

At the end of the first chapter comes the argument that, although mankind has obviously been exposed to existential risks all along, remaining defenceless and without control over its destiny for most of its existence, it has never before had so many means to respond to them as it does today. Not only are we now more capable of understanding and even acting to deal with existential risks, but for the first time we are ourselves the source of unprecedented existential risks. Therefore, the whole future of humanity, according to the author, depends on carrying it forward safely through the next one or two hundred years: and this will only be possible if we learn how to deal with existential risks, minimizing them in a conscious and systematic way.

Since we come from a past in which we could do little or nothing to ensure humanity’s existence persists, all of the human future and potential ahead of us is now under our responsibility, and we are going through a short period of “it’s all or nothing”, in which existential catastrophe may be preventable by our actions, but equally possible to occur because of them. Hence, we live our times on the brink of a Precipice.

Part 2: risks (the adventure)

In the third to the fifth chapters, Ord presents us with the most interesting part of his work, which is the enumeration and discussion of the main existential risks that threaten the destiny of humanity. The third chapter is dedicated to existential risks of natural origin, such as impacts from celestial bodies, rupture of space-time during cosmic expansion, supervolcanoes, and supernova explosions. The fourth chapter is dedicated to existential risks of anthropogenic origin that already threaten us, while the fifth chapter deals with future existential risks, most of them of anthropogenic origin but which have not yet manifested themselves in an insidious way (although they are expected to in the near future).

Despite the title of the work and the alarming tone at the beginning of the book, Ord’s approach is surprisingly sober and well considered when dealing with the vast majority of the existential risks he articulates. Readers will not have nightmares about asteroids colliding with the Earth or global warming extinguishing humanity, but they may come to suspect that their smartphone’s spell checker could be their worst enemy.

An interesting argument regarding the quantitative description of existential risks of natural origin is that we can investigate the past in search of the typical frequency of potentially catastrophic events. Thus, it is possible to estimate how often celestial objects collide with our planet and the likelihood of collisions with catastrophic potential. We can refine the initial estimates not only by considering the typical frequency with which these events occurred in the past, but also by analyzing the current situation: there are fewer than half a dozen known asteroids large enough to cause damage similar to that which occurred at the end of the Cretaceous, none of which will cross our orbit in the foreseeable future.

Therefore, existential risks of natural origin can be estimated with more certainty and the prediction of their occurrence is more “well-behaved”. Even so, it is necessary to consider that there are “known unknowns” (we know that we do not know the position, size and orbit of every object in the distant Oort Cloud, where comets come from) as well as “unknown unknowns”: although there does not seem to be a massive object on the edges of the Solar System capable of deflecting comets inward, perhaps something of the kind exists and we simply do not know it. Therefore, the estimates listed by Ord constitute an important initial approximation, not the last word on the subject.

Asteroids and Comets: although the notion that a great impact with a massive celestial body could extinguish humanity is popular, the truth is that occurrences of this kind are quite rare. Considering the average frequency of impacts with catastrophic potential, the associated risk is 1/1M per century (M = million). Such an impact would be a huge tragedy for humankind, whose consequences would take centuries to recover from, but we would be unlikely to go extinct and would most likely recover.

Even larger objects, perhaps capable of eliminating all complex life on the planet, are much rarer: of the millions mapped there are only four. None of these four known objects has an orbit that intersects Earth’s within the time interval over which the dynamical equations tracing Solar System orbits converge to a single solution. Considering this, the probability of a catastrophic impact with one of these four, or with a fifth unknown object, is about 1/150M per century.
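To make the frequency-based reasoning above concrete, here is a minimal sketch of how a typical recurrence interval translates into a per-century probability, assuming a simple Poisson process; the recurrence interval used is illustrative, not one of Ord’s exact inputs.

```python
import math

def per_century_probability(mean_recurrence_years: float) -> float:
    """Probability of at least one event in a 100-year window,
    assuming events arrive as a Poisson process with the given
    mean recurrence interval (an illustrative simplification)."""
    expected_events_per_century = 100.0 / mean_recurrence_years
    return 1.0 - math.exp(-expected_events_per_century)

# Illustrative input: an impact on the scale of the end-Cretaceous event
# roughly once every 100 million years.
print(per_century_probability(100_000_000))  # ~1e-6, i.e. roughly 1/1M per century
```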

Supervolcanoes: massive volcanic explosions can trigger volcanic winters and even glacial eras, which would undeniably make life difficult for humans. However, with the rudimentary technology of the Paleolithic, humanity survived the powerful explosion of the Toba volcano, so it is reasonably safe to conclude that eruptions of this scale are incapable of extinguishing humanity or permanently compromising its future. Mankind should therefore only fear even greater eruptions.

When fine particulate material is transported to the stratosphere it remains in suspension for months. If there is a lot of particulate material suspended in the high atmosphere, deposition can take years, forming a “veil” of particles around the globe that obscures the light coming from the Sun. This layer of particulate material tends to significantly cool the Earth’s surface, causing years without summer in which it can snow all year round, and also makes photosynthesis more difficult. This occurrence, called a volcanic winter, presents risks to human food security, making agriculture difficult or even impossible at high latitudes.

Eruptions powerful enough to trigger mass extinctions are very rare, and it is debatable whether any of them could extinguish humanity. The most dangerous eruptions, those of the Siberian Traps type, like Yellowstone, last thousands of years; they might not do so immediately, but in the long run they could cause such damage to the biosphere that humanity is cornered and ultimately unable to survive, taking millennia to decay to a technological stage so rudimentary, and a population so low, that other environmental risks could eventually destroy it. Considering the typical frequency of Siberian Traps type eruptions and the number of known candidates for such an event in the coming millennia, there is a risk of 1/10k per century of this occurring (k = thousand).

Supernovas: massive stars, whose mass is at least five times that of our Sun, end their lives violently: after a long period of expansion of their outer layers, the core exhausts its nuclear fuel and can no longer generate the pressure needed to support itself against gravity. Part of the outer layers is gravitationally pulled onto this dense core, and so much mass collapses onto it that the resulting violent nuclear rebound blows most of the core and the star’s mass apart in a huge explosion that, for a moment, shines more brightly than an entire galaxy.

Although there are no stars in our neighborhood capable of turning into supernovas and interfering with the Solar System, making supernovas safe astronomical spectacles for us, some types of supernova are associated with the emission of jets of intense, very short wavelength radiation (X-rays, gamma rays), capable of crossing galactic distances. If our planet were in the path of one of these jets, we would notice the danger only a short time before it arrived: the planet’s atmosphere would be ionized, perhaps stripped away, and its surface potentially “toasted” by the incident radiation beam. In the most serious case, humanity and probably all life on the planet would be killed.

Considering our greater stellar neighborhood and the apparent rotational orientation of the most massive of our distant neighbors, the chances of the Earth being hit by such a radiation beam are at most 1/1M per century.

More exotic risks are several orders of magnitude less likely, and the total existential risk of natural origin is about 1/10k per century. Given human history on the planet, it is unlikely that any natural event will eliminate us in the next million years, not only because of the rarity of events with this potential but also because of the enormous human capacity to adapt to the problems arising from them. The unprecedented existential risks of anthropogenic origin are therefore more insidious.

Nuclear Weapons: the world has enough nuclear weapons to extinguish life on Earth several times over. Actually, not really. It would undoubtedly be a terrible catastrophe if a nuclear war resulted in the almost simultaneous firing of thousands of the nuclear warheads that exist, with an absurd direct cost in human lives. The survivors would most likely have to deal with a nuclear winter (analogous to the volcanic winter) that, in the worst case scenario, would persist for years, causing enormous agricultural losses and starvation.

The occurrence of a nuclear winter on a planetary scale would cause enormous difficulty or impossibility of farming at high latitudes. This would lead to widespread famines, massive migrations to equatorial regions and lower latitudes, and the possible adoption of nutrition based on fungus production in natural or artificial underground shelters.

Still, it is extremely unlikely that humanity will become extinct in this way. It is equally unlikely that survivors will reach an irrecoverably primitive state. No matter how dire the consequences of large-scale nuclear war may be, it is most likely that, after a profound trauma, humankind will pull itself together in a few decades, at most in a few centuries. At worst, in a few millennia.

The risk of human extinction through nuclear weapons is nonetheless much higher than the sum of all the existential risks of natural origin. The term “extremely unlikely” above conceals an estimate of 1/1k per century.

Climate change: technological advances over the past two centuries have enabled humanity to reach an unprecedented level of prosperity and quality of life, by becoming dependent on large-scale energy use. Most of this energy is derived from the burning of fossil fuels, significantly increasing the tropospheric concentration of greenhouse gases and gradually increasing the equilibrium planetary temperature, causing global warming.

The risks of temporary impoverishment, intensified disputes between nations, episodes of famine and the like make global warming a serious issue to deal with in the coming decades, but it is quite unlikely that its consequences will be irreversible and lead to the extinction of humanity or permanent damage to its future potential. Even considering a very unlikely chain of intense warming feedbacks that replaced the current atmosphere with one based on carbon dioxide and water vapour, as on Venus, the chances of global warming definitively compromising the future of humanity are at most 1/1k per century.

Environmental Destruction: global warming is not the only way, and may not even be the most important one, in which humankind degrades the conditions of the biosphere it depends upon. Although human beings are capable of maintaining a minimum biome on which they depend for raw materials and food, it is possible that the sum of our degrading interventions in the environment will cause a deep and sudden collapse of food chains and a disconnection of trophic levels with irreparable consequences for humanity. Even so, the existential risk of environmental destruction to the human future is estimated at no greater than 1/1k per century.

Pandemic (natural): Ord argues that the existential risk of pandemics is not exactly natural, but depends on a series of human innovations and human environmental interventions, which greatly increase their potential consequences. Still, there is the remote possibility that some natural infectious pathogen is so harmful to humans that it destroys the population or permanently keeps it at such low numbers and in small groups that it compromises the future of the human race. This risk is estimated to be 1/10k per century.

Pandemic (humanly caused): there is a much higher risk that unscrupulous human agents not only engineer a pathogen more aggressive, transmissible and lethal than any natural one, but also release it, whether deliberately or by accident. Biological weapons have been used recurrently in warfare since at least the Middle Ages, and biological terrorism has a macabre history in the 20th century.

Although no biological weapon has historically been capable of causing a pandemic, laboratory security has many flaws: lethal pathogens have escaped countless times from laboratories that studied them with good intentions, and Ord presents interesting and frightening reports of some of these occurrences. Considering the accessibility of protocols for viral gain-of-function research, the potential for these laboratory pathogens to escape, and their possible use in biological warfare or terrorism, the existential risk associated with pandemics (anthropogenic) is 1/30 per century, making it a relevant existential risk to deal with, now and in the future.

Artificial Intelligence (non-aligned): the advent of information technology has revolutionized our way of life and provided a very profound increase in the complexity of relationships between people and institutions, allowing almost instantaneous communication and information processing on an unprecedented scale. Increasingly complex problems are addressed by increasingly elaborate algorithms.

Although I find that Ord tends to mystify what artificial intelligence actually is, he presents very well the notion of complex algorithms able to detect patterns and learn how to solve complex problems in a way that is (if only vaguely) analogous to the biological cognitive apparatus. The author briefly recapitulates the impressive advances in the field in order to persuade the reader that, although a future of T-800 android assassins is unreasonable, artificial intelligence constitutes a relevant threat to the human future.

Time and time again he uses dramatic expressions, such as the potential of artificial intelligence to steal for itself the manifest destiny of mankind. Not just any artificial intelligence, which in itself is a tool mostly beneficial to humanity, but an artificial intelligence designed without the precaution of ensuring that the performance of its tasks is consistent with the well-being of humanity and the enjoyment of its almost infinite future possibilities. Worse, an artificial intelligence could be maliciously designed to limit the possibilities of humanity, being deliberately hostile and extinguishing humankind, or permanently locking us into a state with less enjoyment of life and freedom.

This is a relevant existential risk that is poorly understood, and will accompany humanity century after century in its cosmic journey. Either as its greatest ally, or as its worst adversary. Even in the present technological human stage of extreme dependence on digital systems, an autonomous and non-aligned artificial intelligence could become an inescapably tyrannical entity, taking the reins of humanity’s destiny.

In probably the most passionate analytical part of the work, Ord estimates the existential risk associated with artificial intelligence that is not aligned with the well-being of humanity at 1/10 per century.

Dystopias: even if some environmental element, pathogen, or artificial intelligence does not destroy human destiny, people themselves can create complex networks of social bonds that inescapably limit the future possibilities of humanity, locking it in some less valuable form of existence. These are the dystopias.

Human history is full of dictatorial regimes, genocides, bloodthirsty theocracies, and disastrous totalitarianism. For now, no dystopia has proved to be inescapable or capable of foreclosing the entire destiny of humanity, but it is not impossible for information technology, biotechnology, the mass media and future technologies to create an oppressive and limiting social structure that would ensnare our entire species and be inescapable (perhaps by irremediably altering the human genetic code?).

There are several possible types of dystopia, all very speculative. They fall into three major families of dystopian scenarios: unwanted but inescapable dystopias, in which diverse factors coalesce (without the agents intending it) into a dystopia; imposed dystopias, in which agents promoting a dangerous and limiting ideology rise to power and are capable of creating an irreversible dystopian scenario; and desired dystopias, in which humanity reaches some technological level at which an evolutionary trap leads the overwhelming majority of humans to voluntarily surrender to a scenario that perpetually limits human destiny.

Dystopias and the other anthropogenic existential risks together are estimated by Ord at the order of 1/50 per century. In total, the combined anthropogenic existential risks constitute the greatest threat to the future of humanity, corresponding to an estimate of 1/6 per century, according to the author.

This value, about 1/6 per century, conveys the seriousness of the existential risks faced by humanity and qualifies the Precipice as a central, decisive and pivotal period of human history. Since it is several orders of magnitude higher than the remote risks of extinction by natural causes, the author argues that a deep human reflection is needed to deal responsibly with this threat. However, the same estimate can be seen in a positive light: humanity has a 5-in-6 chance of proceeding to an almost unimaginable scenario in which it enjoys a great potential future.

Part 3: Future paths (the great speculation)

Ord concludes his work with important and thought-provoking reflections on the calculation of interactions between risks, introducing the concepts of existential risk factors and existential security in chapter six. Chapter seven is mostly devoted to a synoptic discussion on what can now be done at various levels to safeguard humanity from these existential risks, and what long-term measures humanity will have to take to ensure the enjoyment of its destiny. Finally, in chapter eight, the author passionately defends his vision for the future potential of mankind aeons ahead, beyond centuries and millennia and beyond the stellar neighborhood of our Sun, at a time when the Earth itself becomes only vaguely remembered, with the suspicion that it was once the cradle of humanity.

Although the second part of the book is the most instructive, the sixth chapter brings with it important and thought-provoking ideas, as it explores the landscape of existential risks faced by humanity. Previously, the main existential risks were presented and a quantitative estimate was associated with each one. However, several of these risks are interdependent, and the interactions between them are complex and interesting.

In several situations, the estimated values of the risks change according to the specific scenario. While the risk of human extinction from a supervolcano eruption is quite low on its own, it becomes significantly more relevant after a nuclear catastrophe or a pandemic that has decimated 90% of the population. Considering combinations of risks is a complex task: it is not enough simply to add or multiply the estimates set out in the previous chapters; one must analyze how far the risks are independent of, or dependent on, one another.

Thus, any occurrence, scenario or other risk that promotes some existential risk is called a risk factor. By analogy with everyday life: the baseline risk of developing lung cancer is quite low in terms of population frequency, but smoking promotes that risk. It is a risk factor and, in practical and mathematical terms, it acts as a multiplier greater than 1, raising an individual’s cancer risk above that of the general population.

Analogously, occurrences and scenarios that make a risk less likely to occur (that is, whose multiplier is smaller than 1) are called safety factors. These concepts are simple to understand, difficult to estimate, but possible to investigate in a clear, objective and quantitative manner. Prudence then indicates that safeguarding humanity consists of minimizing risk factors and maximizing safety factors.
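The book does not give an explicit formula, but the multiplier reading of risk and safety factors described above can be sketched in a few lines; the baseline value and factor sizes below are hypothetical, for illustration only.

```python
def adjusted_risk(baseline: float, factors: list[float]) -> float:
    """Apply multiplicative risk factors (> 1) and safety factors (< 1)
    to a baseline probability, capping the result at 1.
    A toy formalization of the chapter's multiplier idea,
    not a formula taken from the book."""
    risk = baseline
    for factor in factors:
        risk *= factor
    return min(risk, 1.0)

# Hypothetical numbers: a baseline per-century risk of 1/10,000,
# worsened by a risk factor of 5 (e.g. a period of great-power conflict)
# and partly offset by a safety factor of 0.5 (e.g. better global coordination).
print(adjusted_risk(1 / 10_000, [5.0]))        # 0.0005
print(adjusted_risk(1 / 10_000, [5.0, 0.5]))   # 0.00025
```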

The seventh chapter deals with how humanity can work towards its long-term survival by mitigating existential risks. The basic formula has already been presented, which is minimizing risk factors and maximizing safety factors, but things are much more subtle. What are the main risk and safety factors and how do you choose which risks and factors to invest limited resources in while maximizing results?

The risks are very distinct from each other: there are risks with a semi-permanent “state”, such as vulnerability to cosmic collisions or supervolcano eruptions, and emerging critical risks such as artificial intelligence and global warming. Tiny but persistent risks accumulate over the centuries, while certain risks, even larger ones, are overcome once and for all (or lose relevance exponentially). For example, good global coordination in a peaceful world permanently mitigates (or almost so) the enormous risk posed by nuclear warheads, while the risk from cosmic collisions and supernovae accumulates over billions of years.
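A short numerical illustration of how tiny but persistent risks add up: assuming, purely for illustration, a constant per-century risk with each century independent, survival over many centuries falls off geometrically.

```python
def survival_probability(per_century_risk: float, centuries: int) -> float:
    """Chance of avoiding the catastrophe over a given number of centuries,
    assuming the per-century risk stays constant and each century is
    independent (an illustrative simplification, not Ord's model)."""
    return (1.0 - per_century_risk) ** centuries

# Even a risk of just 1/10,000 per century, if it persists for a million
# years (10,000 centuries), leaves only about a 37% chance of survival.
print(survival_probability(1 / 10_000, 10_000))
```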

The risks also respond differently to the efforts invested in them. A priori, all else being equal, one should first invest in those risks where the investment of resources yields the greatest mitigation. When investing in that area offers only marginal additional mitigation, one should move on to another, more neglected risk. This situation is quite dynamic.

Many risks are naturally solved well by attracting freely available resources, in a market logic. Other risks are more systematically neglected, perhaps because addressing them does not return personal benefits within a human lifetime. For risks like these, it is necessary to create artificial rewards for those who deal with them.

Some safety factors cover several risks, while some risk factors promote others. Effective global coordination promotes peace, which discourages the use of biological or nuclear weapons and facilitates the resolution of problems on a global scale, such as climate change. It is very important when dealing with the grand future of humankind to ensure that the measures implemented are neither irreversible (which constitutes a risk!) nor unilateral.

Perhaps more important than the list of things to do is the list of things not to do, such as not being fanatical about existential risks: pestering others about them or, worse, disdaining the everyday concerns of people who do not think day and night about humanity’s long-term future. This does not help to minimize existential risks; in fact, it hinders people’s adherence to important causes. Among other warnings, Ord explicitly recommends not being tribal or politically motivated: neither right nor left, neither party nor ideology alone holds the key to the long-term success of humanity, which is, above all, rich, diverse, and full of possibilities.

Ord speculates that once existential risks have been mitigated as much as possible, humanity will enter a long phase of reflection about which future, among the myriad of possibilities, is best and most fulfils humanity’s potential. During and after this great, perhaps endless, reflection, humanity will enjoy its cosmic destiny (a vision that critics might point out owes something to the idea of manifest destiny).

The last chapter speculates passionately about the infinite future possibilities, over hundreds of millions and several billion years, for the heirs of humanity. Then “we” will be able to sow the stars of our galaxy (and even of other galaxies) with endless life and diversity, and our lineage will persist until the thermodynamic death of the universe, extracting energy from evaporating black holes and dying dwarf stars. Then perhaps there will be no limit to the degree of moral correctness, well-being, prosperity, health and personal satisfaction, as well as knowledge, attainable by “humanity” in the aeons ahead.