Toby Ord: Fireside Chat and Q&A

If all goes well, human history is just beginning. Humanity could survive for billions of years, reaching heights of flourishing unimaginable today. But this vast future is at risk. For we have gained the power to destroy ourselves, and our entire potential, forever, without the wisdom to ensure we don’t.

Toby Ord explains what this entails, with emphasis on the perspective of humanity — a major theme of his new book, The Precipice.

Toby is a philosopher at Oxford University’s Future of Humanity Institute. His work focuses on the big-picture questions facing humanity: What are the most important issues of our time? How can we best address them?

Toby’s earlier work explored the ethics of global health and global poverty. This led him to create an international society called Giving What We Can, whose members have pledged over $1.4 billion to highly effective charities. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.

Below is a transcript of a Q&A with Toby, which we’ve lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

Interviewer: Hi, everyone. We’re here at the Future of Humanity Institute with Toby Ord, whose new book, The Precipice, is coming out in America on the 24th of March [2020]. We’re going to run through a few questions with Toby over the next 30 minutes or so, and we hope you enjoy it.

Toby, your book is written for a broad audience. What would you like to tell EAs [members of the effective altruism movement] in particular about it?

Toby: It was a challenge writing it and trying to work out which audience to pitch it to and how to get that to work. In fact, I thought for a while about whether to write an academic book or a trade book. I ended up doing the most serious book you’re allowed to do within the trade book [genre]. In fact, I don’t know how they let me get away with it, but it’s 49% endnotes and appendices.

That’s actually kind of my trick for trying to write two books in one. The main text takes you through a more streamlined version of all of the arguments. But if you want to know more and get to the cutting edge of these issues, you’ll get there by delving into the appendices and endnotes.

One thing I’d recommend: If you find it immensely frustrating to flip back and forth between the endnotes and the text, just put an extra bookmark in the endnotes. Then it’s easy to check what’s going on. And I made sure to include a lot of additional, useful, fascinating material in the endnotes — they don’t just record where citations came from.

I think that I’ve managed to [create] the kind of book that you can give to any interested person. If there are EAs who have always wanted to explain to friends and family why they care about existential risk, this is a book that could be given to them and should work wonders. For similar reasons, I’m hoping to give it to various people in government. But it also should take things to the cutting edge in terms of research on existential risk.

In addition, I think that a lot of the framing and conversation around existential risk within effective altruism hasn’t been as convincing as it could be. In part, it often ends up making arguments about very small probabilities of very large benefits, which some people find convincing and others find utterly unconvincing. I’ve tried hard to show how robust the case for existential risk is — to show that it can be based on considerations about the long-term future that’s at stake, but also on considerations about our past. Some of these other considerations can appeal to a much wider range of people.

Also, I think we’ve often presented this information in a contrarian way: you might normally think that global poverty is a really important issue, but [we’ll point out] another one that could be even more important. I think that there’s a more natural, obvious way of making the case, and I’ve tried to do that; I’ve tried to provide a lot of ideas for how EAs can talk about longtermism and existential risk in ways that I hope will be significantly more convincing to the wider world.

Interviewer: If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

Toby: It’s hard to choose, but there’s an idea in the book involving what we call “existential risk factors.” When we normally think about existential risk in our community, we often break it into vertical silos. We think of asteroids and comets, and then supervolcanoes, pandemics, AI, and various other risks. And we try to divide up all of the risks into [the appropriate] categories based on their principal causes.

But there’s another way to do this. I was actually inspired a bit by my earlier work in global health. The people studying the global burden of disease had this really clever way of doing things. At one level, they divide all ill health in the world into vertical silos of the different causes of the ill health: cancer, heart attacks, and so on. But they also wanted to be able to ask questions such as “What would happen to the total amount of ill health in the world if you got rid of smoking tomorrow?” or “How many disability-adjusted life years are due to alcohol use?” These are cross-cutting questions, and they showed that you can answer them quite well.

You can do the same with existential risk. [Consider the example of] great-power war, which I think is a key risk factor. You could ask, “How much existential risk is there in the world?”, and then, “How much would that go down if we eliminated the chance of great-power war this century, compared to the status quo?” I think that this would actually make a large difference; it might remove as much as a tenth of all of the existential risk over the century. And because I think there is something like a one-in-six chance of existential catastrophe this century, the total risk would go down by more than a whole percentage point if great-power war were no longer an issue.
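[A quick check of that arithmetic, using Toby’s two estimates — a roughly one-in-six total risk this century, with great-power war accounting for up to a tenth of it:]

$$\frac{1}{6} \times \frac{1}{10} = \frac{1}{60} \approx 1.7 \ \text{percentage points}$$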

Then, you can see that great-power war could, in some sense, be contributing as much existential risk as all natural risks combined, or more — and perhaps more than the current anthropogenic risks combined. It’s a helpful lens, because we often think of people as either working on existential risk or not. And this helps us to see that there are wider causes in society that are changing existential risk by macroscopic amounts. It’s also mathematically very nice, because these percentages can be directly compared between risks and risk factors.

Interviewer: Excellent. So on balance, what do you think is the probability that we’re close to a “hinge of history” — either right now, this decade, or this century?

Toby: So a central idea in the book is “the precipice,” which is the name for our time. And my claim in the book — drawing on the work of others, particularly Carl Sagan for some of the framing — is that over the long run of human history, we’ve always been subject to natural risks. That gives us a low baseline or background level of risk. But through the development of advanced technology — starting with nuclear weapons — human progress has made us so powerful that we’re now imposing risks upon ourselves. These are larger than the background level of natural risk.

I think it’s an escalating process as technology improves, and it can’t go on forever. If the risks keep increasing, then that could be the end for us within a small number of centuries. But if we bring the risks back down again, then we could continue to survive and flourish. I hope and think that we will.

If this story is even roughly right — I’m not 100% certain that it is — then this would be the hinge of history. I hope it isn’t. But the evidence does seem quite compelling to me. So I would say there’s a high chance that this is a hinge of history — something like a three-quarters chance.

Interviewer: Okay. And do you think there are any actions that would obviously decrease existential risk?

Toby: Yeah. There are a lot of different actions we can take; they’re sprinkled throughout the book, and I provide some examples. I think many people writing books like this on a new, big problem tend to leap to solutions, but it’s not always the case that those who can identify and explain the problems have the best solutions. Often, the field is at too early a stage for that. So I’ve tried to keep my advice focused on the safest things. I list them all in an appendix, for those who want to see several bullet points about things that we can do.

One good example that’s pretty safe is renewing the New START arms-reduction treaty between the US and Russia. It’s due to expire next year [in February 2021]. The signs at the moment suggest that it’s not planned to be renewed, which I think is crazy. If you’ve seen [data] on how nuclear stockpiles have come down over time, that’s partly due to New START. Renewing it would robustly decrease existential risk.

There are plenty of other solutions. In the case of pandemics, I think that developing technology for disease surveillance — so that when a patient has mysterious symptoms, we can work out which pathogens are in them, sequence them all, and tell whether any are novel — would be extremely helpful against biological existential risk.

But we shouldn’t always demand solutions for which there are no skeptical arguments. You occasionally hear people say things like, “But if we made better shelters to avoid nuclear winter, it would lower the cost of going to war.” I think it’s relatively unusual for safety-oriented solutions to have net negative effects because people engage in more risk-seeking behavior. Normally, that only removes some part of the safety you’ve created — unless the appearance of safety is completely out of alignment with the actual safety it grants.

Otherwise, they probably do improve safety. And then the question shouldn’t focus on the robustness of the solutions; it shouldn’t be “Is there any skeptical argument that this solution could actually make things worse?” It should focus on the balance of probabilities: “Will it make things better on one of the most important problems?” That’s the key question.

Interviewer: Do you think that climate change has been neglected in the EA movement, and what are some options that seem as if they could have a very large impact and steer us in a better direction regarding climate change?

Toby: Yes, I think that it has been neglected within EA. It is the existential risk that receives the most attention from the wider world. And I do call it an existential risk [in the book].

It could be the case that climate change doesn’t really pose an existential catastrophe; it could pose a human catastrophe of unprecedented scale, but not an existential one. That’s possible, but it’s quite hard to rule out that it could lead to the permanent failure of humanity. And that subjective chance that it could be the end is what I think makes something an existential risk. So I do consider it an existential risk.

I think that EA has had a rather strange reaction to climate change that’s not quite right. I was talking to Will MacAskill about this recently, and he strongly [agrees]. Ultimately, the reaction has been along the lines of “I guess that could be important, but we’ve run the numbers on these other things, which seem to be more important.” Whereas actually, people [focused on climate change] are really taking the long term seriously.

They’ve noticed that there’s a chance of irrevocable losses — perhaps at a merely catastrophic scale, and perhaps at an existential scale, where things could go wrong for all time. They’re some of the only people in the world who’ve really [heard] a lot of this message about existential risk, and are really feeling it. It could be that there are other risks which are even higher, or will be in the near future, and that are even more important to focus on. But even in that case, the people [speaking out about climate change] have got things mostly right, and they’re much closer to understanding some of this bigger picture that we’re talking about. Therefore, I think that the community has been strangely neglectful of that interest when it appears in others.

Climate change is probably the most popular cause for doing good in the world at the moment. Given that, we should focus on the aspects of it that are neglected and particularly relevant to existential risk. Specifically, we can work on quantifying the uncertainties about the amount of warming we’ll experience — particularly climate sensitivity and feedback effects. How likely is extreme warming? And what would the consequences be?

You sometimes see reports where people have looked at how bad it would be if we had six degrees of warming. Unfortunately, we may have substantially more than six degrees of warming, and I’ve never seen reports on that. So trying to understand these types of tail-risk possibilities, I think, is the most important way that we could help.

Interviewer: So what’s one book that you think most EAs have not yet read, but you think that they should (other than The Precipice, of course)?

Toby: I think that fewer than half of EAs have read Doing Good Better by Will MacAskill, which I think is a mistake. They should read it. It’s probably the most widely read book among EAs, but I think there are many who assume, “Oh, I’m an EA — I know all of this stuff already.” But when I was reading it, I certainly found material that was new to me and very good. I think it’s a very well done book.

And for people who have already read that and don’t like that answer, I would say Algorithms to Live By, by Brian Christian and Tom Griffiths. It’s a fascinating book. It sounds like a gimmick — looking at questions in computer science and the algorithms used to address them, and then trying to apply those algorithms to everyday life. But it takes something that could have been a gimmick in other hands and turns it into something that is very important to how we think about the world and our own lives — and in some cases, profound.

Interviewer: Excellent. So moving on, there are many ways that technological development and economic growth could potentially affect the long-term future. What do you think is the overall sign of economic growth?

Toby: The way I put the question of technology in the book — and again, this is a point Carl Sagan made before me — is that humanity has seen massive amounts of technological growth. It has been explosive not only since the Industrial Revolution, but also before that. We moved from being a typical primate to exerting massive control and power over the world, even before the Industrial Revolution. And the problem we’re in at the moment is that the highest risks are the ones posed by our own technology.

But the issue is not so much a surplus of technology as a deficiency of wisdom. While technological progress has grown exponentially, our wisdom has grown only falteringly, if at all. What we need is for wisdom to catch up.

If the whole world were able to make these decisions together in a unified manner, and we still weren’t wise enough to simply improve our wisdom directly, then maybe the answer would be to go a bit slower on technology. But that’s something almost no one wants to do, and it’s very hard to make happen. And I think that if the few people who care about existential risk or the long-term future were to spend their efforts on that, it would be a real waste of their talents.

I think another aspect of the issue is that humanity probably can’t fulfill our potential without improved technology. For one thing, the natural risks would eventually get us. Without technology, we’d probably be limited to [a future] about as long as the past we’ve already had — another few hundred thousand years. We can survive for millions or hundreds of millions of years, or even billions, but only through technology. So it’s a complicated question: technology is both the cause of the current problems and the solution for the future.

One could then try to ask some questions at the margin — for example, “If technology or economic growth were to increase or decrease, would that be good or bad?” That’s the version of the question that has a sign. I don’t really know the answer to it, but I think in some ways it’s the wrong question — the key point is that it’s second-order. People who are focused on making sure that we have a long-term future should do things that are much more directly related to that, rather than trying to enhance or retard growth. So I think the best way to see it is that whatever the sign is, the effect size is quite small compared to that of work directly on existential risk.

Interviewer: What are your views on the prioritization of extinction risk versus other longtermist interventions or causes?

Toby: I think that extinction risk, or existential risk more broadly, is the most important area, but I’m not sure of that. I think Nick Beckstead first raised this question in his PhD thesis, which is still a great work. I’d encourage people to look that up.

A lot of the arguments that existential risk could be hugely important because it affects the entire long-term future could also apply to various other things that affect the entire long-term future. But we know of fewer of these. For many of them, it could be that if someone just did them a little bit later, they would still affect the long-term future. The real difference would be [small, only noticeable for the few years before the action took place].

So I’m really unsure about these things. But it’s an area where I strongly encourage people to do research and try to find out more about it, because we could be missing some really important things there.

Interviewer: Yeah, and there’s another EA Global: London 2019 talk by Tyler John about possible other longtermist interventions.

Toby: Oh, fantastic.

Interviewer: Excellent. And so suppose that your life’s work ended up having a negative impact. What’s the most likely scenario under which you think this could happen?

Toby: This is something that I have been concerned about at various times, both for myself, and also in thinking through the early days of Giving What We Can and CEA [the Centre for Effective Altruism]. I think it’s easier than people might think for your life’s work to end up having a negative impact if you’re appropriately [evaluating] it and thinking about the counterfactuals.

I think the types of things I’m doing have pretty robust arguments that they’re good, so it’s [less likely to be a case of] “everything I thought was good turned out to be bad.” But the easiest way for [your life’s work to have a negative impact] is to crowd out something better.

I thought about that a lot when writing this book, actually. I wanted to make sure that there wasn’t anyone else who should be writing the book instead of me. I spent some time researching that. I talked to a lot of people about it, because if you do something that’s very good, but crowd out something that was extremely good, then your overall impact could be extremely bad.

Interviewer: If you could convince a dozen of the world’s best philosophers who aren’t already doing EA-aligned research to work on topics of your choice, which questions would you want them to investigate?

Toby: I would want them to be thinking about what the most important moral issues in the world are — or could be — particularly if we haven’t found them all. If we’re missing major aspects of what could be good, then we need to understand that, as well as how we could be doing something terrible. That’s something that I think is a really important issue.

A lot of moral philosophy takes practices that are current today. It looks at something like lying or stealing, or even subtler practices, such as the new ways people engage with one another online, or fake news. Philosophers then try to understand the practice and get to the bottom of what’s wrong and right with it, how it works, and what’s particular about it. But they’re not considering that there may be a lot of practices we’re not even engaging in, which are much more important than the things we actually are doing or talking about.

Another, more central [answer]: I’d get them to look at existential risk. There are very few people working on existential risk in general — almost none. This was very apparent to me while writing the book. And there are a large number of key insights about existential risk that have been uncovered over time — many of which appeared in Nick Bostrom’s 2002 paper introducing the concept — but some came before that, and some have come after.

A lot of these considerations, when you look at them, can be explained in just a sentence or two, followed perhaps by an additional paragraph explaining to people why the issue is so important. They’re very clear. I think that there must be many more of these insights out there. And they’re the type of thing that EAs might be able to find just by thinking seriously about these issues. I think I found a few more while writing the book, and I think that there’s more of them out there. So I would encourage philosophers and also other people to search for these insights.

Interviewer: And now, final question for you, Toby. What do you like to do during your free time?

Toby: It’s been a while since I’ve had a lot of that. But there is one thing that has probably been the biggest use of my free time since I started writing on this topic a few years ago. I got really interested in photography of the Earth from space. There are various famous images like the “Blue Marble” image from Apollo 17 and also the “Earthrise” image, and I had just seen some amazing photography of Jupiter and Saturn. I wondered, “What are the best photos of the Earth?”

I was searching around, and I found that the photos were fairly disappointing. The highlights were blown out. There were just large areas of cloud that looked straight white, and it kind of broke all the rules of good photography. And I thought, “Oh, surely there are better images of the Earth than this.”

I found that ultimately, there were very few. All the best images were taken during the Apollo program, because humans have to be much farther away from the Earth than the Space Station is in order to get a photo of the whole planet. If this is the Earth, the [International] Space Station is here. [Toby demonstrates the relatively close proximity by making a fist to represent the Earth and pointing close to one of his knuckles to represent the ISS.]

The trip to the Moon was actually the ideal time to get photos. And the astronauts took proper cameras. All of the other photographs of the whole Earth were taken by spacecraft with digital cameras. They say, “Oh, we treated the ultraviolet as if it were blue and infrared as if it were red, and then we composed this picture.” It doesn’t look how the Earth actually appears. But the Apollo program used Hasselblad cameras and Kodachrome or Ektachrome film. They were great cameras.

Eventually I found the photographs online in archives, and then searched through more than 15,000 of them — every photograph from the Apollo program — in my evenings. I found that I could take some of them, restore them digitally from the scans, and produce images substantially better than anything publicly available. I found images I’d never seen good versions of — where the original was massively overexposed, say — but you could correct it all in the [digital] darkroom and bring to life what was amazing about these photographs.

That was something that really inspired me. I would sit there in the dark looking at these visions of Earth from space. And actually, on the US front cover [of The Precipice] is one such image, from Apollo 12 — which, I think, produced the best images of any of the missions. It’s an Earthrise that I thought was just breathtaking, and I somehow managed to convince Hachette [Toby’s publisher] to put it on the US cover.

Trying to restore these photos was kind of a spiritual experience. I tried to take these ethereal moments that people had had — these 24 people who’ve been to the Moon — and capture them and bring them back. I’ll be releasing all of those to the public in the future.

Interviewer: Oh, fantastic. So should people go to your website for that?

Toby: I’ll set something up. Once the dust has settled with the book, I think I’ll start up another website and release them all. I think that the folks at NASA will be pretty excited to get nice versions of these images that their astronauts took — they’re such great photographs that it’s a shame they haven’t been done justice.

Interviewer: Thank you very much for making time to do this recording for us, Toby. We’ve got lots of people watching all across the world who I’m sure will really enjoy listening to you answer those questions. I believe, as well, that you’ll be doing a written AMA [“ask me anything”] later this year, so watch for that on the EA Forum.

Otherwise, thank you very much, Toby.

Toby Ord: The Precipice — existential risk and the future of humanity

Below is a transcript of Toby’s related talk on The Precipice. We’ve lightly edited the talk for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

In the grand course of human history, where do we stand? Could we be living at one of the most influential times that will ever be? Our species, Homo sapiens, arose on the savannas of Africa 200,000 years ago. What set us apart from the other animals was both our intelligence and our ability to work together, to build something greater than ourselves. From an ecological perspective, it was not a human that was remarkable, but humanity.

Crucially, we were able to cooperate across time, as well as space. If each generation had to learn everything anew, then even a crude iron shovel would have been forever beyond our reach. But we learned from our ancestors, added innovations of our own, and passed this all down to our children. Instead of dozens of humans in cooperation, we had tens of thousands, cooperating across the generations, preserving and improving ideas through deep time. Little by little, our knowledge and our culture grew.

At several points in humanity’s long history, there has been a great transition — a change in human affairs that accelerated our progress and shaped everything that would follow.

Ten thousand years ago was the Agricultural Revolution. Farming could support 100 times as many people on the same piece of land, making much wider cooperation possible. Instead of a few dozen people working together, we could have millions. This allowed people to specialize in thousands of different trades. There were rapid developments in institutions, culture, and technology. We developed writing, mathematics, engineering, law. We established civilization.

Four hundred years ago was the Scientific Revolution. The scientific method replaced a reliance on perceived authorities with careful observation of the natural world, seeking simple and testable explanations for what we saw. The ability to test and discard bad explanations helped us break free from dogma, and for the first time, allowed the systematic creation of knowledge about the workings of nature. Some of this newfound knowledge could be harnessed to improve the world around us. So the accelerated accumulation of knowledge brought with it an acceleration of technological innovation, giving humanity increasing power over the natural world.

Two hundred years ago was the Industrial Revolution. This was made possible by the discovery of immense reserves of energy in the form of fossil fuels, allowing us access to a portion of the sunlight that shone upon the Earth over millions of years. Productivity and prosperity began to accelerate, and a rapid sequence of innovations ramped up the efficiency, scale, and variety of automation, giving rise to the modern era of sustained growth.

But there has recently been another transition that I believe is more important than any that have come before. With the detonation of the first atomic bomb, a new age of humanity began. At that moment, a rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves — the first point where the threat to humanity from within exceeded the threats from the natural world. A point where the entire future of humanity hangs in the balance. Where every advance our ancestors have made could be squandered, and every advance our descendants may achieve could be denied.

These threats to humanity and how we address them define our time. The advent of nuclear weapons posed a real risk of human extinction in the 20th century.

With the continued acceleration of technology, and without serious efforts to protect humanity, there is strong reason to believe the risk will be higher this century, and increase with each century that technological progress continues. Because these anthropogenic risks outstrip all natural risks combined, they set the clock on how long humanity has left to pull back from the brink. If I’m even roughly right about their scale, then we cannot survive many centuries with risk like this. It is an unsustainable level of risk. Thus, one way or another, this new period is unlikely to last more than a small number of centuries. Either humanity takes control of its destiny, and reduces the risk to a sustainable level, or we destroy ourselves.
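[To make “we cannot survive many centuries with risk like this” concrete, here is a rough compounding calculation, using the approximately one-in-six-per-century figure Toby gives later in this talk and holding it constant:]

$$\left(1 - \tfrac{1}{6}\right)^{5} \approx 0.40, \qquad \left(1 - \tfrac{1}{6}\right)^{20} \approx 0.026$$

[That is, at a constant one-in-six risk per century, the chance of surviving another five centuries is about 40%, and of surviving another twenty is under 3%.]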

Consider human history as a grand journey through the wilderness. There are wrong turns and times of hardship, but also times of sudden progress and heady views. In the middle of the 20th century, we came through a high mountain pass and found that the only route onward was a narrow path along the cliff side, a crumbling ledge on the brink of a precipice. Looking down brings a deep sense of vertigo. If we fall, everything is lost. We do not know just how likely we are to fall. But it is the greatest risk to which we have ever been exposed. This comparatively brief period is a unique challenge in the history of our species.

Our response to it will define our story. Historians of the future will name this time, and school children will study it. But I think we need a name now. I call it “the precipice.” The precipice gives our time immense meaning. In the grand course of history, if we make it that far, this is what our time will be remembered for: for the highest levels of risk and for humanity opening its eyes, coming into its maturity, and guaranteeing its long and flourishing future. This is the meaning of our time.

I’m not glorifying our generation, nor am I vilifying us. The point is that our actions have uniquely high stakes. Whether we are great or terrible will depend upon what we do with this opportunity. I hope we live to tell our children and grandchildren that we did not stand by, but used this chance to play the part that history gave us.

Humanity’s future is ripe with possibility. We’ve achieved a rich understanding of the world we inhabit, and a level of health and prosperity of which our ancestors could only dream. We have begun to explore the other worlds and heavens above us, and to create virtual worlds completely beyond our ancestors’ comprehension. We know of almost no limits to what we might ultimately achieve.

Human extinction would foreclose our future. It would destroy our potential. It would eliminate all possibilities but one: a world bereft of human flourishing. Extinction would bring about this failed world and lock it in forever. There would be no coming back.

But it is not the only way our potential could be destroyed. Consider a world in ruins, where a catastrophe has done such damage to the environment that civilization has completely collapsed and is unable to ever be reestablished. Even if such a catastrophe did not cause our extinction, it would have a similar effect on our future. The vast realm of futures currently open to us would have collapsed to a narrow range of meager options. We would have a failed world with no way back.

Or consider a world in chains, where the entire world has become locked under the rule of an oppressive totalitarian regime, determined to perpetuate itself. If such a regime could be maintained indefinitely, then descent into this totalitarian future would also have much in common with extinction — it would leave just a narrow range of terrible futures, with no way out.

What all of these possibilities have in common is that humanity’s once soaring potential would be permanently destroyed. [It would mean] not just the loss of everything we have, but everything we could have ever achieved. Any such outcome is called an “existential catastrophe” and the risk of it occurring an “existential risk.”

There are different ways of understanding what makes an existential catastrophe so bad. In The Precipice, I explore five different moral foundations for the importance of safeguarding humanity from existential risks:

1. Our concern could be rooted in the present — the immediate toll such a catastrophe would take on everyone alive at the time it struck.
2. It could be rooted in the future, stretching so much further than our own moment — everything that would be lost.
3. It could be rooted in the past, on how we would fail every generation that came before us.
4. We could also make a case based on virtue, on how by risking our entire future, humanity itself displays a staggering deficiency of patience, prudence, and wisdom.
5. We could make a case based on our cosmic significance, on how this might be the only place in the universe where there’s intelligent life, the only chance for the universe to understand itself, on how we are the only beings who can deliberately shape the future toward what is good or just.

Thus, the importance of protecting humanity’s potential draws support from a wide range of ideas and moral traditions. I will say a little more about the future and the past.

The case based on the future is the one that inspires me most. If all goes well, human history is just beginning. Humanity is about 200,000 years old, but the Earth will remain habitable for hundreds of millions more — enough time for millions of future generations. Enough to end disease, poverty, and injustice forever. Enough to create heights of flourishing unimaginable today. And if we could learn to reach out further into the cosmos, we could have more time yet, trillions of years, to explore billions of worlds. Such a lifespan places present-day humanity in its earliest infancy. A vast and extraordinary adulthood awaits. This is the longtermist argument for safeguarding humanity’s potential: Our future could be so much longer and better than our fleeting present.

There are actions that only our generation can take to affect that entire span of time. This could be understood in terms of all of the value in all of the lives in every future generation (or in many other terms), because almost all of humanity’s life lies in the future. Therefore, almost everything of value lies in the future as well: almost all of the flourishing, almost all of the beauty, our greatest achievements, our most just societies, our most profound discoveries. This is our potential — what we could achieve if we pass the precipice and continue striving for a better world.

But this isn’t the only way to make a case for the pivotal importance of existential risk. Consider our relationship to the past. We are not the first generation. Our cultures, institutions and norms, knowledge, technology, and prosperity were gradually built up by our ancestors over the course of 10,000 generations. Humanity’s remarkable success has been entirely reliant on our capacity for intergenerational cooperation. Without it, we would have no houses or farms. We’d have no traditions of dance or song, no writing, no nations.

Indeed, when I think of the unbroken chain of generations leading to our time, and of everything they have built for us, I’m humbled. I’m overwhelmed with gratitude, shocked by the enormity of the inheritance and by the impossibility of returning even the smallest fraction of the favor. Because a hundred billion of the people to whom I owe everything are gone forever. And because what they created is so much larger than my life, than my entire generation.

If we were to drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in many different ways. We would fail to achieve the dreams they hoped for as they worked toward building a just world. We would betray the trust they placed in us, their heirs, to preserve and pass on their legacy. And we would fail in any duty we had to pay forward the work they did for us, to help the next generation as they helped ours.

Moreover, we would lose everything of value from the past that we might have reason to preserve. Extinction would bring with it the ruin of every cathedral and temple, the erasure of every poem in every tongue, the final and permanent destruction of every cultural tradition the earth has known. In the face of serious threats of extinction or of a permanent collapse of civilization, a tradition rooted in preserving or cherishing the richness of humanity would also cry out for action.

We don’t often think of things at this scale. Ethics is most commonly addressed from the individual perspective: What should I do? Occasionally, it is considered from the perspective of a group or nation — or, more recently, from the global perspective of everyone alive today. We can take this a step further, exploring ethics from the perspective of humanity — not just our present generation, but humanity over deep time — reflecting on what we have achieved in the last 10,000 generations and what we may be able to achieve in the eons to come.

This perspective is a major theme of my book. It allows us to see how our own time fits into the greater story and how much is at stake. It changes the way we see the world and our role in it, shifting our attention from things that affect the present moment to those that could make fundamental alterations to the shape of the long-term future. What matters for humanity? What part should our generation play? And what part should each of us play?

The Precipice has three chapters on the risks themselves, delving deeply into the science behind them. There are the natural risks, the current anthropogenic risks, and the emerging risks. One of the most important conclusions is that these risks aren’t equal. The stakes are similar, but some risks are much more likely than others. I show how we can use the fossil record to bound the entire natural risk at about a one-in-10,000 chance per century. I judge the existing anthropogenic risk to be about 30 times larger than that, and the emerging risk to be about 50 times larger still — roughly one in six over the coming century. It’s like Russian roulette.
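[A rough reconstruction of how these estimates compound — the book’s risk tables give the full per-risk breakdown:]

$$\underbrace{\tfrac{1}{10{,}000}}_{\text{natural}} \times 30 \approx \tfrac{1}{300} \ \text{(existing anthropogenic)}, \qquad \tfrac{1}{300} \times 50 \approx \tfrac{1}{6} \ \text{(emerging)}$$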

This makes a huge difference when it comes to our priorities, though it doesn’t quite mean that everyone should work on the most likely risks. We also care about their tractability and neglectedness, the quality of the opportunity at hand, and, for direct work, your personal fit.

What we do with our future is up to us. Our choices will determine whether we live or die, fulfill our potential or squander our chance at greatness. We are not hostages to fortune. While each of our lives may be tossed about by external forces — a sudden illness or outbreak of war — humanity’s future is almost entirely within humanity’s control. In The Precipice, I examine what I call “grand strategy for humanity.” I ask, “What kind of plan would give humanity the greatest chance of achieving our full potential?”

I divide things into three phases. The first great task for humanity is to reach a place of safety, a place where existential risk is low and stays low. I call this “existential security.” This requires us to do the work commonly associated with reducing existential risk by working to defuse the various threats. It also requires putting in place the norms and institutions to ensure existential risks stay low forever. This really is within our power. There appear to be no major obstacles to humanity lasting many millions of generations. If only that were a key global priority! There are great challenges in getting people to look far enough ahead and to see beyond the parochial conflicts of the day. But the logic is clear and the moral argument is powerful. It can be done, but that is not the end of our journey.

Achieving existential security would give us room to breathe. With humanity’s long-term potential secured, we would be past the precipice, free to contemplate the range of futures that lie open before us. And we could take time to reflect upon what we truly desire, upon which of these visions for humanity would be the best realization of our potential. We can call this “the long reflection.”

We rarely think this way. We focus on the here and now. Even those of us who care deeply about the long-term future need to focus most of our attention on making sure we have a future. But once we achieve existential security, we will have as much time as we need to compare the kinds of futures available to us and judge which is best.

So far, most work in moral philosophy has focused on negatives, on avoiding wrong actions and bad outcomes. The study of positives is at a much earlier stage of development. During the long reflection, we would need to develop mature theories that allow us to compare the grand accomplishments our descendants might achieve with eons and galaxies as their canvas. While moral philosophy would play a central role, the long reflection would require insights from many disciplines. For it isn’t just about determining which futures are best, but which are feasible in the first place — and which strategies are most likely to bring them about. This would require analysis from science, engineering, economics, political theory, and beyond.

Our ultimate aim, of course, is the final step: fully achieving humanity’s potential. But this must wait upon step two: serious reflection about which future is best and how to achieve it without any fatal missteps. And while it wouldn’t hurt to begin such reflection now, it is not the most urgent task. To maximize our chance of success, we need first to get ourselves to safety — to achieve existential security. Only we can make sure we get through this period of danger and give our children the very pages upon which they will author our future.
