The option value argument doesn’t work when it’s most needed

If you’re uncertain whether humanity’s future will be net positive, and therefore whether extinction risk reduction is good, you might reason that we should keep civilization going for now so we can learn more and, in the future, make a better-informed decision about whether to keep it going. After all, if we die out, we can never decide to bring humanity back. But if we continue existing, we can always shut everything down later. Call this the option value argument.
I don’t think this argument is very strong. It is exactly in the worlds where things go very badly that the option value argument doesn’t work. The inhabitants of such dystopian worlds are very unlikely to have the ability and/or motivation to carefully reflect and coordinate to stop existing, even if that would be the best thing to do.[1] If they did, why would these worlds be so dystopian?
That is, continuing to exist will not give us the option to stop existing later when most needed. Humanity won’t have that much control or impartially altruistic motivations in worlds where the most severe s-risks occur. If the future is going very badly, we probably won’t decide to end civilization to prevent it from getting worse. The worst s-risks don’t happen when things are going so well that we can stop to reflect and decide to steer the future in a better direction. Many things have to go just right for humanity to achieve this. But if we’re on track to create s-risks, things are not going right.
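To make the structure of this objection explicit, here is a toy decomposition (my own rough illustration, not drawn from the quoted sources; splitting futures into just “reflective” and “locked-in” worlds is deliberately coarse):

\[
\mathrm{OV} \;=\; P(\text{reflective}) \cdot B_{\text{reflective}} \;+\; P(\text{locked-in}) \cdot B_{\text{locked-in}}
\]

Here \(\mathrm{OV}\) is the option value of keeping civilization going, \(B_{\text{reflective}}\) is the benefit of retaining the “shut everything down later” option in futures where agents could and would exercise it, and \(B_{\text{locked-in}}\) is that benefit in dystopian or locked-in futures, including the worst s-risk scenarios. The argument above amounts to saying that \(B_{\text{locked-in}} \approx 0\) (the option exists on paper but won’t be exercised) and \(B_{\text{reflective}}\) is modest (those futures rarely need the option), so the option value is smallest exactly where most expected disvalue lies.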
The important point here is that we can’t rely on future agents to avoid s-risks by default. Reducing extinction risk doesn’t entail s-risk reduction (especially when we consider which world gets saved). Some resources should go toward preventing worst-case outcomes in advance. To be clear, the takeaway is not that we should consider increasing extinction risk, but rather that we should devote some effort toward increasing the quality of the future conditional on humanity surviving.
Below, I’ll list some quotes explaining this point in more detail.
The expected value of extinction risk reduction is positive by Jan M. Brauner and Friederike M. Grosse-Holz

Brauner and Grosse-Holz discuss “Why the ‘option value argument’ for reducing extinction risk is weak”:
Finally, we consider if future agents could make a better decision on whether to colonize space (or not) than we can, so that it seems valuable to let them decide (option value).
...
If we can defer the decision about whether to colonize space to future agents with more moral and empirical insight, doing so creates option value (part 1.3). However, most expected future disvalue plausibly comes from futures controlled by indifferent or malicious agents. Such “bad” agents will make worse decisions than we, currently, could. Thus, the option value in reducing the risk of human extinction is small.
The whole section 1.3, “Future agents could later decide not to colonize space (option value)”, is relevant and worth reading. In particular, the subsection “Only the relative good futures contain option value”:

For any future scenario to contain option value, the agents in that future need to surpass us in various ways, as outlined above. This has an implication that further diminishes the relevance of the option value argument. Future agents need to have relatively good values and be relatively non-selfish to decide not to colonize space for moral reasons. But even if these agents colonized space, they would probably do it in a relatively good manner. Most expected future disvalue plausibly comes from futures controlled by indifferent or malicious agents (like misaligned AI). Such “bad” agents will make worse decisions about whether or not to colonize space than we, currently, could, because their preferences are very different from our (reflected) preferences. Potential space colonization by indifferent or malicious agents thus generates large amounts of expected future disvalue, which cannot be alleviated by option value. Option value doesn’t help in the cases where it is most needed (see footnote for an explanatory example).[45]

Cause prioritization for downside-focused value systems by Lukas Gloor
Some people have argued that even (very) small credences in upside-focused views, such as 1-20% for instance, would in itself already speak in favor of making extinction risk reduction a top priority because making sure there will still be decision-makers in the future provides high option value. I think this gives by far too much weight to the argument from option value. Option value does play a role, but not nearly as strong a role as it is sometimes made out to be. To elaborate, let’s look at the argument in more detail: The naive argument from option value says, roughly, that our descendants will be in a much better position to decide than we are, and if suffering-focused ethics or some other downside-focused view is indeed the outcome of their moral deliberations, they can then decide to not colonize space, or only do so in an extremely careful and controlled way. If this picture is correct, there is almost nothing to lose and a lot to gain from making sure that our descendants get to decide how to proceed.
I think this argument to a large extent misses the point, but seeing that even some well-informed effective altruists seem to believe that it is very strong led me to realize that I should write a post explaining the landscape of cause prioritization for downside-focused value systems. The problem with the naive argument from option value is that the decision algorithm that is implicitly being recommended in the argument, namely focusing on extinction risk reduction and leaving moral philosophy (and s-risk reduction in case the outcome is a downside-focused morality) to future generations, makes sure that people follow the implications of downside-focused morality in precisely the one instance where it is least needed, and never otherwise. If the future is going to be controlled by philosophically sophisticated altruists who are also modest and willing to change course given new insights, then most bad futures will already have been averted in that scenario. An outcome where we get long and careful reflection without downsides is far from the only possible outcome. In fact, it does not even seem to me to be the most likely outcome (although others may disagree). No one is most worried about a scenario where epistemically careful thinkers with their heart in the right place control the future; the discussion is instead about whether the probability that things will accidentally go off the rails warrants extra-careful attention. (And it is not as though it looks like we are particularly on the rails currently either.) Reducing non-AI extinction risk does not preserve much option value for downside-focused value systems because most of the expected future suffering probably comes not from scenarios where people deliberately implement a solution they think is best after years of careful reflection, but instead from cases where things unexpectedly pass a point of no return and compassionate forces do not get to have control over the future. Downside risks by action likely loom larger than downside risks by omission, and we are plausibly in a better position to reduce the most pressing downside risks now than later. (In part because “later” may be too late.)
This suggests that if one is uncertain between upside- and downside-focused views, as opposed to being uncertain between all kinds of things except downside-focused views, the argument from option value is much weaker than it is often made out to be. Having said that, non-naively, option value still does upshift the importance of reducing extinction risks quite a bit – just not by an overwhelming degree. In particular, arguments for the importance of option value that do carry force are for instance:
There is still some downside risk to reduce after long reflection
Our descendants will know more about the world, and crucial considerations in e.g. infinite ethics or anthropics could change the way we think about downside risks (in that we might for instance realize that downside risks by omission loom larger than we thought)
One’s adoption of (e.g.) upside-focused views after long reflection may correlate favorably with the expected amount of value or disvalue in the future (meaning: conditional on many people eventually adopting upside-focused views, the future is more valuable according to upside-focused views than it appears during an earlier state of uncertainty)
The discussion about the benefits from option value is interesting and important, and a lot more could be said on both sides. I think it is safe to say that the non-naive case for option value is not strong enough to make extinction risk reduction a top priority given only small credences in upside-focused views, but it does start to become a highly relevant consideration once the credences become reasonably large. Having said that, one can also make a case that improving the quality of the future (more happiness/value and less suffering/disvalue) conditional on humanity not going extinct is probably going to be at least as important for upside-focused views and is more robust under population ethical uncertainty – which speaks particularly in favor of highly prioritizing existential risk reduction through AI policy and AI alignment.
Beginner’s guide to reducing s-risks by Anthony DiGiovanni

It has been argued that, under moral uncertainty, the most robustly positive approach to improving the long-term future is to preserve option value for humans and our descendants, and this entails prioritizing reducing risks of human extinction (MacAskill). That is, suppose we refrain from optimizing for the best action under our current moral views (which might be s-risk reduction), in order to increase the chance that humans survive to engage in extensive moral reflection.[9] The claim is that the downside of temporarily taking this suboptimal action, by the lights of our current best guess, is outweighed by the potential upside of discovering and acting upon other moral priorities that we would otherwise neglect.
One counterargument is that futures with s-risks, not just those where humans go extinct, tend to be futures where typical human values have lost control over the future, so the option value argument does not privilege extinction risk reduction. First, if intelligent beings from Earth initiate space settlement before a sufficiently elaborate process of collective moral reflection, the astronomical distances between the resulting civilizations could severely reduce their capacity to coordinate on s-risk reduction (or any moral priority) (MacAskill 2022, Ch. 4; Gloor 2018). Second, if AI agents permanently disempower humans, they may cause s-risks as well. To the extent that averting s-risks is more tractable than ensuring AIs do not want to disempower humans at all (see next section), or one has a comparative advantage in s-risk reduction, option value does not necessarily favor working on extinction risks from AI.
Acknowledgments
Thanks to David Althaus and Lukas Gloor for comments and discussion.
[1] I personally don’t expect (post-)humans will carefully reflect and coordinate to do the best thing even in futures that go fairly well, but that’s more open to discussion. And in any case, it’s not a crux for the option value argument.