Existential risk mitigation: What I worry about when there are only bad options

  • (This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).

  • (Written in my personal capacity, reflecting only my own, underdeveloped views).

  • (Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.)


Epistemic status: doubt. Shallow ethical speculation, including attempts to consider ethical perspectives on these questions both closer to and further from my own.


If I had my way: great qualities for existential risk reduction options

We know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be:

  • functionally ideal:

    • effective (significantly reduces the risks if successful, ideally permanently),

    • reliable (high chance of success),

    • technically feasible,

    • politically viable,

    • low-cost,

    • safe (little to no downside risk, i.e. it fails gracefully),

    • robust (effective, reliable, feasible, viable and safe across many possible future scenarios),

    • [...]

  • ethically ideal:

    • pluralistically ethical (no serious moral costs or rights violations entailed by the intervention, under a wide variety of moral views),

    • impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / ‘paretotopian’ (everyone is left better off, or at least no one is made badly worse off),

    • widely accepted (everyone (?) agrees to the strategy’s deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)),

    • choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future),

    • [...]

  • etc., etc.

But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim to hit many or all of these criteria. But such strategies won’t be the only actions pursued around extreme risks.

What if the only feasible strategies to respond to existential risks—or the strategies that will most likely be pursued by other actors in response to existential risk—are all, to some extent, imperfect, flawed or ‘bad’?

Three ‘bad’ options and their moral dilemmas

In particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions.

All three plausibly constitute ‘(somewhat) bad’ options. I don’t want to say that these strategies should not be pursued (e.g. they may still be ‘least-bad’, given their likely alternatives; or ‘acceptably bad’, given an evaluation of the likely benefits versus costs). I also don’t want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world).

But I do believe that all of them create moral dilemmas or tradeoffs that I am uncomfortable with, and that taking one or another view on whether to use them entails risky ‘failure modes’. The existential risk community is not alone in facing these dilemmas: many other communities, movements, or actors that are interested in or linked to existential risks face them too, whether they realise it or not. But these dilemmas do keep me up at night, and we should wrestle with them more.

Non-universal escape hatches & partial shields

Many existential risks (e.g. AI risk) are relatively all-or-nothing: if mitigation fails, no defense or holdout seems plausible, and no one survives.

Others, however (e.g. nuclear war, extreme biorisks), are or at least appear potentially shieldable or escapable. People have proposed various strategies and avenues (from island refuges to submarines, bunkers, and off-planet refuges) by which we could ensure that at least some people survive a catastrophe, if outright mitigation efforts fail.

In this case, I worry about two converse ethical risks:

Existential desertion/escapism

  • Existential desertion/escapism: taking an escape route from an existential risk, while leaving many others behind to suffer it; or implementing a defensive policy that only shields a small part of today’s world from the risk:

    • (weak) escape hatch examples: common public debates over ‘preppers’, ‘elite survivalists’, ‘abandon Earth’ space programs, etc.

    • (weak) partial shield examples: climate change/biorisk plans that settle for minimizing the impact on, or exposure of, wealthy nations; vaccine hoarding; etc.

    • This position seems most ethically challenging when:

      • the escape hatch is out of reach for many who want it (e.g. it is costly and resource-intensive);

      • your escape hatch or partial shield reroutes significant resources which would have been critical for the collective mitigation effort;

      • your visible defection erodes common trust and morale (signaling distrust in the viability of joint efforts), and derails any coalitional projects to pursue or implement collective mitigation responses;

      • you are disproportionately responsible for generating the existential risk in the first place, or you could otherwise take significant steps to mitigate it (e.g. nuclear state leaders’ bunkers);

      • you are a really bad person, and have such tools of value lock-in available to you that the continued survival of your descendant line seems likely to be net-bad;

      • your ethical framework prioritises the risk’s impacts on today’s world and people over its impacts on future people;

    • This position seems less ethically challenging when:

      • the rest of the world still does not take seriously the imminent risk you’re concerned about, in spite of your repeated attempts to warn them;

      • the resources used for the local escape hatch are minor or non-fungible;

      • (under some moral views:) the people taking the escape hatch are less responsible for generating or sustaining the risk in the first place;

      • you sponsor the escape hatch for other people, or set up random escape hatch lotteries;

      • you adopt a total utilitarian or longtermist perspective under which there is just a staggering premium on survival by anyone;

Existential conscription

  • existential conscription: (i.e. ‘the existential crab bucket’): the reverse of existential desertion/escapism: refusing to allow some people to take the escape or protection strategies available to them (strategies that would ensure human survival even if collective risk mitigation projects fail to work). For instance, because you want to ensure everyone directs their energies and resources at the collective project of ensuring everyone alive is saved (even if at much lower odds), and/or because you disapprove of people going it alone and free-riding;

    • examples: objections to (existential-risk-motivated) space colonization on grounds of the expected ‘moral hazard’ of abandoning the Earth; (fictional example: ‘Escapism’ and space travel as a crime against humanity in Cixin Liu’s The Dark Forest)

    • (very) weak example: early COVID-19 policies discouraging the public from buying face masks, in order to reserve supplies for public health workers;

    • This position seems most ethically challenging when:

      • the world hasn’t actually gotten a solid collective risk mitigation program underway to address the risk;

      • mitigation of the existential risk in question isn’t strongly dependent on mass public action (or on the actions of the people aiming to leave);

      • you disapprove of individual responses to collective problems on the basis of political aesthetics;

      • you selectively judge and/or block escapist attempts primarily on political grounds, etc.

    • This position seems less ethically challenging when:

      • there is strong reason to believe that the collective mitigation of the existential risk is possible, but is a weakest-link problem, one that also faces significant free-rider concerns;

      • successful resolution of the existential risk is strongly dependent on the marginal resources that would be drained away (either directly, or through a cascading collapse in coalitional trust);

      • ...?

Unilateral high-risk solutions

Some possible solutions to existential risks might have a large risk of (non-existential but significant) catastrophe if their implementation fails or things do not go entirely to plan.

This again creates two linked morally complex positions:

Existential wagering/roulette

  • existential wagering: (or: ‘existential roulette’?): taking some strategy that might preserve all of humankind from an imminent existential risk, but which risks the lives and/or welfare of a significant number of people alive today if not all goes to plan. (A minimal sketch of the wager’s expected-value structure follows this list.)

    • In some sense, this is the inverse of something like the ‘nuclear peace’ gambit (which [under charitable interpretations] stakes existential or globally catastrophic downside consequences in order to protect large parts of the world population from the likely threat of frequent great power wars).

    • Example: Yudkowsky: “if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chan[c]e of killing more than one billion people, I’d take it”

    • Uncertain examples:

      • proposals for geoengineering which might pose major ecological risks if there is scientific error, or if deployed without the proper governance framework in place;

      • proposals for extensive societal reorientation or moral revolution that would require historically rare levels of global buy-in and sacrifice (e.g. degrowth, half-earth socialism, …); especially if these approaches misjudge crucial political conditions (e.g. they may not benefit from a gradual global transformation in ecological awareness, but instead face sustained global political opposition and conflict); (low confidence in this analysis);

      • What would not qualify: Andreas Malm’s How To Blow Up A Pipeline, as the downside risks of direct action seem unlikely to be sufficiently catastrophic;

    • This position seems most ethically challenging if:

      • your moral framework emphasises the prevention of harm (or of exposure to risk);

      • the decision to deploy the risky intervention is taken by people who are largely shielded from the harms of its failure (or success);

      • the intervention’s failure modes create unequal or differential impacts on different (demographic/political) groups;

      • the intervention’s global impacts aren’t a hypothetical or a risk, but a guaranteed cost;

      • the existential catastrophe is still some time off, so there is no need to spin the wheel quite yet;

      • ...

    • This position seems (slightly) less ethically challenging if:

      • after significant exploration and red-teaming, you’ve not yet identified any closely adjacent alternative plans that would avoid this risk;

      • you have (extremely) strong reason to expect the disaster is extremely close, and no one has other, better solutions in reserve; (we’re at the game’s final moveset, so it might as well be this);

      • exposure to the intervention’s failure modes is significant but mostly globally random;

      • if exposure to the costs is not random, you’ve at least followed a process to solicit buy-in from the at-risk parties (i.e. their agreement that they are willing to face the risks), and/or the decision to go ahead is taken by those people.
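
To make the structure of this wager explicit, here is a minimal expected-loss sketch (my own illustrative framing, with made-up symbols; it is not anyone’s actual decision procedure). Suppose the catastrophe occurs with probability $q$ if we abstain, at a loss of $V$; the risky intervention succeeds with probability $p$ (removing the risk entirely), and otherwise incurs a catastrophic-but-survivable cost $C$ while leaving the risk in place:

$$\mathbb{E}[\text{loss} \mid \text{abstain}] = qV, \qquad \mathbb{E}[\text{loss} \mid \text{deploy}] = (1-p)(C + qV)$$

On this toy model, deploying is favoured exactly when $(1-p)\,C < p\,q\,V$. The dilemmas above arise not because this arithmetic is hard, but because $C$ and $V$ typically fall on different people, and because $p$, $q$, $C$ and $V$ are deeply contested estimates.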

Existential gridlock/veto

  • existential gridlock/veto: the (partial) reverse of existential wagering: refusing to allow others to deploy any solutions to imminent existential risks, because you perceive that they could pose a nonzero risk to some part of the world;

    • examples:

      • opposition to geoengineering (grounded in downside-risk concerns rather than moral-hazard concerns); …

      • ?

    • This position seems most ethically challenging if:

      • there are no other feasible plans for addressing the existential risk;

      • the costs are to your parochial political interests rather than to sacred values, or to sacred values rather than to the lives of large populations (uncertain about this, as in some moral systems certain sacred values would take precedence);

      • your opposition is strongly grounded in status quo bias, and doesn’t pass a simple reversal test (e.g. if you lived in a world where the intervention was already deployed, you would argue against its cessation);

      • you impose an impossibly high burden of proof for establishing safety on the intervention’s proponents (i.e. you haven’t specified, to yourself or others, the risk threshold at which you’d accept deployment), such that it is unlikely that any intervention, however thoroughly vetted, will ever be deployed.

    • This position seems (slightly) less ethically challenging if:

      • we have strong reasons to expect that an intervention’s proponents are likely to overestimate or overstate its chances of success, and to understate the impacts of its failure modes; so both they and the rest of us should be protected from running these risks;

Politically or ethically partisan solutions

Some possible solutions to existential risks might have the trait that, whether deliberately or inadvertently, and whether in their means or in their ends, they end up favouring or strengthening some political actors, or some ethical frameworks, over others.

Such approaches can give rise to three morally complex responses or risks:

Existential co-option

  • existential co-option: adopting interventions that might help mitigate an existential risk, but which end up empowering certain actors. These actors are not otherwise bound to our values, and they may end up using either the threat of the risk, or the established mitigation policies, in ways that are unrelated to mitigating the risk itself and aimed instead at e.g. social control or value lock-in;

    • Examples: some proposals for surveillance systems;

    • [I got confused & tired here, and hope to work this out some other day]

    • [...]

Existential blackmail

  • existential blackmail: (e.g. ‘if we can’t have the world, no one can—certainly not our political opponents’); a version of existential veto where you refuse to allow some other party to attempt to solve an existential risk for everyone, unless they do so in a way that fits your preferred means, and on terms that ensure the saved world reflects your own ideals or values.

    • Examples:

    • [I got confused & tired here, and hope to work this out some other day]

    • ...

Existential spite

  • existential spite: (i.e. ‘existential I-told-you-so’): the passive version of existential blackmail: a lack of interest in, or motivation for, exploring potential solutions to existential risks, because under the status quo your political adversaries/outgroup are likely to get the credit for any solution, and/or are likely to inherit the resulting world and future, either of which is anathema;

    • Examples: left as an exercise for the reader;

Conclusion

Writing this draft is a bit sobering. My point is not that the above solutions should all be avoided, nor that any of the ‘approaches’ above are entirely illegitimate responses or views. Nor is it necessarily the case that the correct approach lies somewhere in the middle of any of them. Ultimately, the key point is that none of these positions are easy, obvious, unproblematic, or safe responses in the face of existential risk. This should trouble us and provoke thought, not just in the EA community, but also in the academic existential risk/Global Catastrophic Risk community, and in any other political movements that realise their stake in society’s historical trajectory around extreme risks.

I want better solutions. But while we wait, I want better ways of thinking about bad ones.