The most important climate change uncertainty

Short summary: we don’t know how much warming will make civilization collapse, and it might not be unreasonable to think of climate change as a major x-risk on these grounds.

Background/​Epistemic status: I’m a climate scientist who has been in and around the Effective Altruism community for quite a few years and has spent some time thinking about worst-case outcomes. The following content was partially inspired by this forum comment by SammyDMartin; it includes many of the same points, but also takes things further and potentially in some different directions. I claim no certainty (or even high confidence) in anything said here — the main goal is to encourage discussion.

Introduction

Outside of the EA world, discussion of climate change as a top-priority emergency and/​or existential threat is ubiquitous: from politicians to scientists to popular movements and books. EA literature typically takes a different view: climate change is bad but not an existential threat, and there are many other threats that could cause more damage (like unaligned AI or engineered pandemics) and deserve our marginal resources more.

The disconnect between these two views on climate change has bothered me for a long time. Not because I necessarily agree strongly with one side or the other, but because I just haven’t had a good model for why this disconnect exists. I don’t think the answer is quite as simple as “the people who think climate change is a top priority haven’t thought seriously about existential risks and cause prioritization”; anecdotally, I’ve met many people within EA who have, and who still have a nagging feeling that EA is somehow downplaying the risks from climate change. So what is going on?

This post examines two distinct but related (and potentially provocative) claims that I think cut to the heart of the disconnect:

  1. most of the subjective existential risk from climate change comes from the uncertainty in how much damage is caused by a given amount of warming

  2. it’s not obviously unreasonable to think the existential risk from climate change is as large as that from AI or pandemics.

The focus is on the importance of climate change as a cause area, not neglectedness or tractability; I’ll revisit this omission a bit at the end. The discussion is also purposely aimed at a longtermist/​existential-risk-prioritizing audience; the importance of climate change under other moral views can and should be discussed, but I will not do so here.

Background

Following, e.g., Halstead, it is instructive to split the question of climate change damages into three numbered questions:[1]

  1. How much greenhouse gas will humanity emit?

  2. How much warming will there be per unit CO₂-equivalent greenhouse gas?

  3. How much damage will be caused (to humanity) by a given amount of warming?

Question 1 can in principle be addressed by economic and political projections, thinking about the economics of decarbonization, and so on. Question 2 is addressed by climate science: climate modeling, looking at past climate changes, and so on. Question 3 has typically been addressed by economic modeling (more on this later) and by considering hard limits on human habitability[2].

Much existing EA analysis of climate change as an existential risk focuses on Question 2[3]. This makes sense in part because it’s the most well-quantified: scientists and the IPCC publish probability distributions of this value, called the “climate sensitivity”[4]. Conditional on a given carbon emission scenario, we can then find out the probability of a certain amount of warming by a certain date (e.g. 2100). If you think that some extreme level of warming will cause existential catastrophe (say 6/10/14 °C, whichever you fancy[5]), then the existential risk from climate change is simply the probability of exceeding that degree of warming; the sketch below summarizes this. So far, so good.

If we assume that there is a threshold level of warming that will trigger existential catastrophe, the probability of this happening (i.e. the existential risk) is the red shaded area.

The distinction between existential catastrophe and human extinction is important; the former is a destruction of humanity’s long-term potential, while the latter involves the death of every single human being. Extinction would obviously destroy humanity’s potential, but so too would an unrecoverable collapse of global civilization. The existential risk from climate change includes both the risk that climate change kills all humans and the risk that climate change triggers a chain of events (involving war, famine, mass migration, etc.) leading to unrecoverable collapse.

Claim 1: uncertainty in damages is the main source of existential risk

The first claim is the following. Most of the subjective existential risk from climate change comes from the uncertainty about how much damage will be caused by a given amount of warming (Question 3). In other words, it comes more from the uncertainty about what warming level is existential, rather than the uncertainty about how much warming there will be.

Why to believe this

Let’s start with a thought experiment. For simplicity, assume that global warming by 2100 has a 1% likelihood of being above 6 °C and a 50% likelihood of being above 3 °C (these numbers are reasonable, see e.g. here). Let’s say, just for the sake of argument, that climate change will trigger unrecoverable collapse if and only if global warming exceeds 6 °C. Then the existential risk from climate change is 1%, and it arises entirely due to our uncertainty in the warming per unit CO₂ (Question 2).

But if we’re not confident about how much it takes to induce unrecoverable collapse, something interesting can happen. Let’s say we assign a 2% chance to the possibility that 3 °C of warming is sufficient. Now, the subjective risk of climate-change-induced existential catastrophe approximately doubles (1% + [50% * 2%] = 2%)[6], and this increase is entirely due to our uncertainty in the damages caused by warming (Question 3).

The same as above, but now we’re unsure about how much warming will trigger existential catastrophe. Even a small chance of a lower threshold (blue) would substantially affect the total subjective risk.

This effect doesn’t depend on how optimistic or pessimistic you are with respect to humanity’s resilience to warming. Let’s say you first think that existential catastrophe will only occur somewhere very far into the tail of extreme warming, and that there is a 1/10,000 chance that warming reaches this level. Now, even a 2/10,000 belief that 3 °C warming could trigger unrecoverable collapse is still enough to double the total existential risk.
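To make the arithmetic in the two examples above explicit, here is a minimal sketch in Python. The probabilities are the same illustrative ones used above, not estimates I endorse, and the function name is just for exposition.

```python
# Minimal sketch of the subjective-risk arithmetic in the two examples above;
# all probabilities are illustrative, not endorsed estimates.
def subjective_risk(p_extreme, p_moderate, p_moderate_suffices):
    """P(extreme warming) + P(moderate warming) * P(moderate warming is enough
    to trigger collapse), ignoring the small double counting noted in footnote 6."""
    return p_extreme + p_moderate * p_moderate_suffices

# First example: 1% chance of >6 °C, 50% chance of >3 °C, 2% chance that 3 °C suffices.
print(subjective_risk(0.01, 0.5, 0.02))   # ~0.02: roughly double the 1% baseline
# Tail example: 1/10,000 baseline, 2/10,000 belief that 3 °C suffices.
print(subjective_risk(1e-4, 0.5, 2e-4))   # ~2e-4: again roughly double the baseline
```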

So the question then becomes: how well can we constrain where such a collapse threshold might be? We could consider this as a probability distribution, similar to how we think about how much warming there will be per unit CO₂ (climate sensitivity; Question 2). Purely schematically, this could look something like this:

A probability distribution for where the threshold level of warming for collapse could be, analogous to distributions of the warming per unit CO₂. The shape of the distribution is purely schematic.
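For concreteness, here is one (purely schematic) way such a threshold distribution could be combined with a warming distribution to give a subjective risk: sample from both and count how often the warming exceeds the collapse threshold. The lognormal shapes and parameters below are invented for illustration only and carry no empirical weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Warming by 2100 (°C): schematic lognormal with median ~3 °C and a long right tail.
warming = rng.lognormal(mean=np.log(3.0), sigma=0.35, size=n)

# Collapse threshold (°C): schematic, very wide lognormal reflecting deep uncertainty.
threshold = rng.lognormal(mean=np.log(8.0), sigma=0.6, size=n)

# Subjective existential risk under the threshold story: P(warming > threshold).
print(f"P(warming exceeds the collapse threshold) ≈ {np.mean(warming > threshold):.3f}")
```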

But it’s worth noting that this is really different from understanding the climate sensitivity: there we can get nice neat probability distributions because the climate system obeys physical laws that we (mostly) understand very well[7], and because we can draw on the wealth of data about how CO₂ and climate covaried in the deep past[8]. On the other hand, the dynamics of 8 billion people interacting through a tangled web of social systems are not described by a manageable set of equations, and by necessity there are no past examples of climate change leading to global collapse that we can use to calibrate our models. I’ll get a little more into the details later, but it seems to me that a priori we should be very skeptical of our current ability to constrain such a collapse threshold anywhere near as well as the climate sensitivity. Therefore, the uncertainty in the damages is probably where much of the existential risk comes from (subjectively speaking).

The argument here is not yet about what kinds of probabilities it’s actually reasonable to assign to existential catastrophe at moderate levels of warming. It’s just arguing that the constraints on this are weak enough — much weaker than those on climate sensitivity — that this is the uncertainty that dominates in calculations of the existential risk. If this is true, one interesting corollary would be that further constraining the climate sensitivity may not be of much use in constraining the extent to which we should believe that climate change poses an existential threat.

How this might be wrong

One way this could be wrong is if the story about some threshold level of warming for existential catastrophe (unrecoverable collapse and/or human extinction) is fundamentally inapplicable here. For example, perhaps there are no such thresholds, or there are only thresholds for collapse in general and it’s something else that determines whether or not civilization can recover. I agree that things are probably not this simple, but right now the threshold story seems to me a useful enough approximation of the real world to try to learn something from it.

Another way this could be wrong is if it’s somehow unreasonable to put non-negligible probabilities on collapse at moderate levels of warming, for example if the existing evidence/​literature rules this out with sufficient certainty. I’ll discuss this further in the next section.

Claim 2: Climate change versus other existential risks

The second claim might seem especially provocative to many readers, but please bear with me.

It is not obviously unreasonable to think that the existential risk from climate change is as large (i.e. of a similar order of magnitude) as the risk from AI or pandemics.

To be clear, the argument is not that this view is correct, just that it is not obviously unreasonable; the distinction will become important. I’ve also purposely made this claim a very strong one: even if you don’t end up agreeing, consider as we go whether you might agree with a weaker version[9].

Why to believe this

Let’s look at some numbers. The Precipice estimates the existential risk[10] from AI at 1 in 10, from engineered pandemics at 1 in 30, and from climate change at 1 in 1000. 80,000 Hours goes even lower for climate change, suggesting that its total contribution to existential risk is “something like 1 in 10,000”.

The “standard approach” of assuming that climate change can only cause existential catastrophe at extreme levels of warming — say much higher than 6 °C — straightforwardly gives probabilities consistent with these estimates. For example, the chance of 6 °C warming by 2100 might be on the order of 1/100[11], and the probability of higher levels of warming is much smaller. If that’s true, it’s clearly unreasonable to argue based on the uncertainty in warming that the existential risk from climate change is of order 1/10.

But what about arguing for a large existential risk from climate change based on uncertainty in how much it takes to induce existential catastrophe? As a partial example of such a view, we can use Mark Lynas’s appearance on the 80,000 Hours podcast, in which he suggests that

“[global civilizational collapse has] a 30 to 40% chance of happening at three degrees, and a 60% chance of happening at four degrees, and 90% at five degrees, and 97% at six degrees.”

Now, the 50% chance of warming reaching 3 °C cited in the thought experiment is approximately reasonable[12], and together with a 30% chance of collapse occurring at 3 °C we would have a risk of climate-change-induced collapse of at least 1/10 (0.5*0.3=0.15)[13]. If this collapse is unrecoverable, or even just somewhat likely to be so, the existential risk from climate change would be of the same order — as high as that from AI or pandemics.

How reasonable is it to assign these kinds of probabilities to collapse at moderate levels of warming? (I’ll address the issue of recovery from collapse later.) Many in the EA community seem to disagree[14]. Lynas’s book (Our Final Warning) has also been criticized on the EA Forum for potential misinterpretation of evidence. Given all of this, it may be tempting to just dismiss this view out of hand; however, let’s examine this more carefully.

For convenience, let’s use p to denote the probability that 3 °C of global warming can induce global civilizational collapse. What would a good justification for a certain estimate of p actually look like? As I noted earlier, understanding the damage caused by a given amount of warming is much harder — and a fundamentally different problem — than understanding the warming for a given amount of CO₂. We can only come up with well-constrained probability distributions for the latter because the underlying physics is well understood (i.e. can mostly be described using relatively simple equations) and we have applicable empirical records of past climate change. But this isn’t the case for the former problem.

What about the economic models of climate change impacts? I don’t claim to be much of an expert on this subject, but you don’t need to be one to be skeptical of how useful these models are for quantifying risks of global collapse (even if they are very useful for other purposes). These models rarely even attempt[15] to consider wars, mass migration, famine, or any of the other processes that would probably be fundamental to a climate-change-induced collapse. A model that is designed so as not to consider process X is irrelevant for quantifying the likelihood of X[16]. Imagine if someone told you “I know that this asteroid isn’t going to hit Earth, because I studied its trajectory in a simulation that doesn’t allow for the possibility of asteroids hitting Earth”.

The purpose of pointing out these weaknesses is not to argue for or against any particular value of p. The purpose is to suggest that, given these weaknesses, people’s estimates of p are likely substantially (and perhaps dominantly) affected by their intuitions. I don’t mean that people are “just” using their intuitions — one can have detailed discussions about specific causal pathways (war, famine, etc.) and their likelihoods — but that, in the absence of models anywhere near as objective as those used to understand climate sensitivity, estimates of p on this basis will still end up being strongly coloured by people’s intuitions.

Some people’s estimates of p are very low (this seems to include most EAs who have written on the topic), and some people’s estimates — like Mark Lynas’s — are very high (I’m probably somewhere in the middle). But the key point, if they are indeed mostly based on intuition, is that it’s not immediately clear that any of these views is objectively more justified. So, while many within EA may favor low values of p (and if so, it could be quite interesting to ask why), a pessimistic estimate (even of the “p is order 10%” magnitude) is not obviously unreasonable.

Finally, let’s briefly consider the probability that a global collapse is unrecoverable — call this p_u. If the 30% probability of collapse considered initially refers only to collapse more generally, our calculation of existential risk is not complete without including p_u. But estimating this probability seems at least as hard as estimating the probability of collapse in the first place — in which case the above argument applies again! In other words, estimates of p_u also lack good objective constraints, and people’s estimates would likely be strongly driven by their intuitions. With a 50% chance of 3 °C warming and a 30% chance of collapse at 3 °C, a value of p_u = 0.5 will still yield an existential risk of order 1/10 (0.5*0.3*0.5=0.075). You would need great confidence in a small value of p_u to be able to dismiss climate change as an existential risk on this particular basis.
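To see how sensitive this conclusion is to p_u, here is the same back-of-the-envelope calculation repeated over a range of values for p_u (all numbers illustrative, as above):

```python
# Back-of-the-envelope calculation from the text; all numbers are illustrative.
p_warming_3C = 0.5   # chance of reaching 3 °C of warming by 2100
p_collapse   = 0.3   # Lynas-style chance that 3 °C induces global collapse

for p_u in (1.0, 0.5, 0.1, 0.01):   # chance that such a collapse is unrecoverable
    print(f"p_u = {p_u:>4}: existential risk ≈ {p_warming_3C * p_collapse * p_u:.4f}")
```

Only for quite small values of p_u (here, of order 1%) does the result fall back toward the 1/1000 range quoted earlier — which is the point: dismissing climate change on this particular basis requires confidence that a collapse would very likely be recoverable.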

How this might be wrong

Right now I can think of a few major ways in which this might be wrong. The first and most obvious one is if the case for low values of p or p_u can actually be made much more rigorously and conclusively than I have presented it here. In other words, if it can somehow be made clear that civilization is extremely unlikely to collapse at moderate levels of warming and/or is extremely likely to recover from such a collapse. If this is true, my only response is that I’d love to see this!

Next, perhaps I’m demanding too much of “models” in assessing the probabilities of certain outcomes. Among contributors to existential risk, the warming per unit CO₂ is very much an outlier in terms of how easy it is to constrain objectively. Compared to AI and pandemics, the fact that we have economic models at all might imply that we should be less uncertain about how damaging climate change will be relative to those risks. But again, the economic models are still very obviously flawed in terms of understanding existential risk, and I’d love to see much more detailed discussion on this until we have some clarity on what reasonable values of p are.

Related to this, maybe there’s some reason why it’s actually fine to rely on low values of p or p_u derived largely from intuition. I’m quite skeptical of this, but open to being convinced.

Finally, there’s a way Claim 2 could be wrong even if human civilization is actually as fragile as the pessimistic perspective suggests. In that case, the existential risk from AI and pandemics could also be much higher than the estimates of 1/10 and 1/30 quoted above (although clearly there’s a limit of 1), and so there could still be a huge difference in importance between these risks and climate change. If this is right, I think there’d still be something interesting here: the fragility itself would seem to be by far the dominant contributor to total existential risk, and we should think of ways to do something about that.

What now?

The motivation I gave at the beginning of this post was to understand the disconnect between climate optimists and pessimists, especially within EA. To help do this, I presented and examined two claims: that the majority of the existential risk from climate change is due to uncertainty in the damages caused by warming; and that it is not obviously unreasonable to think that the existential risk from climate change is similar to that from AI and pandemics.

A key point is that understanding how likely moderate levels of global warming are to cause global collapse is really hard; the same applies to how likely civilization is to recover from such a collapse. Different intuitions on this can end up leading to vast differences in the estimated existential risk from climate change: from the 1/1000 − 1/10000 risks given in past EA assessments, to something perhaps as large as 1/10 in the most pessimistic case. For estimates that rely a lot on intuition, it’s hard to assess objectively which position is more correct.

Of course importance is not all that matters; I did not consider neglectedness and tractability. In the case of climate change, part of the reason that it’s relatively less prioritized than AI and pandemics is that it’s considered much less neglected (see 80,000 Hours). This certainly seems reasonable. But it’s also worth noting that in this post we’ve been talking about disagreements about the importance of climate change that span many orders of magnitude, and this could have a substantial effect on relative prioritization regardless of neglectedness and tractability. Furthermore, a new understanding of where the key uncertainties/​sources of subjective risk are might open up new opportunities for impact.

Ultimately, regardless of how much you agree with any of this, I hope this post stimulates some useful discussion! I’m interested to hear all of your comments.

Acknowledgments

Helpful comments and discussions were provided by: Emily, Gatlen, Goodwin, Juan, Mira, Sarthak, Xuan. All mistakes are my own.

  1. ^

    This reduces damages from anthropogenic Earth system change to global warming only, which is far from ideal but sufficient for the purposes of this post.

  2. ^

    One classic example of this is the temperature beyond which humans would die of heat stress (see for example Sherwood and Huber 2010: An adaptability limit to climate change due to heat stress). Reaching this limit globally would certainly be very bad for humanity; however, this probably requires warming on the order of 10 °C or greater. This post will focus on the damages that can occur at much lower levels of warming, and so I won’t discuss this again here.

  3. ^

    See Halstead, 80000 Hours, and The Precipice (Ord, 2020, pages 102-113)

  4. ^

    There are nuances regarding definitions, but they’re not really relevant for our purposes.

  5. ^

    There is certainly a clear upper limit somewhere: if it’s not the heat stress mentioned above, then it’ll be the runaway greenhouse. However, both lie quite far in the long tail of warming (i.e. are quite unlikely).

  6. ^

    There’s a very small amount of double counting going on here, but I neglect it for simplicity. I also haven’t considered P(existential catastrophe) as a continuous function of warming: this would probably just make the effect larger anyway.

  7. ^

    Fluid dynamics, radiation, etc.

  8. ^
  9. ^

    For example: “It is not obviously unreasonable to think that the existential risk from climate change is much larger than current mainstream EA evaluations, even if it’s not quite on the level of AI or pandemics”

  10. ^

    Strictly speaking, the risk of existential catastrophe in the next 100 years.

  11. ^

    See discussion here.

  12. ^

    Again, compare to discussion here, which includes estimates from the IPCC Sixth Assessment Report.

  13. ^

    This even neglects the probabilities he assigns for collapse at warming beyond 3 °C.

  14. ^

    See the literature already discussed (The Precipice, 80,000 Hours profile, the podcast with Mark Lynas, etc.)

  15. ^

    I am not personally aware of any climate change impact models that do (especially widely used ones, which is what matters), but I am not an expert on this and so I might just not have heard of them.

  16. ^

    In fact, this paper shows that the ubiquitous climate economics model DICE is incapable of generating an economic collapse with any (!) level of climate-induced damages. Sound reasonable?