For other readers who might be similarly confused: there’s more in the profile on ‘indirect extinction risks’ and on other long-run effects on humanity’s potential.
Seems a bit odd to me to just post the ‘direct extinction’ bit, as essentially no serious researcher argues that there is a significant chance that climate change could ‘directly’ (and we can debate what that means) cause extinction. However, maybe this view is more widespread amongst the general public (and therefore worth responding to)?
On ‘indirect risk’, I’d be interested in hearing more on these two claims:
“it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly” (footnote 25); and
“our guess is that [climate change’s ‘indirect’] contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000”—which “still seems more than 10 times less likely to cause extinction than nuclear war or pandemics.”
If people are interested in reading more about climate change as a contributor to GCR, here are two CSER papers from last year (and we have a big one coming out soon):
Re-framing the threat of global warming: an empirical causal loop diagram of climate change, food insecurity and societal collapse
Assessing Climate Change’s Contribution to Global Catastrophic Risk
I think there is good reason to focus on direct extinction given their audience. As they say at the top of their piece, “Across the world, over half of young people believe that, as a result of climate change, humanity is doomed”.
What is your response to the argument that because the direct effects of AI, bio and nuclear war are much larger than the effects of climate change, the indirect effects are also likely much larger? To think that climate change has a bigger scale than e.g. bio, you would have to think that even though climate’s direct effects are smaller, its indirect effects are large enough to outweigh the direct effects. But the direct effects of biorisk seem huge. If there is genuine democratisation of bio WMDs, then you get regular cessation of trade and travel, there would need to be lots of surveillance, would everyone have to live in a biobubble? etc. The indirect effects of climate change that people talk about in the literature stem from agricultural disruption leading to increased intrastate conflict in low-income countries (though the strength, and even existence, of the causal connection is disputed). While these indirect effects are bad, they are orders of magnitude less severe than the indirect effects of biorisk. I think similar comments apply to nuclear war and to AI.
The papers you have linked to suggest that the main pathway through which climate change might destabilise society is via damaging agriculture. All of the studies I have ever read suggest that the effects of climate change on food production will be outpaced by technological change, and that food production will increase. For example, the chart below shows per capita food consumption on different socioeconomic assumptions and on different emissions pathways for 2.5 degrees of warming by 2050 (for reference, 2.5 degrees by 2100 is now widely thought to be business as usual). Average per capita food consumption increases relative to today on all socioeconomic pathways considered.
Source: Michiel van Dijk et al., ‘A Meta-Analysis of Projected Global Food Demand and Population at Risk of Hunger for the Period 2010–2050’, Nature Food 2, no. 7 (July 2021): 494–501, https://doi.org/10.1038/s43016-021-00322-9.
Note that “humanity is doomed” is not the same as ‘direct extinction’, as there are many other ways for us to waste our potential.
I think it’s an interesting argument, but I’m unsure that we can get to a rigorous, defensible distinction between ‘direct’ and ‘indirect’ risks. I’m also unsure how this framework fits with the “risk/risk factor” framework, or the ‘hazard/vulnerability/exposure’ framework that’s common across disaster risk reduction, business and government planning, etc. I’d be interested in hearing more in favour of this view, and in favour of the two claims I picked out above.
We’ve talked about this before, but in general I’ve got such uncertainty about the state of our knowledge and the future of the world that I incline towards grouping together nuclear, bio and climate as being in roughly the same scale/importance ‘tier’ and then spending most of our focus seeing if any particular research strand or intervention is neglected and solvable (e.g. your work flagging something underexplored like cement).
On your food production point, as I understand it the issue is more about shocks than averages: food system shocks can lead to “economic shocks, socio-political instability as well as starvation, migration and conflict” (from the ‘causal loop diagram’ paper). However, I’m not a food systems expert; the best people to discuss this with are our Catherine Richards and Asaf Tzachor, authors of e.g. Future Foods For Risk-Resilient Diets.
I’m not sure I understand why you don’t think the in/direct distinction is useful.
I have worked on climate risk for many years and I genuinely don’t understand how one could think it is in the same ballpark as AI, biorisk or nuclear risk. This is especially true now that the risk of >6 degrees seems to be negligible. If I read about biorisk, I can immediately see the argument for how it could kill more than 50% of the population in the next 10-20 years. With climate change, for all the literature I have read, I just don’t understand how one could think that.
You seem to think the world is extremely sensitive to agricultural disturbances of a kind the evidence suggests we live through all the time: the projected shocks are well within the normal range of shocks we might expect to see in any decade. This chart shows the variation in the food price index: between 2004 and 2011, it increased by about 200%. That is much, much bigger than any posited effect of climate change that I have seen. One could also draw lots of causal arrows from this to various GCRs. Yet I don’t see many EAs argue for working on whatever the drivers of those changes in food prices were.
“I have worked on climate risk for many years and I genuinely don’t understand how one could think it is in the same ballpark as AI, biorisk or nuclear risk”

Note that the OP did not include “AI” in their list of risks that they think of as being in the same tier as climate risk.
Hi John,
Many thanks to you & the others in the comments for the insightful discussion. Could you clarify a few points:
You state that 2.5 degrees warming by 2100 is widely accepted as the likely outcome of ‘business as usual’ - does this correspond to one of the IPCC scenarios?
You state that >6 degrees warming by 2100 is highly unlikely (the risk seems ‘negligible’). Again, is this conclusion drawn from the IPCC report?
If you have any additional resources to back these statements up I would love to read them—thanks!
I think you’ll find answers to those questions in section 1 of John and Johannes’s recent post on climate projections. IIRC the answers are yes, and those numbers correspond to RCP4.5.
I think all effects are in practice indirect, but “direct” can be used to mean a causal effect for which we have direct evidence, i.e. we observed the cause’s effect on the outcome without needing to discuss intermediate outcomes, rather than piecing together multiple steps of causal effects into a chain. The longer the causal chain, the more likely there are to be effects in the opposite direction along parallel chains. Furthermore, we should generally be skeptical of any causal claim, so the longer the causal chain, the more claims of which we should be skeptical, and the weaker we should expect the overall effect to be.
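A minimal way to formalise that last step (my sketch, not part of the original comment; treating each link as independent is a simplifying assumption): if we hold each link of an n-link causal chain with credence around p, our credence in the whole chain decays geometrically in n.

```latex
% Credence in an n-link causal chain, assuming (simplistically)
% that each link i holds with independent credence p_i:
\[
  P(\text{chain}) \;=\; \prod_{i=1}^{n} p_i \;\approx\; p^{\,n}
\]
% e.g. with p = 0.8: one link gives 0.8, three links ~0.51,
% six links ~0.26 -- so expected effects shrink quickly as chains lengthen.
```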
Strongly agree with Haydn here on the critique. Focusing primarily on direct risks and ignoring indirect risks, or, worse, making a claim about the size of the indirect risks that has no basis beyond stating it confidently, really seems unfortunate, as it feels like a strawman.
Justification for low indirect risk from the article:
“That said, we still think this risk is relatively low. If climate change poses something like a 1 in 10,000 risk of extinction by itself, our guess is that its contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000.”[25]
And then footnote 25:
“How should you think about indirect risk factors? One heuristic is that it’s more important to work on indirect risk factors when they seem to be worsening many more direct problems at once and in different ways. By analogy, imagine you’re in a company and many of your revenue streams are failing for seemingly different reasons. Could it be your company culture making things less likely to work smoothly? It might be most efficient to address that rather than the many different revenue problems, even though it’s upstream and therefore less direct.
But this doesn’t seem to be the case with climate change and direct extinction risks — there aren’t many different ways for humanity to go extinct, at least as far as we can tell. So it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly. This means that if you’re chiefly worried about how climate change might increase the chance of a catastrophic global pandemic, it seems sensible to focus directly on how we prevent catastrophic global pandemics, or perhaps the intersection of the two issues, vs focusing primarily on climate change.”
Problems with this:
1. There is no basis on which to infer a magnitude relationship between the direct existential risk of climate and the indirect existential risk of climate. It is entirely possible for climate to have a very low probability of being a direct existential risk while still being a significant indirect existential risk factor; as far as I can see, there is no substantive argument for why the two should not vary by more than one order of magnitude (see the illustrative numbers after this list).
2. There is equally no basis, as far as I can tell, for preferring to work directly on the problem. What the heuristic implicitly assumes to be true, without justification, is that problems are solvable by direct work. This is not necessarily true: for example, it is perfectly conceivable that AI safety outcomes depend entirely on the state of geopolitics in 2035 and that direct domain-specific work has no effect at all. This is an extreme example, but there seems to be no argument for why we should assume a specific, and definitely higher, share of solvability from direct work.
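To illustrate point 1 with numbers, here is a minimal sketch; the first pair is the article’s guess, the second is a hypothetical alternative of mine that, as far as I can see, is equally consistent with everything argued so far:

```latex
% Illustrative only: nothing in the article's argument pins the
% indirect/direct ratio to a single order of magnitude.
\[
  \text{article's guess: } P_{\text{direct}} = 10^{-4},\quad
  P_{\text{indirect}} = 10^{-3} \quad (\text{ratio } 10)
\]
\[
  \text{alternative: } P_{\text{direct}} = 10^{-4},\quad
  P_{\text{indirect}} = 10^{-2} \quad (\text{ratio } 100)
\]
```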
I don’t think the post ignores indirect risks. It says “For more, including the importance of indirect impacts of climate change, and our climate change career recommendations, see the full profile.”
As I understand the argument from indirect risk, the claim is that climate change is a very large and important stressor of great power war, nuclear war, biorisk and AI. Firstly, I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change.
Secondly, climate change is not an important determinant of great power war, according to any of the main theories of great power war. The great power wars that EAs most worry about are between the US and China, and the US and Russia. The main posited drivers of these conflicts are one power surpassing the other in geopolitical status (the Thucydides trap); defence agreements made over contested territories like Ukraine and Taiwan; and accidental launches of nuclear weapons due to a wrongly perceived first strike. It’s hard to see how climate change is an important driver of any of these mechanisms.
I think it’s important to see the nuance of the disagreement here.
1. My critique is of what strikes me as overconfident, and overconfidently stated, reasoning on what seems a critical point in the overall prioritization of climate. As Haydn writes, few sophisticated people buy the claim that climate is a direct extinction risk, so while this is a good hook, it is not where the steelmanned case for climate concern lies. And whatever one assumes the exact amount of risk to be, indirect existential risk is plausibly the majority of the badness from climate from a longtermist lens.
2. My critique does not imply, and I have never said, that we should work on climate change to address biorisk. The reasoning of the article can be poor, and can be critiqued as such, while the conclusion might still be roughly right.
3. That said, work on existential risk factors is quite under-developed methodologically, so I would not update much from what has been said on this so far. I think footnote 25 also shows this: the mental model on indirect risks is not very useful / imposes a particular simplified problem structure which might be importantly wrong.
4. As you know, I broadly agree with you that a lot of the climate impacts literature is overly alarmist, but I still think you seem too confident on indirect risks. There are many ways in which climate could be quite bad as a risk factor: e.g. perceived climate injustice could matter for bio-terrorism, or there could be geopolitical destabilization and knock-on effects in relevant regions such as South Asia.
I agree it is not where the action is, but given that large sections of the public think we are going to die in the next few decades from climate change, it makes lots of sense to discuss it. And the piece makes a novel contribution on that question, which is an update from previous EA wisdom.
I took it that the claim in the discussed footnote is that working on climate is not the best way to tackle pandemics, which I think we agree is true.
I agree that it is a risk factor in the sense that it is socially costly. But so are many things. Inadequate pricing of water is a risk factor. Sri Lanka’s decision to ban chemical fertiliser is a risk factor. Indian nationalism is a risk factor. Etc. In general, bad economic policies are risk factors. The question is: is the risk factor big enough to change the priority cause ranking for EAs? I really struggle to see how it is. Like, it is true that perceived climate injustice in South Asia could matter for bioterrorism, but this is very, very far down the list of levers on biorisk.
Pretty sure jackva is responding to the linked article, not just this post, as e.g. they quote footnote 25 in full.
On the first point, I think that kind of argument can be found in Jonathan B. Wiener’s work on “‘risk-superior moves’—better options that reduce multiple risks in concert.” See e.g.:
Learning to Manage the Multirisk World
The Tragedy of the Uncommons: On the Politics of Apocalypse
On the second point, what about climate change in India–Pakistan? E.g. an event worse than the current terrible heatwave: heat stress and an agricultural/economic shock lead to migration, instability, a rise in tension, and accidental use of nuclear weapons. The recent modelling papers indicate that this would lead to ‘nuclear autumn’ and probably be a global catastrophe:
A regional nuclear conflict would compromise global food security (2020)
Economic incentives modify agricultural impacts of nuclear war (2022)
(In that case, he said that the post ignores indirect risks, which isn’t true.)
On your first point, my claim was “I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change”. The papers you shared also do not make this argument. I’m not saying that it is conceptually impossible for working on one risk to be the best way to work on another risk. Obviously, it is possible. I am just saying it is not substantively true about climate on the one hand, and AI and bio on the other. To me, it is clearly absurd to hold that the best way to work on these problems is by working on climate change.
On your second point, I agree that climate change could be a stressor of some conflict risks in the same way that anything that is socially bad can be a stressor of conflict risks. For example, inadequate pricing of water is also a stressor of India-Pakistan conflict risk for the same reason. But this still does not show that it is literally the best possible way to reduce the risk of that conflict. It would be very surprising if it were since there is no evidence in the literature of climate change causing interstate warfare. Also, even the path from India-Pakistan conflict to long-run disaster seems extremely indirect, and permanent collapse or something like that seems extremely unlikely.
Haydn, would you be able to quantify the probability that, in your assessment, climate change will indirectly cause human extinction this century, relative to biorisk? Benjamin Hilton speculates that it’s less than 0.1x, but it’s not clear to me whether you disagree with this estimate (assuming you do) because you think it’s closer to 0.3x, 1x, or 3x. Having more clarity on this would help me understand this discussion better, I think.
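For concreteness, here is the arithmetic behind those multipliers (my calculation; the roughly 1-in-100 pandemic baseline is inferred from the article’s claim that climate’s 1-in-1,000 indirect contribution is “more than 10 times less likely” than nuclear war or pandemics):

```latex
% Article's figures: climate (indirect) ~ 1/1,000; pandemics at least
% 10x more likely, i.e. >= 1/100. The implied relative risk:
\[
  \frac{P(\text{extinction via climate, indirectly})}
       {P(\text{extinction via pandemics})}
  \;\lesssim\; \frac{1/1000}{1/100} \;=\; 0.1
\]
% Against the same baseline, multipliers of 0.3x, 1x and 3x would imply
% climate contributions of roughly 3/1,000, 1/100 and 3/100 respectively.
```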