Strongly agree with Haydn here on the critique. Indeed, focusing primarily on direct risks and ignoring the indirect risks, or, worse, making a claim about the size of the indirect risks that has no basis in anything beyond confident assertion, really seems unfortunate, as it feels like a strawman.
Justification for low indirect risk from the article:
"That said, we still think this risk is relatively low. If climate change poses something like a 1 in 10,000 risk of extinction by itself, our guess is that its contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000."[25]
And then footnote 25:
"How should you think about indirect risk factors? One heuristic is that it’s more important to work on indirect risk factors when they seem to be worsening many more direct problems at once and in different ways. By analogy, imagine you’re in a company and many of your revenue streams are failing for seemingly different reasons. Could it be your company culture making things less likely to work smoothly? It might be most efficient to address that rather than the many different revenue problems, even though it’s upstream and therefore less direct.
But this doesn’t seem to be the case with climate change and direct extinction risks — there aren’t many different ways for humanity to go extinct, at least as far as we can tell. So it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly. This means that if you’re chiefly worried about how climate change might increase the chance of a catastrophic global pandemic, it seems sensible to focus directly on how we prevent catastrophic global pandemics, or perhaps the intersection of the two issues, vs focusing primarily on climate change."
Problems with this:
1. There is no basis on which to infer a magnitude relationship between the direct existential risk of climate and the indirect existential risk of climate. It is entirely possible for climate to have a very low probability of being a direct existential risk while still being a significant indirect existential risk factor; as far as I can see, there is no substantive argument for why the two should not differ by more than one order of magnitude (see the illustrative decomposition after this list).
2. There is equally no basis, as far as I can tell, for favouring direct work on the problem. What the heuristic implicitly assumes to be true, without justification as far as I can tell, is that problems are solvable by direct work. This is not necessarily true: for example, it is perfectly conceivable that AI safety outcomes depend entirely on the state of geopolitics in 2035 and that direct domain-specific work has no effect at all. This is an extreme example, but there seems to be no argument for why we should assume a specific, and definitely higher, share of solvability from direct work.
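To make the structure of point 1 concrete, here is a purely illustrative decomposition (the 1 in 10,000 figure is the article's; the other numbers are hypothetical):

P(extinction via climate) = P(direct) + P(indirect)

The article takes P(direct) ≈ 1/10,000 and then asserts P(indirect) ≤ 10 × P(direct) ≈ 1/1,000. But the two terms are driven by different mechanisms (P(direct) by whether warming itself can cause extinction, P(indirect) by how much climate stress raises the probability of other catastrophes), so nothing in the article rules out, say, P(indirect) ≈ 1/100 while P(direct) stays at 1/10,000, a gap of two orders of magnitude rather than one.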
I don’t think the post ignores indirect risks. It says “For more, including the importance of indirect impacts of climate change, and our climate change career recommendations, see the full profile.”
As I understand the argument from indirect risk, the claim is that climate change is a very large and important stressor of great power war, nuclear war, biorisk and AI risk. Firstly, I have never seen anyone argue that the best way to reduce biorisk or AI risk is to work on climate change.
Secondly, climate change is not an important determinant of Great Power War on any of the main theories of Great Power War. The Great Power Wars that EAs most worry about are between the US and China and the US and Russia. The main posited drivers of these conflicts are one power surpassing the other in geopolitical status (the Thucydides trap); defence agreements made over contested territories like Ukraine and Taiwan; and accidental launches of nuclear weapons due to a wrongly perceived first strike. It’s hard to see how climate change is an important driver of any of these mechanisms.
I think it’s important to see the nuance of the disagreement here.
1. My critique is of what strikes me as overconfident, and overconfidently stated, reasoning on what seems a critical point in the overall prioritization of climate. As Haydn writes, few sophisticated people buy the claim that climate is a direct extinction risk, so while this is a good hook it is not where the steelmanned case for climate concern lies. And, whatever one assumes the exact amount of risk to be, indirect existential risk plausibly makes up the majority of the badness from climate from a longtermist lens.
2. My critique does not imply, and I have never said, that we should work on climate change to address biorisk. The reasoning of the article can be poor, and can be critiqued as such, while the conclusion might still be roughly right.
3. That said, work on existential risk factors is quite methodologically under-developed, so I would not update much from what has been said on the topic so far. I think footnote 25 also shows this: the mental model on indirect risks is not very useful / imposes a particular simplified problem structure which might be importantly wrong.
4. As you know, I broadly agree with you that a lot of the climate impacts literature is overly alarmist, but I still think you seem too confident on indirect risks. There are many ways in which climate could be quite bad as a risk factor: e.g. perceived climate injustice could matter for bio-terrorism, or there could be geopolitical destabilization with knock-on effects in relevant regions such as South Asia.
I agree it is not where the action is, but given that large sections of the public think we are going to die from climate change in the next few decades, it makes a lot of sense to discuss it. And the piece makes a novel contribution on that question, which is an update from previous EA wisdom.
I took it that the claim in the discussed footnote is that working on climate is not the best way to tackle pandemics, which I think we agree is true.
I agree that it is a risk factor in the sense that it is socially costly. But so are many things. Inadequate pricing of water is a risk factor. Sri Lanka’s decision to ban chemical fertiliser is a risk factor. Indian nationalism is a risk factor. And so on: in general, bad economic policies are risk factors. The question is: is the risk factor big enough to change the priority cause ranking for EAs? I really struggle to see how it is. Like, it is true that perceived climate injustice in South Asia could matter for bioterrorism, but this is very, very far down the list of levers on biorisk.
Pretty sure jackva is responding to the linked article, not just this post, as e.g. they quote footnote 25 in full.
On the first point, I think that kind of argument can be found in Jonathan B. Wiener’s work on “‘risk-superior moves’—better options that reduce multiple risks in concert.” See e.g.
Learning to Manage the Multirisk World
The Tragedy of the Uncommons: On the Politics of Apocalypse
On the second point, what about climate change in India-Pakistan? E.g. an event worse than the current terrible heatwave: heat stress and an agricultural/economic shock lead to migration, instability, a rise in tension, and accidental use of nuclear weapons. The recent modelling papers indicate that this would lead to ‘nuclear autumn’ and probably be a global catastrophe.
A regional nuclear conflict would compromise global food security (2020)
Economic incentives modify agricultural impacts of nuclear war (2022)
(In that case, he said that the post ignores indirect risks, which isn’t true.)
On your first point, my claim was “I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change”. The papers you shared also do not make this argument. I’m not saying that it is conceptually impossible for working on one risk to be the best way to work on another risk. Obviously, it is possible. I am just saying it is not substantively true about climate on the one hand, and AI and bio on the other. To me, it is clearly absurd to hold that the best way to work on these problems is by working on climate change.
On your second point, I agree that climate change could be a stressor of some conflict risks in the same way that anything that is socially bad can be a stressor of conflict risks. For example, inadequate pricing of water is also a stressor of India-Pakistan conflict risk for the same reason. But this still does not show that it is literally the best possible way to reduce the risk of that conflict. It would be very surprising if it were since there is no evidence in the literature of climate change causing interstate warfare. Also, even the path from India-Pakistan conflict to long-run disaster seems extremely indirect, and permanent collapse or something like that seems extremely unlikely.