I don’t think the post ignores indirect risks. It says “For more, including the importance of indirect impacts of climate change, and our climate change career recommendations, see the full profile.”
As I understand the argument from indirect risk, the claim is that climate change is a very large and important stressor of great power war, nuclear war, biorisk and AI. Firstly, I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change.
Secondly, climate change is not an important determinant of Great Power War on any of the leading theories of Great Power War. The Great Power Wars that EAs most worry about are between the US and China and between the US and Russia. The main posited drivers of these conflicts are one power surpassing the other in geopolitical status (the Thucydides trap); defence agreements over contested territories like Ukraine and Taiwan; and accidental launches of nuclear weapons due to a wrongly perceived first strike. It’s hard to see how climate change is an important driver of any of these mechanisms.
I think it’s important to see the nuance of the disagreement here.
1. My critique is of what strikes me as overconfident, and overconfidently stated, reasoning on what seems a critical point in the overall prioritization of climate. As Haydn writes, few sophisticated people buy the claim that climate is a direct extinction risk, so while that claim makes a good hook, it is not where the steelmanned case for climate concern lies. Whatever one assumes the exact level of risk to be, indirect existential risk is plausibly the majority of the badness from climate from a longtermist lens.
2. My critique does not imply, and I have never said, that we should work on climate change to address biorisk. The article’s reasoning can be poor and worth critiquing even if its conclusion is still roughly right.
3. That said, work on existential risk factors is quite methodologically under-developed, so I would not update much from what has been said on the topic so far. I think this is also what footnote 25 shows: the mental model it uses for indirect risks is not very useful, and it imposes a particular simplified problem structure which might be importantly wrong.
4. As you know, I broadly agree with you that a lot of the climate impacts literature is overly alarmist. But I still think you seem too confident on indirect risks: there are many ways in which climate could be quite bad as a risk factor. For example, perceived climate injustice could matter for bioterrorism, or there could be geopolitical destabilization and knock-on effects in relevant regions such as South Asia.
I agree that it is not where the action is, but given that large sections of the public think we are going to die from climate change in the next few decades, it makes a lot of sense to discuss it. And the piece makes a novel contribution on that question, which is an update from previous EA wisdom.
I took the claim in the discussed footnote to be that working on climate is not the best way to tackle pandemics, which I think we agree is true.
I agree that it is a risk factor in the sense that it is socially costly. But so are many things. Inadequate pricing of water is a risk factor. Sri Lanka’s decision to ban chemical fertiliser is a risk factor. Indian nationalism is a risk factor. And so on. In general, bad economic policies are risk factors. The question is: is the risk factor big enough to change the priority cause ranking for EAs? I really struggle to see how it is. It is true that perceived climate injustice in South Asia could matter for bioterrorism, but this is very, very far down the list of levers on biorisk.
Pretty sure jackva is responding to the linked article, not just this post, as e.g. they quote footnote 25 in full.
On the first point, I think that kind of argument can be found in Jonathan B. Wiener’s work on “‘risk-superior moves’—better options that reduce multiple risks in concert.” See e.g.:
Learning to Manage the Multirisk World
The Tragedy of the Uncommons: On the Politics of Apocalypse
On the second point, what about climate change in India-Pakistan? For example, an event worse than the current terrible heatwave: heat stress and an agricultural/economic shock lead to migration, instability, rising tensions, and the accidental use of nuclear weapons. Recent modelling papers indicate that this would lead to a ‘nuclear autumn’ and probably be a global catastrophe:
A regional nuclear conflict would compromise global food security (2020)
Economic incentives modify agricultural impacts of nuclear war (2022)
(In that case, he said that the post ignores indirect risks, which isn’t true.)
On your first point, my claim was “I have never seen anyone argue that the best way to reduce biorisk or AI is to work on climate change”. The papers you shared also do not make this argument. I’m not saying that it is conceptually impossible for working on one risk to be the best way to work on another risk. Obviously, it is possible. I am just saying it is not substantively true about climate on the one hand, and AI and bio on the other. To me, it is clearly absurd to hold that the best way to work on these problems is by working on climate change.
On your second point, I agree that climate change could be a stressor of some conflict risks, in the same way that anything that is socially bad can be a stressor of conflict risks. For example, inadequate pricing of water is also a stressor of India-Pakistan conflict risk for the same reason. But this still does not show that working on climate change is literally the best possible way to reduce the risk of that conflict. It would be very surprising if it were, since there is no evidence in the literature of climate change causing interstate warfare. Also, even the path from an India-Pakistan conflict to long-run disaster seems extremely indirect, and permanent collapse or something like that seems extremely unlikely.