I thought about this for ~4 hours. My current position is that a lot of these claims seem dubious (I doubt many of them would stand up to Fermi estimates), but several people should be working in political stabilization efforts, and it makes sense for at least one of them to be thinking about climate, whether or not this is framed as “climate resilience”. The positive components of the vibe of this post reminded me of SBF’s goals, putting the world in a broadly better place to deal with x-risks.
In particular, I’m skeptical of the pathway from (1) climate change → (2) global extremism and instability → (3) lethal autonomous weapon development → (4) AI x-risk.
First, note that this pathway chains four claims together across three steps, which is pretty indirect. Looking at each of the steps individually:
(1) → (2): I think experts are mixed on whether resource shortages cause war of the type that can lead to (3). War is a failure of bargaining, so anything that increases war must either shift the game theory or cause decision-makers to become more irrational, not just shrink the pool of available resources. Quoting from the 80k podcast episode with economist / political scientist Chris Blattman:
Rob Wiblin: Yeah. Some other drivers of war that I hear people talk about that you’re skeptical of include climate change and water scarcity. Can you talk about why it is that you’re skeptical of this idea of water wars?
Chris Blattman: So I think scarce water, any scarce resource, is something which we’re going to compete over. If there’s a little bit, we’ll compete over it. If there’s a lot of it, we’ll still probably find a way to compete over it. And the competition is still going to be costly. So we’re always going to strenuously compete. It’ll be hostile, it’ll be bitter, but it shouldn’t be violent. And the fact that water becomes more scarce — like any resource that becomes more scarce — doesn’t take away from the fact that it’s still costly to fight over it. There’s always room for that deal. The fact that our water is shrinking in some places, we have to be skeptical. So what is actually causing this? And then empirically, I think when people take a good look at this and they actually look at all these counterfactual cases where there’s water and war didn’t break out, we just don’t see that water scarcity is a persistent driver of war.
Chris Blattman: The same is a little bit true of climate change. The theory is sort of the same. How things getting hotter or colder affects interpersonal violence is pretty clear, but why it should affect sustained yearslong warfare is far less clear. That said, unlike water wars, the empirical evidence is a little bit stronger that something’s going on. But to me, it’s just then a bit of a puzzle that still needs to be sorted out. Because once again, the fact that we’re getting jostled by unexpected temperature shocks, unexpected weather events, it’s not clear why that should lead to sustained political competition through violence, rather than finding some bargain solution.
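To make the bargaining framing concrete, here is a minimal numeric sketch (the win probability and war costs are made up, purely illustrative): as long as fighting is costly, a shrinking resource pool by itself doesn’t close the range of deals both sides prefer to war.

```python
# Minimal sketch of the bargaining-range point (all numbers are made up).
# War gives side A an expected p*R - cost_a and side B an expected
# (1-p)*R - cost_b, so both prefer any peaceful split inside the range below.
def bargaining_range(R, p=0.5, cost_a=10, cost_b=10):
    low = p * R - cost_a    # smallest share of R that A will accept
    high = p * R + cost_b   # largest share of R that B will concede
    return low, high

for R in (100, 70):         # shrink the contested resource
    print(R, bargaining_range(R))
# The range stays non-empty whenever fighting is costly, so scarcity alone
# doesn't remove the deal; something else has to shift for war to happen.
```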
(2) → (3): It’s not clear to me that global extremism and instability cause markedly greater investment into lethal autonomous weapons. The US has been using Predator drones constantly since 1995, independently of several shifts in extremism, simply because they’re effective; it’s not clear why this would change for more autonomous weapons. More of the variance in autonomous weapon development seems to come from how much attention/funding autonomous weapons receive as a share of world military budgets, rather than from the overall level of world militarization. As for terrorism, I doubt most terrorist groups have the capacity to develop cutting-edge autonomous weapon technology.
(3) → (4): You write “In the context of AI alignment, often a distinction is drawn between misuse (bad intentions to begin with) and misalignment (good intentions gone awry). However, I believe the greatest risk is the combination of both: a malicious intention to kill an enemy population (misuse), which then slightly misinterprets that mission and perhaps takes it one step further (misalignment into x-risk possibilities).” Given that we currently can’t steer a sufficiently advanced AI system at anything, plus there are sufficient economic pressures to develop goal-directed AGI for other reasons, I disagree that this is the greatest risk.
Each of the links in the chain is reasonable, but the full story seems altogether too long to be a major driver of x-risk. If you have 70% credence in the sign of each step independently, the credence in the 3-step argument goes down to 34%. Maybe your actual confidence is lower than the wording implies, though.
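For reference, here is a minimal sanity check of that conjunction, assuming (simplistically) that the steps are independent:

```python
# Sanity check: credence in a conjunctive chain, assuming independent steps.
for p in (0.9, 0.7):
    for n in (1, 2, 3, 4):
        print(f"per-step credence {p}, {n} steps: {p ** n:.2f}")
# At 0.7 per step, a 3-step chain is already down to ~0.34.
```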
Hey Thomas! Love the feedback & follow-up from the conversation. Thanks for taking so much time to think this over; this is really well-researched. :)
In response to your arguments:
1 → 2 is generally well established by the climate literature. I think the quote you provided gives good reasons why climate war wouldn’t make sense for perfectly rational actors; however, humans don’t act in a perfectly rational way.
There are clear historical correlations between rainfall patterns and civil tensions, as well as expert opinions pointing to climate as a driver of violent conflict. I’d like to re-emphasize that climate conflict is often not driven by resource scarcity dynamics alone, but also amplified by the irrational mentalities (e.g. they’ve stolen from us, they hate us, us vs. them) that have driven humanity to war for many decades. There is a unique blend of rational and irrational calculations that plays into conflict risk.
2 → 3 → 4 is admittedly tenuous because our systems have rarely been stressed to this extent, so little to no historical precedent exists. However, this climate tension also interacts in non-linear ways with other elements of technological development; e.g. international AGI governance efforts may be significantly harder between politically extreme governments and in the context of rising social tension.
To address the “greatest risk” point for 3 → 4: I concede this one, as my opinions have changed since writing this, after talking to more researchers in the AI alignment space.
From linear-chain framing to systems thinking:
This specific 1 → 2 → 3 → 4 pathway directly causing existential risk may feel unlikely, and it is (on its own). However, the emphasis I’d like to make is that there is a category of risks (usually politically related) that have the potential to cascade through systems in a rather dangerous, non-linear, volatile manner.
These systemic cascading risks are better visualized not as a linear chain where A affects B affects C affects D (which captures only a single linkage and none of the interwoven or cascading effects), but rather as a graph of interconnected socioeconomic systems, where one stresses a subset of nodes and studies how that stressor propagates through the system. How strong the butterfly effect is depends on the vulnerability and resilience of a society’s institutions; thus, I aim to advocate for more resilient institutions to counter these risks.
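As a toy sketch of the kind of picture I have in mind (the nodes, edges, and resilience values below are purely illustrative, not estimates), one can let a shock propagate through a directed graph and see how far it reaches depending on how resilient each institution is:

```python
# Toy sketch of a cascading-stress model (illustrative only; the node names
# and resilience values are made up, not estimates).
from collections import deque

# Directed edges: which systems put stress on which others.
edges = {
    "climate shock": ["food prices", "migration"],
    "food prices": ["civil unrest"],
    "migration": ["civil unrest", "border tension"],
    "civil unrest": ["political extremism"],
    "border tension": ["political extremism"],
    "political extremism": ["arms race"],
}

# Higher resilience means a node absorbs more incoming stress before failing.
resilience = {
    "climate shock": 0.0, "food prices": 0.3, "migration": 0.4,
    "civil unrest": 0.5, "border tension": 0.6,
    "political extremism": 0.7, "arms race": 0.8,
}

def cascade(start, shock, damping=0.9):
    """Propagate a shock breadth-first; a node fails if the arriving stress
    exceeds its resilience, then passes on a damped stress to its neighbors."""
    failed = set()
    queue = deque([(start, shock)])
    while queue:
        node, stress = queue.popleft()
        if node in failed or stress <= resilience[node]:
            continue
        failed.add(node)
        for nxt in edges.get(node, []):
            queue.append((nxt, stress * damping))
    return failed

# With the resilience values above, the cascade stops before "arms race";
# lowering resilience lets the same shock reach further into the graph.
print(cascade("climate shock", shock=1.0))
```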
I agree that 2 → 3 → 4 is tenuous, but I think 1 → 2 is very well established. The climate-conflict literature is pretty definitive that increases in temperature lead to increases in conflict (see Burke, Hsiang and Miguel 2015), and not just at the small scale. Even under Blattman’s theory, climate → conflict doesn’t rely on decision-makers becoming more irrational or uncooperative in any way. It simply relies on them being unable to overcome the tension of resource scarcity with their existing level of cooperation/rationality. A fragile peace bargain can be tipped by shortages, even if it would otherwise have succeeded.
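To illustrate what I mean by “tipped by shortages” with made-up numbers: if each side has a minimum need that any split must cover, a shrinking resource can eliminate every mutually acceptable split without anyone becoming less rational.

```python
# Toy sketch of a shortage-tipped bargain (all numbers are made up).
# Each side only accepts a split that covers its minimum need; once the
# resource no longer covers both needs, no mutually acceptable split exists.
def acceptable_split_exists(total, need_a, need_b):
    return total >= need_a + need_b

for yield_ in (100, 70):  # e.g. river flow before / after a drought
    ok = acceptable_split_exists(yield_, need_a=40, need_b=40)
    print(yield_, "units:", "a peaceful split exists" if ok else "no split satisfies both sides")
```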