Thanks for this!
FWIW, Founders Pledge’s climate work is explicitly focused on supporting solutions that work in the worst worlds (minimizing expected climate damage) and we’re thinking a lot about many of those issues both from a solution angle and a cause prioritization angle (I think the existential risk factors you allude to are by far the most important reasons to care about climate from a longtermist lens).
That being said, you are making a lot of very strong claims based on fairly limited evidence, and it would require significantly more work to get credible estimates of the various risk pathways and to justify confident statements about large effect sizes. It also seems that the sources do not fully support the claims: for example, the link meant to justify the estimate that the IPCC underestimates climate impacts led me to a study that is 10 years old, which carries essentially no information about the current state of that question. So I’d be careful about jumping to quite extreme (in the sense of confident) conclusions.
Given the vast uncertainties and the heterogeneity of climate-relevant science, one can justify almost any conclusion by cherry-picking a subset of the peer-reviewed literature, so it’s really crucial to consider review papers and a wide range of estimates before making bold claims.
Hey Johannes! I really appreciate the feedback, and I love the work you guys are doing at Founders Pledge. I appreciate that you also believe sociopolitical existential risk factors are an important consideration.
I wish there were a lot more quantitative evidence on sociopolitical climate risk — I had to lean on a lot of qualitative expert sociopolitical analyses for this forum post. I acknowledge that many of the scenarios I discuss here lean toward the pessimistic side. In scenarios with higher governmental competence and societal resilience than I assumed, it could be that very few of these x-risk-multiplying impacts manifest. It could also be that they manifest in ways I didn’t initially predict in this post.
I therefore agree with the critique of the overly confident statements. I ended up changing quite a bit of the phrasing in my forum post as a result of your feedback — I absolutely agree that some of the phrasing was a little too certain and bold. The focus should have been on laying out possibilities rather than making statements about what would happen. Thank you for that feedback.
To address your criticism/feedback on IPCC climate reports:
I think it is recognized that the IPCC’s climate reports, being consensus-driven, will not err in favor of extreme effects but will rather include climate effects agreed upon by the broader research community. There was a recent Washington Post article I was considering including as well, in which many notable climate scientists comment on the conservative, consensus-driven nature of the IPCC and how this may affect its climate reports.
I cited the Scientific American article initially because it showed evidence of how a conservative, consensus-driven organization has historically underestimated climate impacts. The article highlights specific examples of IPCC predictions that proved conservative between 1990 and 2012 — for instance, a 2007 report in which the IPCC dramatically underestimated the decline of Arctic summer sea ice, or a 2001 report in which the IPCC’s sea level projections were 40% lower than the actual sea level rise.
However, I absolutely acknowledge that the accuracy of IPCC reports may have changed since 2012. I agree this evidence is not sufficient to warrant a statement that current IPCC climate reports lean conservative — so I’ve modified my statement to emphasize that certain past IPCC reports have leaned conservative. Thank you for the catch.
Overall, I appreciate your feedback — and I hope to speak with you sometime! I’d love to contribute in the future to research quantifying the sociopolitical impacts of climate change, and I’m particularly interested in the work you do at Founders Pledge.
(Note for transparency: This comment has been edited.)
I agree that there should be more focus on resilience (thanks for mentioning ALLFED), and I also agree that we need to consider scenarios where leaders do not respond rationally. You may be aware of Toby Ord’s discussion of existential risk factors in The Precipice, where he roughly estimates a great power war might increase the total existential risk by 10% (page 176). You say:
What is the multiplying impact factor of climate change on x-risks – compared to a world without climate change?
If forced to guess, considering the effects of climate change, I believe a multiplying factor of at least an order of magnitude is conservative. However, further calculations and estimates are absolutely required to verify this.
So you’re saying the impact of climate change is ~90 times as large as his estimate of the impact of great power war (a 900% increase versus a 10% increase in existential risk). I think part of the issue is that you believe the world with climate change is significantly worse than the world is now. We agree that the world with climate change is worse than business as usual, but to claim it is worse than now means that climate change would overwhelm all the economic growth that would have occurred over the next century or so. I think this is hard to defend for expected climate change. But it could be the case for the versions of climate change that ALLFED focuses on, such as abrupt regional climate change, extreme weather including floods and droughts on multiple continents at the same time causing around a 10% abrupt food production shortfall, or extreme global climate change of around 6°C or more. Still, I don’t think it is plausible to multiply existential risks such as unaligned AGI or an engineered pandemic by 10 because of these climate catastrophes.
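To make the arithmetic behind that comparison explicit (a minimal sketch, writing r for the baseline total existential risk):

```latex
% An order-of-magnitude multiplier on total existential risk r, versus
% Ord's rough estimate that a great power war adds about 10% (p. 176):
\begin{align*}
\text{order-of-magnitude multiplier:} \quad r &\to 10r \quad (\text{a } 900\% \text{ increase}) \\
\text{great power war (Ord):} \quad r &\to 1.1r \quad (\text{a } 10\% \text{ increase}) \\
\text{ratio of the two increases:} \quad 900\% / 10\% &= 90
\end{align*}
```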
This is very fair criticism and I agree.
For some reason, when writing “order of magnitude”, I was thinking of existential risks that may have a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g. nuclear war). However, I wasn’t considering many of the existential risks I was actually talking about (like biosafety and AI safety); it’d be ridiculous for AI risk to be multiplied from 10% to 100%.
I think the estimate of a great power war increasing total existential risk by 10% is much fairer than my estimate. Because of this, in response to your feedback, I’ve modified my EA Forum post to state that a total existential risk increase of 10% is a fair estimate under expected climate politics scenarios, citing Toby Ord’s estimate of the existential risk increase under great power conflict.
Thanks a ton for the thoughtful feedback! It is greatly appreciated.
First—thank you for this. I currently research some aspects of resilience & adaptation and am asking myself some critical questions in this area. It also gave me something to build on and respond to, as a nudge to participate for the first time in the Forum, even if my thinking on this is underdeveloped!
On the post itself—I think the biggest contribution here is zooming out in the ending notes, to potential areas for EAs in a climate/longtermism space. What I took away was:
indirect x-risk from climate change is potentially important and neglected
as a group, some EAs interested in climate x longtermism should “pursue research ranking various climate interventions from a climate x-risk perspective”—I may have broadened this out to something like “more holistically assess how climate mitigation and adaptation solutions could indirectly impact total x-risk”
I have a lot of scattered further thoughts on this, but they’re underdeveloped, I’m very uncertain about them, and it’s likely I am missing some key EA literature/thinking already done on this. The central themes are that:
ranking climate problems/interventions for their indirect impacts on total x-risk is less tractable than addressing “direct” x-risk, because it would require dealing with complexity, i.e. a lot of feedback loops over time (and potentially space)
to my knowledge we (EA, but also humanity) don’t have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows
there seem to be a lot of skills/attitudes/expertise in the EA community that would make us (as a group) particularly good at developing methodologies to deal with ambiguity/complex problems;
some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity. The result might be that we decide any methodology aimed at this would require too much effort to use in practice, for the added information it gains (if it gains any), and so we decide that dealing with “direct” x-risk only is still the best strategy, with updated confidence. The result might also be that we come up with an extra verification that we aren’t missing something substantial when considering only direct risks—this could be as resource-intensive as detailed multi-modelling, or something ‘simpler’ like taking the GCR classification in Table 1 here and describing a set of timelines that test what happens when the risks interact with each other at a high level.
In short, I really appreciated the direction of your post! However, I was less confident in how you got to those specific scenarios. I think progress in this area could include some standardised approach to generating them, and I think this might be important to establish before we’re able to confidently rank problems/solutions for indirect x-risk. Again, it’s likely I’m missing key EA thinking/literature on this, and I would love for anyone to make recommendations/corrections.
Terribly sorry for the late reply! I didn’t realize I missed replying to this comment.
I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:
to my knowledge we (EA, but also humanity) don’t have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows … some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.
I recently wrote a new forum post on a framework/phrase I used tying together concepts from complexity science & EA, arguing that it can be used to provide tractable resilience-based solutions to complexity problems.
I thought about this for ~4 hours. My current position is that a lot of these claims seem dubious (I doubt many of them would stand up to Fermi estimates), but several people should be working in political stabilization efforts, and it makes sense for at least one of them to be thinking about climate, whether or not this is framed as “climate resilience”. The positive components of the vibe of this post reminded me of SBF’s goals, putting the world in a broadly better place to deal with x-risks.
In particular, I’m skeptical of the pathway from (1) climate change → (2) global extremism and instability → (3) lethal autonomous weapon development → (4) AI x-risk.
First, note that this pathway has four stages (three links), which is pretty indirect. Looking at each link individually:
(1) → (2): I think experts are mixed on whether resource shortages cause war of the type that can lead to (3). War is a failure of bargaining, so anything that increases war must either shift the game theory or cause decision-makers to become more irrational, not just shrink the pool of available resources. Quoting from the 80k podcast episode with economist and political scientist Chris Blattman:
Rob Wiblin: Yeah. Some other drivers of war that I hear people talk about that you’re skeptical of include climate change and water scarcity. Can you talk about why it is that you’re skeptical of this idea of water wars?
Chris Blattman: So I think scarce water, any scarce resource, is something which we’re going to compete over. If there’s a little bit, we’ll compete over it. If there’s a lot of it, we’ll still probably find a way to compete over it. And the competition is still going to be costly. So we’re always going to strenuously compete. It’ll be hostile, it’ll be bitter, but it shouldn’t be violent. And the fact that water becomes more scarce — like any resource that becomes more scarce — doesn’t take away from the fact that it’s still costly to fight over it. There’s always room for that deal. The fact that our water is shrinking in some places, we have to be skeptical. So what is actually causing this? And then empirically, I think when people take a good look at this and they actually look at all these counterfactual cases where there’s water and war didn’t break out, we just don’t see that water scarcity is a persistent driver of war.
Chris Blattman: The same is a little bit true of climate change. The theory is sort of the same. How things getting hotter or colder affects interpersonal violence is pretty clear, but why it should affect sustained yearslong warfare is far less clear. That said, unlike water wars, the empirical evidence is a little bit stronger that something’s going on. But to me, it’s just then a bit of a puzzle that still needs to be sorted out. Because once again, the fact that we’re getting jostled by unexpected temperature shocks, unexpected weather events, it’s not clear why that should lead to sustained political competition through violence, rather than finding some bargain solution.
(2) → (3): It’s not clear to me that global extremism and instability cause markedly greater investment into lethal autonomous weapons. The US has been using Predator drones constantly since 1995, independently of several shifts in extremism, just because they’re effective; it’s not clear why this would change for more autonomous weapons. More of the variance in autonomous weapon development seems to be from how much attention/funding goes to autonomous weapons as a percentage of world military budgets rather than the overall militarization of the world. As for terrorism, I doubt most terrorist groups have the capacity to develop cutting-edge autonomous weapon technology.
(3) → (4): You write “In the context of AI alignment, often a distinction is drawn between misuse (bad intentions to begin with) and misalignment (good intentions gone awry). However, I believe the greatest risk is the combination of both: a malicious intention to kill an enemy population (misuse), which then slightly misinterprets that mission and perhaps takes it one step further (misalignment into x-risk possibilities).” Given that we currently can’t steer a sufficiently advanced AI system at anything, plus there are sufficient economic pressures to develop goal-directed AGI for other reasons, I disagree that this is the greatest risk.
Each of the links in the chain is reasonable, but the full story seems altogether too long to be a major driver of x-risk. If you have 70% credence in the sign of each step independently, the credence in the 3-step argument goes down to 34%. Maybe you have a lower confidence than the wording implies though.
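For concreteness, that compounding is just (using the 70% figure above for each of the three links, assumed independent):

```latex
P(\text{all three links hold}) = 0.7 \times 0.7 \times 0.7 = 0.7^{3} \approx 0.34
```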
Hey Thomas! Love the feedback & follow-up from the conversation. Thanks for taking so much time to think this over—this is really well-researched. :)
In response to your arguments:
1 → 2 is generally well established in the climate literature. I think the quote you provided gives good reasons why climate war may not be perfectly rational; however, humans don’t act in a perfectly rational way.
There are clear historical correlations between rainfall patterns and civil tensions, expert opinions on climate causing violent conflict, and so on. I’d like to re-emphasize that climate conflict is often not driven by resource scarcity dynamics alone, but is also amplified by the irrational mentalities (e.g. they’ve stolen from us, they hate us, us vs. them) that have driven humanity to war for many decades. There is a unique blend of rational and irrational calculations that plays into conflict risk.
2 → 3 → 4 is absolutely tenuous because our systems have rarely been stressed to this extent, so little to no historical precedent exists. However, this climate tension also interacts in non-linear ways with other elements of technological development—e.g. international AGI governance efforts may be significantly harder between politically extreme governments and in the context of rising social tension.
To address the “greatest risk” point for 3 → 4: I concede this point, as my opinions have changed since I wrote this, through talking with more researchers in the AI alignment space.
From link-chain framing to systems thinking:
This specific 1 → 2 → 3 → 4 pathway directly causing existential risk may feel unlikely—and, on its own, it is. However, the point I’d like to emphasize is that there is a category of (usually politically related) risks that have the potential to cascade through systems in a dangerous, non-linear, volatile manner.
These systemic cascading risks are better visualized not as a linear chain where A affects B affects C affects D (because this captures only one possible chain of linkages and no interwoven or cascading effects), but as a graph of interconnected socioeconomic systems, where one stresses a subset of nodes and studies how the stressor propagates through the system. How strong the butterfly effect is depends on the vulnerability and resilience of the system’s institutions; this is why I advocate for more resilient institutions to counter these risks.
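As a purely illustrative sketch of that graph framing (all node names, edge weights, and resilience thresholds below are hypothetical, invented for this example rather than taken from any source), one could model a shock applied to a few nodes and watch whether institutional resilience absorbs it or lets it cascade:

```python
from collections import deque

# Toy model of systemic cascading risk: a directed (acyclic) graph of
# socioeconomic systems. A shock applied to some nodes propagates along
# edges, but a node only passes stress on once its accumulated stress
# exceeds its resilience threshold. All numbers are made up.

edges = {  # node -> list of (downstream node, fraction of stress transmitted)
    "crop yields": [("food prices", 0.8)],
    "food prices": [("civil unrest", 0.6), ("migration", 0.5)],
    "civil unrest": [("political extremism", 0.5)],
    "migration": [("political extremism", 0.4)],
    "political extremism": [("interstate tension", 0.6)],
    "interstate tension": [],
}

resilience = {  # stress a node can absorb before it starts cascading
    "crop yields": 0.2, "food prices": 0.3, "civil unrest": 0.5,
    "migration": 0.4, "political extremism": 0.6, "interstate tension": 0.7,
}

def cascade(initial_shocks):
    """Propagate shocks through the graph; return accumulated stress per node."""
    stress = {node: 0.0 for node in edges}
    queue = deque(initial_shocks.items())  # (node, incoming stress increment)
    while queue:
        node, incoming = queue.popleft()
        stress[node] += incoming
        if stress[node] <= resilience[node]:
            continue  # the institution absorbs this shock; no further spread
        for downstream, weight in edges[node]:
            queue.append((downstream, incoming * weight))
    return stress

# A climate shock hitting crop yields: with the thresholds above, the
# cascade is absorbed at "civil unrest" and "migration".
print(cascade({"crop yields": 1.0}))

# Weaker institutions (a lower threshold for civil unrest) let the same
# shock reach "political extremism".
resilience["civil unrest"] = 0.2
print(cascade({"crop yields": 1.0}))
```

The design choice in this sketch is that resilience enters as a per-node threshold, matching the claim above that how far a shock spreads depends on the vulnerability and resilience of the institutions it hits.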
I agree that 2 --> 3 --> 4 is tenuous but I think 1 --> 2 is very well-established. The climate-conflict literature is pretty definitive that increases in temperature lead to increases in conflict (see Burke, Hsiang and Miguel 2015) and not just at the small scale. Even under Blattman’s theory, climate --> conflict doesn’t rely on decisionmakers becoming more irrational or uncooperative in any way. It simply relies on them being unable to overcome the tension of resource scarcity with their existing level of cooperation/rationality. A fragile peace bargain can be tipped by shortages, even if it would otherwise have succeeded.
Great post, Richard! I can tell some hard work went into this. I found this particularly interesting because I was accepted to Penn’s Landscape Architecture grad program (though I may not take this up due to lack of funding). Have you thought about connecting with some of the faculty? They’ve produced some interesting work, such as this ‘World National Park’ concept.
I wonder if one solution is removing the bounding of just ‘climate change’ and instead expand things to Earth Systems Health/Integrity more broadly, perhaps using the Planetary Boundaries framework? https://www.stockholmresilience.org/research/planetary-boundaries/the-nine-planetary-boundaries.html
My understanding is that biodiversity loss, freshwater exhaustion, and land system changes are all interrelated anyway. And one of the underlying issues, in my humble opinion, is a dysfunction in humanity’s relationship with nature. As abstract as that sounds, valuing and feeling more connected with nature and the environment more broadly may set strong values for preserving environmental/planetary integrity and increasing the chances of flourishing—including on other planets, should humanity become a space-faring species and colonise habitable planets.
Thanks a ton Darren! I’d love to connect with you — and I found the ideas you linked to interesting. Thanks for introducing me to these ideas.
I completely agree with you — I think I ended up focusing on climate change specifically because it is the most clear, well-studied manifestation of “Earth Systems Health” gone wrong and potentially causing existential risk. However, emphasizing a broader need to preserve the stability of Earth’s systems is extremely valuable — and encompasses climate change.
Reducing greenhouse gas emissions may be the most important issue right now, but given our current societal inability to interface with our environment without damaging it, many other environmental crises may manifest in the future and damage our ability to survive. A broader framework encompassing environmental preservation may be necessary to address all of these issues at once.
This paper on Assessing climate change’s contribution to global catastrophic risk uses the planetary boundaries framework! And this paper on Classifying global catastrophic risks might also be of interest :)
Acknowledgements to Esban Kran, Stian Grønlund, Liam Alexander, Pablo Rosado, Sebastian Engen, and many others for providing feedback and connecting me with helpful resources while I was writing this forum post. :-)
Thanks for this post!
I think it’s really important to look at the underlying assumptions of any long-term EA project, and the movement might not be doing this enough. We treat it as obvious that the social and political climate we’re currently operating in will stay the same. But in reality, everything could change significantly due to things like climate change (in one direction) or economic growth (in the other).
Thanks a ton for your comment! I’m planning to write a follow-up EA Forum post on cascading and interlinking effects—and I agree with you: a lot of the time, EA frameworks only take first-order impacts into account while assuming linearity between cause areas.