I think your points are basically right, but they are not enough to show that climate change is nearly as bad as biorisk or AI misalignment. You may get close to nuclear risk, but I’m skeptical of that as well. My main point is that extinction from climate change is much more speculative than extinction from the other causes.
Reasons:
There is some risk of runaway climate change. However, this risk seems small according to GWWC’s article, and it would be overconfident to say that humanity can’t protect itself against it with future technology. There is also much more time left before we reach more than 5°C of warming than before the risks from engineered pathogens and powerful AI rise sharply.
Climate change will be very destabilizing. However, it’s very hard to predict the long-term consequences of this, so if you’re motivated by a longtermist framework, you should focus on tackling the more plausible risks of engineered pathogens and misaligned AI more directly. One caveat here is the perspective of cascading risks, which EA is not taking very seriously at the moment.
The impacts on quality of life are not compelling from a longtermist standpoint, as I expect them to last much less than 1,000 years, whereas humanity and its descendants could live for billions of years. I also expect only a tiny fraction of future sentient beings to live on Earth.
Another point I find missing in debates on x-risk from climate change is that humans would likely intervene in the climate at some stage if it became a serious threat to our economies and even our lives. I haven’t seen anyone make this point before, but please point me to sources if they exist.
If you are still new to EA, you may come to understand the current position better as you learn more about the urgency of biorisk and especially AI risk. That said, there is probably room for some funding for climate change from a longtermist perspective, and given the uncertainty surrounding cascading risks, I’d be happy to see a small fraction of longtermist resources directed to this problem.
Thank you, Konstantin, for a closely argued response. I agree with much of what you say (though I would stretch the 1,000-year figure much longer). Any disagreement with your conclusion (“there is probably room for some funding for climate change from a longtermist perspective, … I’d be happy to see a small fraction of longtermist resources directed to this problem”) may pertain only to numbers, that is, to the exact size of the “small fraction”. I agree, specifically, that it TENDS to be MUCH more urgent to fund AI safety and biosecurity work, from a longtermist perspective. Remember that I ENDORSE the “admittedly greater extinction potential and scantier funding of some other existential risks as broad categories”...
Your point about what one may call the potential reversibility of climate change, or of its worst sequelae, is definitely worth developing. I have discussed it with others but haven’t seen it developed at length in writing. Sometimes it is what longtermists seem to mean when they write that climate change is not a neglected area. Analytically, though, it is separate from, e.g., the claim that others are already on the case of curbing ongoing emissions (which are therefore not a neglected area). A related challenge for you: the potential reversibility of a long-term risk is not only a reason to prioritize, over preventing that risk, the prevention of other risks whose onset is irreversible and hence more calamitous. It is also a reason to prioritize one area of work on that risk, namely, its effective reversal. Indeed, when I wrote that longtermists should invest in geoengineering, I had in mind primarily strategies like carbon capture, which could be seen as reversing some harms of our greenhouse gas emissions.
Nir