I want to add another dimension that underlies a huge amount of implicit disagreement: climate change impact timelines versus timelines for technological growth and AI or other technology takeoff. To see climate as a critical threat, you need the other key sources of risk to be much farther away than current expectations suggest.
To explain: it seems likely that most of the severe climate impact arrives post-2050, perhaps closer to 2070 or even later, i.e. well after we have passed peak emissions but before we manage net-negative emissions. But if we have built AGI, powerful nanotech, or nearly-arbitrarily-flexible synthetic biology by then, which seems likely, there are only two possibilities: either we're screwed because those technologies go wrong, or fixing climate becomes far easier, because we can drive atmospheric CO2 to an arbitrary level, whether via automated AI labor building emissions-reduction and carbon-capture installations or via nano- or bio-tech capture of atmospheric CO2. Collapse due to warming is therefore an existential threat only if nothing significant changes in our technological capabilities, and longtermist EAs have spent years explicitly arguing that this is somewhere between somewhat and incredibly unlikely.
Still, I think slow technological progress has non-trivial probability, so we should cover all our bases and ensure we don't screw up the climate. But given the lack of neglectedness, I rely on our (thoroughly mediocre but only mostly inadequate) civilization to address the problem, unfortunately far more slowly than would be smart, with catastrophic but not existentially threatening impacts in the coming decades because of that delay. In the meantime, I'm going to support CO2 mitigation and thank everyone working on this very important area, but still focus the altruistic energy I dedicate to maximizing impact elsewhere.
Thanks for this, upvoted! I agree with you that timelines seem like a really important angle that I neglected in the post—I don’t have a fully formed opinion about this yet but will think about it some more.