Hey there, interesting article! In this talk from the most recent EA Global, Niel Bowerman (climate physics PhD and now AI specialist at 80,000 Hours) gives some thoughts on the relationship between climate change and existential risk. Essentially, I think it provides some evidence relevant to point 2 on your list.
In his talk, Niel argues that climate change could, under some scenarios, cause human extinction in itself. These scenarios are quite unlikely, but they have non-zero probabilities. And since emissions are likely to keep rising well beyond 2100, beware the ‘2100 fallacy’ of cutting impact analyses short at an arbitrary point in time.
The larger contribution, very roughly, probably comes from climate change contributing to social collapse and conflict, which in turn raise other existential risks; Toby Ord has called this an ‘existential risk factor’. I think the question isn’t “Is climate change an existential risk?” but “Does climate change contribute to existential risk?”, in which case it seems the answer may well be yes. Or perhaps it’s “Is climate change important in the long term?”, in which case, if we’re thinking across multiple centuries and looking at, say, >6°C of warming by 2300, then even with lots of technological development I think the answer is yes.
All of this being said, I still think it’s fair to argue that AI, bio, and nuclear are more neglected and tractable than climate change.
What do you think of Niel’s talk and this framing?
(Commenting before watching the video) I think you’re actually understating how much Niel knows:
From https://www.fhi.ox.ac.uk/team/niel-bowerman/:
> During the 2008 presidential election Niel was a member of President Obama’s Energy and Environment Policy Team. Niel sat on the Executive Committee of the G8 Research Group: UK, was the Executive Director of Climatico, and was Climate Science Advisor to the Office of the President of the Maldives.
I also recall him doing some climate work for the UN, but it wasn’t listed there. In light of that, I think I’m pretty comfortable just taking the outside view and deferring to him on model uncertainty and extreme low-probability outcomes.
> Or perhaps it’s “Is climate change important in the long term?”, in which case, if we’re thinking across multiple centuries and looking at, say, >6°C of warming by 2300, then even with lots of technological development I think the answer is yes.
I don’t know; I think that even if temperatures rise by 10°C, humanity will still survive, but yes, in that case climate change would be very, very important.
Re timing: I picked 2100 because that’s where most forecasts I see stop, but I think it’s defensible because I have an inside view that things will most likely get crazy for other reasons by then. I think this is also defensible on an outside view: world GDP has doubled every ~20 years since 1950, so very naively we might expect a world that’s 16x(!) richer by 2100. Even if we don’t speculate on specific technologies, it’s hard to imagine a 16x-richer world that isn’t meaningfully different from our own in hard-to-predict and hard-to-plan-for ways.
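As a very rough sanity check on that 16x figure, here is a minimal back-of-envelope sketch; the 2020 start year and the exact 20-year doubling time are assumptions for illustration, not figures from the thread:

```python
# Back-of-envelope check on the "16x richer by 2100" claim.
# Assumptions (illustrative, not from the comment): growth measured from 2020,
# world GDP doubling roughly every 20 years.
doubling_time_years = 20
start_year, end_year = 2020, 2100

doublings = (end_year - start_year) / doubling_time_years  # 80 / 20 = 4
growth_factor = 2 ** doublings                             # 2**4 = 16

print(f"~{doublings:.0f} doublings -> ~{growth_factor:.0f}x richer by {end_year}")
```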
> All of this being said, I still think it’s fair to argue that AI, bio, and nuclear are more neglected and tractable than climate change.
I agree about neglectedness. I’m agnostic as to whether climate change mitigation is more tractable than biosecurity or AGI safety.