In practice, to most people, existential risks (everyone dies) and civilizational risks (some people survive, but in a Mad Max post-apocalyptic state) are both so awful that they count as negative infinity, and warrant equal effort to avert.
So I think this is a very reasonable position to have. I think it’s the type of position that should lead someone to be comparatively much less interested in the “biology can’t kill everyone”-style arguments, and comparatively more concerned about biorisk and AI misuse risk than about AGI takeover risk. Depending on the details of the collapse[1] and what counts as “negative infinity”, you might also be substantially more concerned about nuclear risk.
But I don’t see a case for climate change risk specifically approaching anywhere near those levels, especially on timescales less than 100 years or so. My understanding is that the academic consensus on climate change is very far from treating it as a near-term (or medium-term) civilizational collapse risk, and when academic climate economists argue about the damage function, the boundaries of debate are on the order of percentage points[2] of GDP. Which is terrible, sure, and arguably qualifies as a GCR, but pretty far away from a Mad Max apocalyptic state[3]. So on the object level, such claims will literally be wrong. That said, I think the wording of the CAIS statement, “societal-scale risks such as …”, is broad enough to be inclusive of climate change, so someone editing that statement to include climate change won’t directly be lying by my lights.
I’m going to be real: I don’t much trust the rationality of anyone who right now believes that climate change is straight-up fake, as some do; that is a position patently divorced from reality.
I’m often tempted to have views like this. But as my friend roughly puts it, “once you apply the standard of ‘good person’ to people you interact with, you’d soon find yourself without any allies, friends, employers, or idols.”
There are many commonly held views that I think are either divorced from reality or morally incompetent. Some people think AI risk isn’t real. Some (actually, most) people think there are literal God(s). Some people think there is no chance that chickens are conscious. Some people think chickens are probably conscious but that it’s acceptable to torture them for food anyway. Some people think vaccines cause autism. Some people oppose human challenge trials. Some people think it’s immoral to do cost-effectiveness estimates to evaluate charities. Some people think climate change poses an extinction-level threat within 30 years. Some people think it’s acceptable to value citizens >1000x the value of foreigners. Some people think John Rawls is internally consistent. Some people have strong and open racial prejudices. Some people have secret prejudices that they don’t display but that drive much of their professional and private lives. Some people think good internet discourse practice includes randomly calling other people racists or Nazis. Some people think evolution is fake. Some people believe in fan death.
And these are just viewpoints, when arguably it’s more important to do good actions than to hold good opinions. Even though I’m often tempted to want to only interact or work with non-terrible people, in terms of practical political coalition-building, I suspect the only way to get things done is by being willing to work with fairly terrible (by my lights) people, while perhaps still being willing to exclude extremely terrible people. Our starting point is the crooked timber of humanity; the trick is creating the right incentive structures and/or memes and/or coalitions to build something great, or at least acceptable.
[1] I.e., is it really civilizational collapse if it’s something that massively affects the Northern hemisphere but leaves Australia and South America without a >50% reduction in standard of living? Reasonable people can disagree, I think.
[2] Maybe occasionally low tens of percentage points? I haven’t seen anything that suggests this, but I’m not well-versed in the literature here.
[3] World GDP per capita was 50% lower in 2000, and I think most places in 2000 did not resemble a post-apocalyptic state, with the exception of a few failed states.
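To make footnote [3]’s comparison concrete, here is a minimal back-of-the-envelope sketch (Python). The ~2%/year world per-capita growth rate is my own ballpark assumption rather than a figure from the comment, and the function is purely illustrative: it converts a permanent GDP loss into the number of years of growth it would erase.

```python
import math

def years_of_growth_erased(damage_fraction: float, annual_growth: float = 0.02) -> float:
    """Years of per-capita GDP growth undone by a permanent loss of
    `damage_fraction`, assuming constant exponential growth."""
    return math.log(1 / (1 - damage_fraction)) / math.log(1 + annual_growth)

for d in (0.05, 0.10, 0.20, 0.50):
    print(f"{d:.0%} GDP loss ≈ {years_of_growth_erased(d):.0f} years of growth")
```

On these assumptions, even a 50% loss (the high end of the footnote’s comparison) rewinds living standards by roughly 35 years, i.e. to a recent and recognizably non-apocalyptic past; percentage-point losses rewind only a few years.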
But I don’t see a case for climate change risk specifically approaching anywhere near those levels, especially on timescales less than 100 years or so.
I think the thing with climate change is that, unlike those other things, it’s not just a vague possibility; it’s a certainty. The uncertainty lies in the precise magnitude of the risk. At the higher end of warming it gets downright dangerous (not to mention that it can be the trigger for other crises: imagine India suffering from killer heatwaves leading to additional friction with Pakistan, both nuclear powers). So the baseline is merely “a lot of dead people, a lot of lost wealth, a lot of things to somehow fix or repair”, and the tail outcomes are potentially much, much worse. They’re considered unlikely, but of course we may have overlooked a feedback loop or tipping point. I honestly don’t feel as confident that climate change isn’t a big risk to our civilization when it’s likely to stress multiple infrastructures at once: food supply, combined with a need to change our energy usage, combined with a need to provide more AC and refrigeration as a matter of survival in some regions, combined with rising sea levels that may eat into valuable land and cities.
I’m often tempted to have views like this. But as my friend roughly puts it, “once you apply the standard of ‘good person’ to people you interact with, you’d soon find yourself without any allies, friends, employers, or idols.”
I’m not saying “these people are evil and irredeemable, ignore them”. But I am saying they are being fundamentally irrational about it. “You can’t reason a person out of a position they didn’t reason themselves into.” In other words, I don’t think it’s worth omitting climate change merely for the sake of not alienating them, when the result is that it will alienate many more people on other sides of the spectrum. Besides, those among them who think like you might also go, “oh well, these guys are wrong about climate change, but I can’t hold it against them since they had to put together a compromise statement”. I think many of the current minimizing attitudes towards AI risk are also irrational, but it’s still a much newer topic and a more speculative one, with less evidence behind it. People might still be in the “figuring things out” stage for that, while for climate change, opinions are very much fossilized, and in some cases determined by things other than rational evaluation of the evidence. Basically, I think in this specific circumstance there is no way of being neutral: either mentioning or not mentioning climate change gets read as a signal. You can only pick which side of the issue to stand on, and if you think you have a better shot with people who ground their thinking in evidence, the side that believes climate change is real has more of those.
Thanks for engaging.