Thanks for this very thoughtful and well-researched post!
Very much agree with “Claim 1”. This seems to be not only the most severe source of uncertainty and disagreement between EAs (e.g. John and I disagree on this even though we agree on the “Good News on Climate Change” directional update), but also among experts generally and in the published literature (the variation in damage functions is larger than that from climate sensitivity & emissions scenarios).
I also agree with a large part of “Claim 2”, in particular that the estimates of indirect existential risk have so far not been particularly strongly justified (the discussion here is interesting on this).
Great to hear, thanks! Appreciate the link to the discussion, and the points you make—I definitely agree that there’s no reason to think that the direct and indirect risks from climate change are anywhere near the same order of magnitude, and that this is one way an unjustified sense of confidence can creep in.
As I explain in my comment, I really don’t think that either claim is the source of most disagreement; the relative timing of AI, nano, and biotech versus climate impacts is the real crux.
I think there’s a difference between being the source of most uncertainty and being the source of the biggest disagreement.
As I understand cwa’s “Claim 1”, it really just says “the largest uncertainty in the badness of climate change is the level of damage, not emissions or warming levels, which are less uncertain”.
This can be true even if one thinks the indirect existential risk of climate is very low.
Similarly, the core of cwa’s second claim does not seem to be a particular statement about the size of the risk, but rather that current knowledge does not constrain it very much: we cannot rule out high risks based on models that are extremely limited and that a priori exclude the very mechanisms which, according to people worried about indirect existential/catastrophic risk from climate, contain the majority of the damage.
I’m claiming, per the other comment, that relative speed is both the largest substantive uncertainty and the largest source of disagreement.
Despite Claim 1, if technology changes rapidly, the emissions and warming levels that are “less uncertain” could change drastically faster, which changes the question in important ways. And I think Claim 2 is mistaken in its implication: even if the risks of existential catastrophe from AI and biorisk are not obviously several orders of magnitude higher (though I claim that they are), the probability of having radically transformative technology of one of the two types is far less plausibly of the same order of magnitude, and that’s the necessary crux.