As I explain in my comment, I really don’t think that either claim is the source of most disagreements: the relative timing of AI, nano, and biotech versus climate impact is the real crux.
I think there’s a difference between being the source of most uncertainty and being the source of the biggest disagreement.
As I understand cwa’s “Claim 1”, it really just says “the largest uncertainty in the badness of climate change is the level of damage, not emissions or warming levels, which are less uncertain”.
This can be true even if one thinks the indirect existential risk from climate change is very low.
Similarly, the core of cwa’s second claim does not seem to be a particular statement about the size of the risk, but rather that current knowledge does not constrain it much, and that we cannot rule out high risks based on models that are extremely limited and that a priori exclude the very mechanisms which people worried about indirect existential/catastrophic risk from climate think contain the majority of the damage.
I’m claiming, per the other comment, that relative speed would be both the largest substantive uncertainty and the largest source of disagreement.
Despite Claim 1, if technology changes rapidly, the emissions and warming levels that are “less uncertain” could change drastically faster, which changes the question in important ways. And I think Claim 2 is mistaken in its implication: even if the risks of existential catastrophe from AI and biorisk are not obviously several orders of magnitude higher (though I claim that they are), it is much harder to argue that the probability of having radically transformative technology of one of the two types is merely of the same order of magnitude, and that’s the necessary crux.