Thank you, Konstantin, for a closely argued response. I agree with much of what you say (though I would stretch the 1,000-year figure much further). Any disagreement with your conclusion (“there is probably room for some funding for climate change from a longtermist perspective, … I’d be happy to see a small fraction of longtermist resources directed to this problem”) may pertain only to numbers: to the exact size of that “small fraction”. I agree, specifically, that from a longtermist perspective it TENDS to be MUCH more urgent to fund AI safety and biosecurity work. Remember that I ENDORSE the “admittedly greater extinction potential and scantier funding of some other existential risks as broad categories”...
Your point about what one may call the potential reversibility of climate change, or of its worst sequelae, is definitely worth developing. I have discussed it with others but haven’t seen it developed at length in writing. Sometimes it seems to be what longtermists mean when they write that climate change is not a neglected area. Analytically, though, it is separate from, e.g., the claim that others are already on the case of curbing ongoing emissions (which are therefore not a neglected area). A related challenge for you: the potential reversibility of a long-term risk is not only a reason to prioritize, over preventing that risk, the prevention of other risks whose onset is irreversible and hence more calamitous. It is also a reason to prioritize one area of work on that risk, namely its effective reversal. Indeed, when I wrote that longtermists should invest in geoengineering, I had in mind primarily strategies like carbon capture, which could be seen as reversing some harms of our greenhouse gas emissions.
Nir