Hi Rocket, thanks for sharing these thoughts (and I’m sorry it’s taken me so long to get back to you)!
To respond to your specific points:
1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
2. It seems like considering co-benefits does affect tractability, but it affects the tractability of these co-benefit issue areas rather than that of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.
I certainly agree with this; I was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Certainly that should be evaluated on a case-by-case basis.
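For what it's worth, here's one way to make the "on the margin" point precise (just a sketch in made-up notation, not anything from your comment or the original post): write the good done by the climate portfolio as a function u(f) of total funding f, so that marginal cost-effectiveness at the current funding level f_0 is the derivative u'(f_0). Revising impact estimates upwards at every funding level then amounts to

$$\tilde{u}(f) = k\,u(f),\ k > 1 \;\Rightarrow\; \tilde{u}'(f_0) = k\,u'(f_0) > u'(f_0),$$

ie, the margin improves by the same factor k, which is your point 1. But if low tractability and low neglectedness already make u'(f_0) small, then k·u'(f_0) can still fall short of the marginal cost-effectiveness available in other cause areas, which is the worry I was trying to flag.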
To be fair, other x-risks are also time-limited. Eg, if nuclear war is currently going to happen in t years, then by next year we will only have t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, it's not just the timeframe that shrinks the longer we wait; tractability diminishes as well.
This is true (and very well-phrased!). I think there’s some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I’ll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.
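(One rough way I might eventually formalise that distinction, purely as a sketch with made-up notation: for climate change, expected harm looks something like $\mathbb{E}[D(X)]$ for a damage function $D$ that grows fairly smoothly in a continuous variable $X$ such as warming, so partial progress buys partial harm reduction; for nuclear war or catastrophic AI, it looks more like $p \cdot D_{\text{cat}}$, where $p$ is the probability of a roughly all-or-nothing event and $D_{\text{cat}}$ is approximately fixed, so progress mostly means reducing $p$. I'm not committed to that framing, but it's roughly the shape of what I had in mind.)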