Hi Ann! Congratulations on this excellent piece :)
I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:
Of course, co-benefits only affect the importance of an issue and don’t affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness.
I think I disagree with this for two reasons:
1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
2. It seems like considering co-benefits does affect tractability, but the tractability of these co-benefit issue areas, rather than of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.
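To make the first point a bit more concrete, here's a toy sketch in Python (my own made-up numbers and functional form, not anything from the post): if co-benefits revise the impact estimate upward by a constant factor at every funding level, then the marginal impact of the next dollar rises by that same factor everywhere.

    from math import log1p

    def impact(funding_musd, scale=1.0):
        # Total impact as a concave (diminishing-returns) function of funding;
        # `scale` stands in for revising impact estimates upward for co-benefits.
        return scale * 100 * log1p(funding_musd)

    next_million = 1.0  # look at the next $1M on the margin
    for spent in [10.0, 100.0, 1000.0]:  # invented funding levels, in $M
        base = impact(spent + next_million) - impact(spent)
        with_cobenefits = impact(spent + next_million, scale=1.5) - impact(spent, scale=1.5)
        print(f"at ${spent:.0f}M spent: marginal impact {base:.3f} -> {with_cobenefits:.3f}")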
The section that struck me was:
climate change is somewhat unique in that its harms are horrible and have time-limited solutions; the growth rate of the harms is larger, and the longer we wait to solve them the less we will be able to do.
To be fair, other x-risks are also time-limited. Eg if nuclear war is currently going to happen in t years, then by next year we will only have t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, tractability diminishes the longer we wait, as well as the timeframe. Compared to the AI case, for example, where the risk itself is unclear, I think this weighing makes climate change mitigation much more attractive.
Thanks for a great read!
Hi Rocket, thanks for sharing these thoughts (and I’m sorry it’s taken me so long to get back to you)!
To respond to your specific points:
1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective.
2. It seems like considering co-benefits does affect tractability, but the tractability of these co-benefit issue areas, rather than of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.
I certainly agree with this; I was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Ultimately, that should be evaluated on a case-by-case basis.
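To gesture at what I mean with some made-up numbers (a rough sketch using the usual importance × tractability × neglectedness product as a stand-in for marginal cost-effectiveness, not a real estimate): even a sizeable upward revision in importance can leave a cause behind an alternative whose tractability and neglectedness are higher.

    def marginal_ce(importance, tractability, neglectedness):
        # Crude product heuristic standing in for marginal cost-effectiveness.
        return importance * tractability * neglectedness

    climate_without_cobenefits = marginal_ce(importance=10, tractability=0.4, neglectedness=0.2)  # 0.8
    climate_with_cobenefits = marginal_ce(importance=15, tractability=0.4, neglectedness=0.2)     # 1.2
    alternative_cause = marginal_ce(importance=6, tractability=0.8, neglectedness=0.9)            # 4.32

    # Counting co-benefits does raise the climate figure, but with these invented
    # numbers it still trails the alternative cause.
    print(climate_without_cobenefits, climate_with_cobenefits, alternative_cause)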
To be fair, other x-risks are also time-limited. Eg if nuclear war is currently going to happen in t years, then by next year we will only have t−1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, tractability diminishes the longer we wait, as well as the timeframe.
This is true (and very well-phrased!). I think there’s some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I’ll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.
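One crude way to gesture at that distinction (again with invented numbers, purely illustrative): scalar harms come in degrees and grow the longer we delay, so partial progress still buys partial relief, whereas a binary risk's expected harm is roughly probability times the magnitude of a single catastrophe, so progress mostly means shaving the probability.

    def scalar_expected_harm(years_of_delay, damages_per_year=2.0, growth=1.05):
        # "Scalar" harms: damages accumulate and compound with delay.
        return sum(damages_per_year * growth ** t for t in range(years_of_delay))

    def binary_expected_harm(probability, magnitude=1000.0):
        # "Binary" risks: expected harm is probability times catastrophe size.
        return probability * magnitude

    print(scalar_expected_harm(10), scalar_expected_harm(20))      # grows with delay
    print(binary_expected_harm(0.05), binary_expected_harm(0.04))  # shrinks as probability is reduced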