Hi Dave!
Sorry for the delay.
1. We focus on minimizing expected damage, and that could, in principle, include interventions focused on resilience. Whether we spend more time researching this depends principally on the likelihood of it changing our conclusions / grantmaking.
2. I think the recent review papers on tipping points (e.g. recently discussed here) agree that there isn’t really a scenario of super-abrupt dramatic change; most tipping elements take decades to materialize. As such, I don’t think it is super-likely that we get civilizationally overwhelmed by these events, so the value of resilience to these scenarios needs to be weighted by their fairly low probability. (Aside: I am very on board with ALLFED’s mission, but I think it makes more sense for it to be motivated from the perspective of nuclear winter risk than from climate risks to food security, which seem fairly remote on a civilizational scale.)
3. I think the proximate impact of donors adopting shorter AI timelines should be for those donors to engage in reducing AI risk, not to change their within-climate prioritization. It is quite unclear what the climate implications of rapid AGI would be (they could be those you mention, but could also include rapid progress in clean tech, etc.). I do agree that it is a mild update towards shorter-term actions (e.g. if you think there is a 10% chance of AGI by 2032, this somewhat decreases the value of climate actions that would have most of their effects only after 2032), but it does not seem dramatic.
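To make the size of that update concrete, here is a rough sketch under the (strong, simplifying) assumption that AGI arrival would fully negate the value of climate effects that materialize after it:

$$
V_{\text{adjusted}} = \big(1 - P(\text{AGI by 2032})\big) \cdot V = (1 - 0.1) \cdot V = 0.9\,V
$$

That is, roughly a 10% haircut on the value of actions whose benefits arrive only after 2032. And this is plausibly an upper bound on the discount, since AGI would not necessarily negate those benefits, so the adjustment is real but, as I say above, not dramatic.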