Hi Johannes!
Your framework appears to focus on emissions, which makes sense for extreme GHG warming scenarios. However, for addressing something that could happen in the next 10 years, like coincident extreme weather, I think it would make more sense to focus on resilience. Are you saying that you will do more research on resilience / consider grants on resilience?
By the way, are your donors moving to shorter AGI timelines, as many EAs are? That would be a reason to focus more on climate catastrophes that could happen in the next couple of decades. And is anyone concerned about AGI-induced climate change? AGI may not want to intentionally destroy humans if it has some kindness, but just by scaling up very rapidly it would produce a lot of heat (especially if it turns to nuclear power or space-based solar power).
Hi Dave!
Sorry for the delay.
1. We focus on minimizing expected damage, and that could, in principle, include interventions focused on resilience. Whether we spend more time researching this depends principally on the likelihood of it changing our conclusions / our grantmaking.
2. I think the recent review papers on tipping points (e.g. recently discussed here) agree that there isn’t really a scenario of super-abrupt dramatic change; most tipping elements take decades to materialize. As such, I don’t think it is super-likely that we get civilizationally overwhelmed by these events, so the value of resilience to these scenarios needs to be probability-weighted by these scenarios being fairly unlikely. (Aside: I am very on board with ALLFED’s mission, but I think it makes more sense for it to be motivated from the perspective of nuclear winter risk than from climate risks to food security, which seem fairly remote on a civilizational scale.)
3. I think the proximate impact of donors getting shorter AI timelines should be for those donors to engage in reducing AI risk, not to change their prioritization within climate. It is quite unclear what the climate implications of rapid AGI would be (they could be those you mention, but could also be rapid progress in clean tech, etc.). I do agree that it is a mild update towards shorter-term actions (e.g. if you think there is a 10% chance of AGI by 2032, then this somewhat decreases the value of climate actions that would have most of their effects only after 2032), but it does not seem dramatic.
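To make the size of that update concrete, here is a minimal sketch of the probability-weighting in point 3. All numbers are illustrative assumptions (the 10% AGI-by-2032 figure from the example above, an arbitrary baseline value, and the pessimistic assumption that AGI fully preempts the action's later benefits), not claims from the comment:

```python
# Illustrative sketch: discounting a climate action whose benefits
# materialize only after 2032 by the chance that AGI arrives first.

p_agi_by_2032 = 0.10    # assumed probability of AGI by 2032
baseline_value = 100.0  # arbitrary units of expected climate benefit

# Pessimistic assumption: if AGI arrives first, the action's long-run
# benefits are fully preempted; otherwise they accrue in full.
discounted_value = (1 - p_agi_by_2032) * baseline_value

print(discounted_value)  # 90.0
```

Even under the most pessimistic assumption, the expected value falls by only 10%, which is why the update reads as mild rather than dramatic.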