Thanks for this post. I appreciated the "empirical ideas about tackling climate change", but I also found the framing of climate change as a multiplier of very bad outcomes useful.
I wanted to pick up on the "urgency" idea. Doesn't urgency just mean there are more ways in which the problem is important, because it interacts with other issues? That is, considering urgency means the importance/scale is high now, even if it might not be as high in the future?
Happy to be challenged on this; I use the ITN framework a lot (I’m sure we all do), so substantial criticism of that model seems worth delving into.
I’d agree that “urgency” is subsumed by “importance”, but it’s also worth pointing out explicitly, as something that might be overlooked if it is not mentioned.
Yes, the urgency point could indeed fall within the importance lens as you suggest. My concern was that some crude measures of importance didn’t consider this interactive effect in a dynamic world.
In Owen C-B's 'Prospecting for Gold' talk, he briefly discusses urgency as part of tractability (something tractable now could be less tractable in the future).
I argued in my 80,000 Hours podcast that there might be something to a separate component of urgency. We generally define cost-effectiveness as something like the total increase in utility per dollar, not time discounted. This can be worked out for AI and alternate foods, which we have done here. Suppose they had the same cost-effectiveness, so we should be putting money into both of them. However, because an agricultural catastrophe is more likely to happen in the next 10 years than an AI catastrophe, the optimal course of action is to spend more of the optimal amount of money on alternate foods than on AI over those 10 years. A way of thinking about this is that the near-term return on investment of alternate foods is significantly higher. We might even be able to monetize that return on investment by making a deal with a government, and then have more money to spend on AI.

This logic applies to climate disasters that could happen soon, like coincident extreme weather causing floods or droughts on multiple continents. However, I don't think it applies to the tail risk of climate change (greater than 5°C of global warming), because that could not happen soon. Of course, one could argue that we should act now to reduce climate tail risk. But if there are many other things we can do to increase welfare with a higher return on investment, we should do those things first. Then we will have more money to deal with the problem later, for example by paying for expensive removal of CO2 from the air.
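To make the timing point concrete, here is a minimal toy sketch in Python. All the numbers and the simple near-term-ROI model are my own hypothetical assumptions for illustration, not figures from the linked analysis: it just shows how two causes with equal undiscounted cost-effectiveness can still call for different spending schedules when their risks arrive on different timescales.

```python
# Toy illustration (all numbers hypothetical) of urgency as a separate
# component: equal long-run cost-effectiveness, different risk timing.

utility_per_dollar = 1.0    # assumed identical long-run cost-effectiveness
p_agri_next_decade = 0.10   # hypothetical: chance of agricultural catastrophe in 10 yrs
p_ai_next_decade = 0.01     # hypothetical: chance of AI catastrophe in 10 yrs

# Expected utility per dollar realised *within the next decade* is the
# long-run cost-effectiveness weighted by the chance the risk actually
# materialises in that window.
roi_agri_near_term = utility_per_dollar * p_agri_next_decade
roi_ai_near_term = utility_per_dollar * p_ai_next_decade

print(f"Near-term ROI, alternate foods: {roi_agri_near_term:.3f}")
print(f"Near-term ROI, AI safety:       {roi_ai_near_term:.3f}")

# With these made-up numbers, a dollar spent now on alternate foods pays
# off ~10x sooner in expectation, so the optimal schedule front-loads
# alternate foods and shifts spending toward AI later.
```

Under this (deliberately crude) model, urgency shows up as the gap between the two near-term ROI figures even though total cost-effectiveness is identical, which is the sense in which it seems hard to reduce fully to importance or tractability.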