Thank you for writing this post. I want to point out that your conclusions are highly dependent on your ethical and empirical assumptions. Here are some thoughts about what could change your conclusion:
If you donate to the top charities recommended by Founders Pledge, you can probably do much better than $30/ton. I have not been able to find the precise numbers quickly, but if I remember correctly, $1/ton is possible under reasonable assumptions. This would change your average estimate to $25,000 per life saved.
Let us assume that the maximum number of happy human beings that could live on Earth is reduced by 1 billion by rising sea levels, loss of agricultural land, etc. Let us further assume that these consequences of global warming persist for 100,000 years and that there is a 1% probability that no game-changing technology, such as advanced geo-engineering, will be developed. This would mean that 10^12 QALYs are lost, and the effectiveness of a dollar would rise accordingly. Of course, this argument relies on your rate of temporal discounting.
Climate change could also increase other existential risks. For example, there could be a war over resources fought with nuclear weapons, synthetic pathogens, or malevolent AIs.
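To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch using only the assumptions stated above; the variable names are just illustrative:

```python
# Back-of-the-envelope check of the 10^12 QALY figure, using only the
# assumptions stated above (variable names are illustrative).
people_lost_per_year = 1e9   # assumed reduction in Earth's long-run carrying capacity
duration_years = 100_000     # assumed persistence of the damage
p_no_fix = 0.01              # assumed chance that no game-changing technology arrives

expected_qalys_lost = people_lost_per_year * duration_years * p_no_fix
print(f"{expected_qalys_lost:.0e} QALYs")  # -> 1e+12 QALYs
```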
The message I want to send is not that your analysis is wrong, but that evaluating longtermist interventions is a huge mess since different reasonable assumptions lead to wildly diverging answers.
Also, if you combine $1/ton with the estimated lives per ton from Bressler’s paper, then you get $4,400 per life saved.
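As a rough sketch of how both per-life figures fall out of the same conversion: the ~25,000 tons-per-life input below is simply what the $25,000-at-$1/ton estimate implies, and ~4,434 tons per life is the central estimate commonly cited from Bressler's paper; treat both as illustrative assumptions rather than numbers from the original post.

```python
# Cost per life saved = (cost per ton of CO2 averted) * (tons averted per life saved).
# Both tons-per-life inputs are illustrative: ~25,000 t/life is what the
# $25,000-at-$1/ton figure implies, and ~4,434 t/life is the central estimate
# commonly cited from Bressler (2021).
def cost_per_life(cost_per_ton_usd: float, tons_per_life: float) -> float:
    """Dollars per life saved, given $/tCO2 and tCO2 per life."""
    return cost_per_ton_usd * tons_per_life

print(cost_per_life(1.0, 25_000))  # 25000.0 -> the ~$25,000/life figure above
print(cost_per_life(1.0, 4_434))   # 4434.0  -> the ~$4,400/life Bressler-based figure
```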
I think this might be the article from Founders Pledge that you are thinking of 💚
‘Climate change could also increase other existential risks. For example, there could be a war over resources fought with nuclear weapons, synthetic pathogens, or malevolent AIs.’
To add to this: solar geoengineering could be a major risk (and a risk factor for inter-state conflict) that becomes increasingly likely under severe AGW scenarios, since people accept more drastic measures in desperate circumstances.
I’m glad they’re looking for charities in the sub-$10/ton range! I suspect there is limited room for funding at that value, but it’s still marginally good. Finding cheaper climate interventions is really the only part of this equation we can control.
I disagree with your 10^12 QALYs analysis. First, I need a citation for the assumption that livable space will shrink enough to support 1 billion fewer people. Second, the Earth isn’t at maximum capacity, and I’m not sure population trends are expected to peak above capacity. Third, you shouldn’t project out 100,000 years without temporal discounting: our ability to predict the far future is poor, and discounting guards against overconfidence there. For example, it’s hard to predict what technology will arise, and assuming a 1% chance that we’ll never develop geo-engineering over such a long timespan is a bad assumption.
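To make the discounting point concrete, here is a minimal sketch (the 1% annual discount rate is purely illustrative, not a number from the post or the comment): the same 1-billion-people / 100,000-year / 1% scenario gives 10^12 QALYs undiscounted, but even a modest constant discount rate shrinks the present value by roughly three orders of magnitude.

```python
# Illustrative only: how a constant annual discount rate changes the expected
# QALY loss in the 1-billion-people / 100,000-year / 1% scenario above.
annual_loss = 1e9 * 0.01   # expected QALYs lost per year (1B people x 1% probability)
years = 100_000
discount_rate = 0.01       # assumed 1% per year, chosen only for illustration

undiscounted = annual_loss * years

# Sum the discounted yearly losses; the discount factor shrinks each year.
discounted = 0.0
discount_factor = 1.0
for _ in range(years):
    discounted += annual_loss * discount_factor
    discount_factor /= 1 + discount_rate

print(f"undiscounted: {undiscounted:.1e}")  # 1.0e+12
print(f"discounted:   {discounted:.1e}")    # ~1.0e+09, roughly a thousandfold smaller
```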
I agree about existential risks. If climate change causes geopolitical stress that increases the chance of nuclear war by even a small amount, that’s obviously bad. I included an x-risk model where we assume climate change kills all humans, but I understand that x-risk would be bad above and beyond the tragic loss of all currently living individuals, so cashing that risk out into dollars per life is maybe incorrect.
About longtermism in general, I basically think EAs are super overconfident about long-term predictions and don’t apply exponential discounting nearly enough. Even this analysis going out 100 years is probably overconfident, because so much is going to change over that time.