I’m glad they’re looking for charities in the sub-$10/ton range! I suspect there is limited room for more funding at that price, but it’s still marginally good. Finding cheaper climate interventions is really the only part of this equation we can control.
I disagree with your 10^12 QALYs analysis. First, I’d want a citation for the assumption that livable space will be reduced by enough to support 1 billion people. Second, the earth isn’t at maximum capacity, and I’m not sure population trends are expected to peak above capacity. Third, you shouldn’t project out 100,000 years without temporal discounting: our ability to predict the far future is poor, and discounting is precisely the tool for avoiding overconfidence there. For example, it’s hard to predict what technology will arise, and assuming even a 1% chance that we’ll never develop geo-engineering over such a long timespan is a bad assumption.
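To make the discounting point concrete, here’s a minimal back-of-the-envelope sketch. The constant 1-QALY-per-year stream and the discount rates are illustrative choices of mine, not numbers from your analysis; the point is just how fast a 100,000-year total collapses under even a tiny annual discount rate:

```python
# Minimal sketch (toy numbers, not from the analysis above): how much of a
# constant 100,000-year QALY stream survives annual exponential discounting.

def surviving_fraction(years: int, rate: float) -> float:
    """Fraction of the undiscounted QALY total that survives an annual
    exponential discount of `rate`, assuming 1 QALY per year."""
    undiscounted = years  # 1 QALY per year, no discounting
    discounted = sum((1 - rate) ** t for t in range(years))
    return discounted / undiscounted

for rate in (0.001, 0.01, 0.03):
    print(f"discount rate {rate:.1%}: {surviving_fraction(100_000, rate):.5f}")

# Roughly 0.01000 at 0.1%/yr, 0.00100 at 1%/yr, 0.00033 at 3%/yr.
# Even a 0.1%/yr rate wipes out ~99% of the nominal total, which is why
# a 10^12-QALY figure hinges almost entirely on the discounting assumption.
```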
I agree about existential risks. If climate change causes geopolitical stress that increases the chance of nuclear war by even a small amount, that’s obviously bad. I included an x-risk model where we assume climate change kills all humans, but I recognize that extinction would be bad above and beyond the tragic loss of everyone currently alive, so cashing that risk out into dollars per life may not be the right way to value it.
About longtermism in general, I basically think EAs are super overconfident about long-term predictions and don’t apply exponential discounting nearly enough. Even this analysis, going out only 100 years, is probably overconfident, because so much is going to change over that time.
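For a sense of scale on the 100-year point, here’s a similarly toy calculation (again with rates I’ve picked for illustration, not anything from your model) of how much weight an exponential discounter puts on an outcome a century out:

```python
# Toy numbers of mine, not the author's: the weight exponential
# discounting assigns to an outcome 100 years from now.
for rate in (0.01, 0.03, 0.05):
    print(f"discount rate {rate:.0%}: weight on year 100 = {(1 - rate) ** 100:.4f}")

# ~0.3660 at 1%/yr, ~0.0476 at 3%/yr, ~0.0059 at 5%/yr -- the choice of
# discount rate dominates any century-scale estimate.
```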