Thanks for rewriting and republishing this. All very interesting.
On this new revised version, something that stood out to me was the truly extreme range between the optimistic and pessimistic scenarios you describe.
I think the relative cost-effectiveness range you’ve given spans fully ten orders of magnitude, or a range of 10,000,000,000x. Even by our standards that’s a lot. If we’re really this uncertain it seems we can say almost nothing. But I don’t think we are that uncertain.
By choosing a value out in the tail for 4 different input variables all at once you’ve taken us way out into the extremes of the uncertainty bounds. It looks to me like for these scenarios you’ve chosen the 1st and 99th percentiles for SCC, η, cost of abatement, and gain from doing health, all at once.
If that’s right you’re ending up at more like 0.01 * 0.01 * 0.01 * 0.01 --> 0.000001th percentile on the cost-effectiveness output on either end (not really, because you can’t actually combine uncertainty distributions like this, but you get my general point). That seems too extreme a value to be useful to me.
Maybe you could put your distributions for the inputs into Guesstimate, which will do simulations drawing from and multiplying the inputs, and then choose the 5th and 95th percentile values for the outputs? That would go a long way towards addressing this issue.
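For illustration, here is a minimal sketch of what such a simulation might look like, assuming four purely illustrative lognormal inputs (the `lognormal_from_90ci` helper and all the intervals below are placeholders, not the post’s actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles are approximately (low, high)."""
    z = 1.645  # 95th percentile of the standard normal
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

# Illustrative 90% intervals only, NOT the post's actual inputs.
scc = lognormal_from_90ci(10, 2000, n)              # social cost of carbon, $/tCO2e
cost_per_tonne = lognormal_from_90ci(0.02, 10, n)   # $ per tCO2e averted
income_adjustment = lognormal_from_90ci(3, 300, n)  # discount for richer beneficiaries
dev_multiplier = lognormal_from_90ci(2, 20, n)      # dev/health charity value vs. cash

# Relative cost-effectiveness of the climate charity vs. the dev/health charity.
relative = (scc / cost_per_tonne) / (income_adjustment * dev_multiplier)

# Report the 5th/95th percentiles of the OUTPUT, not of each input separately.
print(np.percentile(relative, [5, 50, 95]))
```

The output’s own 90% interval is far narrower than the range you get by stacking the inputs’ 1st and 99th percentiles on top of each other.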
Hope this helps, let me know if I’ve misunderstood anything — Rob
Thanks Rob for taking the time to comment and my sincere apologies for the delay in replying.
There really is a lot of uncertainty here. Note that all parameter estimates are based on, or grounded in, empirical and published estimates. Even my adjustment for the social cost of carbon being over- or underestimated by 10x corresponds to values of similar orders of magnitude to those in the literature (see the cell comments of the spreadsheet model). For instance, one recent paper by a renowned climate economist finds that, under different model specifications, the SCC ranges from $3.38/tCO2e to $21,889/tCO2e. Ditto for the eta parameter for the income adjustment.
The “realistic estimate” model scenario uses the parameter estimates around which I perceive there to be the most consensus, but that’s just my opinion, and one can reasonably disagree with these choices.
I used the extreme scenarios to highlight the uncertainty and to support statements such as “Even if you believe the true social cost of carbon is higher than most models suggest (e.g. $20k per tonne, the most extreme value in the literature), that is still often not enough to beat global development interventions”.
Generally, my agenda was probably simpler than people might have supposed. This was not intended to be the last word on whether climate change or development interventions are always better. Rather, it’s a starting point and a “choose your own adventure” model to help prioritize between a concrete climate charity and a concrete development charity.
Note that four parameters drive the results of this analysis: the SCC, the income adjustment eta, the cost to avert a tonne of CO2, and the effectiveness of global dev/health interventions relative to cash. For the first two there really is a lot more uncertainty, but the latter two are much better pinned down. This is what makes the model valuable and gives it action-guiding potential.
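To make the structure concrete, here is a minimal sketch of how these four parameters might combine into a single comparison. This is only a rough reading of the model’s shape, not the spreadsheet itself, and every number below is an illustrative placeholder:

```python
# All values are illustrative placeholders, not the spreadsheet's inputs.
scc = 100.0           # social cost of carbon, $ of damage per tCO2e
eta = 1.5             # isoelastic income adjustment
income_ratio = 50.0   # avg income of those bearing climate damages vs. GiveDirectly recipients
cost_per_tonne = 1.0  # $ per tCO2e averted by the climate charity
dev_multiplier = 5.0  # value per $ of the dev/health charity relative to cash transfers

# Discount the SCC because climate damages fall, on average, on people much
# richer than GiveDirectly recipients.
scc_adjusted = scc / income_ratio**eta

# Benefit per dollar, expressed in cash-transfer-to-a-GiveDirectly-recipient units.
climate_per_dollar = scc_adjusted / cost_per_tonne
dev_per_dollar = dev_multiplier

print(climate_per_dollar / dev_per_dollar)  # > 1 would favour the climate charity
```

Swapping in your own values for these four inputs is the “choose your own adventure” part.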
For instance, suppose you’re a small donor who can’t decide between GiveDirectly and the Coalition for Rainforest Nations. If you believe that CfRN really has a cost-effectiveness of $0.02 per tCO2e averted, then in many scenarios, especially the realistic one around which there is most consensus, it will often beat unconditional cash transfers, even if you believe the social cost of carbon is quite low.
However, CfRN does lobbying, which is not a scalable intervention that one could invest a lot of money in. So, in contrast, if you’re a billionaire deciding between global development and climate change as a cause area for your foundation, then perhaps global development is a better bet.
Re: Monte Carlo simulations, I think some of the parameter inputs rely on Monte Carlo results already. My hunch is that there’s no free lunch here that would reduce the uncertainty much beyond the point-estimate / realistic scenario, but this is definitely something I’d like to see other people explore in future research.
I think working this through in Guesstimate rather than multiplying point estimates is really important.
I tried doing it myself with similar figures, and I found that climate change came out ~80x better than global health (even though my point estimate was that global health is better), which suggests the title of the article could maybe use editing!
When you’re dealing with huge uncertainties like these, the tails of the distribution can drive the EV, so point estimates can be pretty misleading.
Here’s a screenshot of the model: https://www.dropbox.com/s/adtwlz3k2myv8gc/Screenshot 2020-05-25 20.57.11.png?dl=0
I also tried doing the calculations in a different way that I found more intuitive—where I estimate the ‘utils’ of each intervention: https://www.dropbox.com/s/8uczqc1qhi71lte/Screenshot 2020-05-25 20.58.02.png?dl=0
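To see why the tails matter so much here, a minimal sketch with purely illustrative lognormal distributions (these are not the figures from the screenshots above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two heavy-tailed cost-effectiveness estimates, in arbitrary units (illustrative only).
climate = rng.lognormal(mean=0.0, sigma=2.0, size=n)  # very uncertain
health = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # better pinned down

ratio = climate / health

print(np.median(ratio))  # ~0.4: a point-estimate-style comparison favours health
print(ratio.mean())      # ~3: the expectation favours climate, driven by the right tail
```

Comparing point estimates is roughly comparing medians; the expected value is dominated by the small probability that the climate intervention is vastly more effective.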
Some other reasons in favour of this approach:
Rob’s point that, by multiplying together extreme values, your confidence intervals become unreasonably wide.
Some of the confidence intervals you give for the individual parameters also seem too wide (and some do not seem mathematically possible to fit to a lognormal distribution; see the sketch below).
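(If it helps, here is one way to sanity-check the lognormal point, assuming that is what was meant: a lognormal fitted to a stated 90% interval needs both bounds to be positive, and its implied median is the geometric mean of the bounds. The `lognormal_from_90ci` helper below is my own sketch, not Guesstimate’s machinery.)

```python
import math

def lognormal_from_90ci(low, high):
    """Fit a lognormal to a stated 90% interval (5th/95th percentiles)."""
    if not (0 < low < high):
        raise ValueError("no lognormal can produce this interval")
    z = 1.645  # 95th percentile of the standard normal
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return mu, sigma  # implied median is exp(mu) = sqrt(low * high)

# Example with the Founders Pledge range for CfRN quoted further down:
mu, sigma = lognormal_from_90ci(0.02, 0.72)
print(math.exp(mu))  # ~0.12, i.e. the implied median matches their central figure
```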
Thanks for the comment!
I think I’ll just leave the title for now, because it’s confusing as it is and I’m not sure it’s worth redoing/rewriting the analysis. I should probably have just called it “How to compare the relative effectiveness of development vs. climate interventions”. I’ll add a note at the beginning of the post linking to your Guesstimate and saying that you found different results.
I can’t quite follow your analysis from the screenshots (perhaps you could link the models and the assumptions for others). For instance, I’m not sure why the input for the value of money going to Americans vs. GiveDirectly recipients is 23 to 350.
But generally, I agree that Monte Carlo simulations and attention to the distributions can be valuable for better error propagation. Also, I was probably being unclear, but my analysis was not supposed to give confidence intervals but rather a best guess plus extreme scenarios.
For instance, in the cell for cost per tonne of CO2 averted in the pessimistic scenario, I intentionally picked the extreme value from the Founders Pledge analysis, $0.02, and not their mean value (from the cell note: “A donation to CfRN will avert a tonne of CO2e for $0.12, with a plausible range of $0.02 - $0.72.” https://docs.google.com/spreadsheets/d/12lwvxlWLjwuSuXiciFvnBF2bkfcCkrusdqqT37_QWac/edit#gid=1267972809&range=E35 ).
Echoing what Greg Lewis said about hobbyists modelling the COVID-19 pandemic being perhaps not super productive, I’m also not sure how productive further empirical work such as this is on the EA Forum (I don’t even know how many hits the forum gets generally, how many this post in particular got, how many climate modellers read it, etc.). I think an org with more research capacity might be better suited to do further analysis on this. Or perhaps one could commission researchers with a background in climate modelling to do it (e.g. the author of this paper might be really well qualified: https://www.sciencedirect.com/science/article/pii/S014098831930218X ).
Hey Hauke,
That makes sense.
I do think more EA work on this topic would be useful for someone to do, since I don’t think it’s clear from a near-termist perspective that global health is more effective than climate change.
On Guesstimate: there was an error and I was unable to save my model. If someone is looking to reproduce this, though, I’d suggest they just make their own.
On the value of money to Americans vs. GiveDirectly recipients, my personal estimate was a lower ratio, because I think we should take into account some flow-through effects, which I think cause convergence. I don’t think values like 10,000x are plausible for the all-things-considered tradeoff (even though the ratio could be 10,000x if we’re just considering the welfare of two individuals). More here: http://reflectivedisequilibrium.blogspot.com/2014/01/what-portion-of-boost-to-global-gdp.html
I’m still a bit unclear on how useful these extreme scenarios are, given Rob’s point.