Founders Pledge saying they can offset a ton of CO2 for $0.10–$1 is like a malaria net charity saying they can save a life for $5.
Both are off by at least an order of magnitude. You should expect to spend at least $100/ton for robust, verifiable offsets. That brings your offset cost to $3,500, not $35.
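To spell out the arithmetic behind those figures (the ~35 tCO2e footprint is my inference from the $35 and $3,500 numbers, not something stated explicitly):

```python
# Quick sanity check of the arithmetic above. The 35 tCO2e footprint is
# inferred from the $35 vs. $3,500 figures, not taken from the post.
footprint_tco2e = 35            # assumed annual footprint implied by the figures
claimed_cost_per_ton = 1.0      # top of FP's claimed $0.10-$1/tCO2e range
robust_cost_per_ton = 100.0     # rough floor for robust, verifiable offsets

print(footprint_tco2e * claimed_cost_per_ton)  # 35.0   -> "$35"
print(footprint_tco2e * robust_cost_per_ton)   # 3500.0 -> "$3,500"
```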
Thanks—maybe I’m giving them too much trust.
In their impact report they say “We’ve granted out $14.89m in total and we estimate that it will avert 102m tonnes in CO2-equivalent emissions.”
I would not give much credence to that from a non-EA-aligned org, but I've been giving them decent credence with regard to counterfactual impact reporting since they're EA-aligned.
You're saying I should treat their reports less like GiveWell reports and more like I would treat a random non-EA charity. Any particular arguments for why? Or is it just that you wouldn't take the prior of assuming that they are at GiveWell's evaluation quality? (Or maybe you don't trust GiveWell on this either.)
GiveWell has dozens of researchers putting tens of thousands of hours of work into coming up with better models and variable estimates. Their most critical inputs are largely determined by RCTs, and they are constantly working to get better data. A lot of their uncertainty comes from differences in moral weights in saving vs. improving lives.
Founders Pledge builds models using Monte Carlo simulations on complex theory-of-change models whose variable ranges are essentially made up, because they are largely unknowable. It's mostly Johannes, with a few assistant researchers, putting a few hundred hours into model choice and parameter selection, with many more hours spent on writing and coding for their Monte Carlo analysis (which GiveWell doesn't have to do, because they have much simpler impact models in spreadsheets).

FP has previously made cost-effectiveness claims of roughly $1/tCO2e based on models like this, which were amplified in MacAskill's WWOTF. That model is wildly optimistic. FP now disowns that particular model, but won't take it down or publicly list it as a mistake. They no longer publish their individual intervention CEAs publicly, though they may resume soon.

My biggest criticism is that when making these complex theory-of-change models, the structure of the model often matters more than the variable inputs. While FP tries to pick "conservative" variable value assumptions (they rarely are), the model structure is wildly optimistic for their chosen interventions (generally technology innovation policy).

For model feedback, FP doesn't have a good culture or process in place for dealing with criticism, a complaint I've heard from several people in the EA climate space. I think FP's uncertainty work has promise as a tool, but I think the recommendations they come up with are largely wrong given their chosen model structure and inputs.
GiveWell’s recommendations in the health space are of vastly higher quality and certainty than FP’s in the climate space.
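To make the "structure matters more than inputs" point concrete, here is a deliberately oversimplified Monte Carlo sketch. It is my own toy model, not FP's, and every range in it is invented for illustration: a multiplicative advocacy-style theory of change with seemingly modest per-stage ranges still lands under $1/tCO2e, and adding or dropping a single structural step (here, a counterfactual discount for what other actors would have achieved anyway) shifts the bottom line several-fold, comparable to or larger than plausible tweaks to any single input range.

```python
# Toy Monte Carlo cost-effectiveness model. This is NOT Founders Pledge's
# model; the structure and every number are invented for illustration only.
import random

N = 100_000
grant_usd = 1_000_000  # hypothetical grant size


def simulate(counterfactual_discount: bool) -> float:
    """Return mean tCO2e averted per run under one structural choice."""
    results = []
    for _ in range(N):
        p_policy_adopted    = random.uniform(0.05, 0.30)  # advocacy succeeds
        p_grant_was_pivotal = random.uniform(0.10, 0.50)  # the grant mattered
        emissions_at_stake  = random.uniform(100e6, 1e9)  # tCO2e affected by policy
        effect_size         = random.uniform(0.01, 0.10)  # fraction actually cut
        averted = (p_policy_adopted * p_grant_was_pivotal
                   * emissions_at_stake * effect_size)
        if counterfactual_discount:
            # Structural choice: other funders or governments might have
            # driven much of the same change anyway.
            averted *= random.uniform(0.05, 0.30)
        results.append(averted)
    return sum(results) / N


for discount in (False, True):
    mean_averted = simulate(discount)
    print(f"counterfactual discount={discount}: "
          f"~${grant_usd / mean_averted:.2f}/tCO2e")
```

In this toy version, the no-discount structure comes out around $0.60/tCO2e and the discounted structure several times higher, even though every input range was left untouched.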
Thanks, Luke!
Uncertainty
As we frequently point out, one should take the estimates with a grain of salt and consider the reported uncertainty (e.g. the old estimate had a range of something like 0.1 USD/tCO2e to 10 USD/tCO2e), and, IIRC, the impact report also notes that these estimates are extremely uncertain and gives wide ranges.
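For what it's worth, a 0.1 to 10 USD/tCO2e range spans two orders of magnitude, so the headline figure carries much less information than a single number suggests. A minimal sketch, assuming (purely for illustration; this is not FP's stated distribution) that the range is a 90% interval of a lognormal:

```python
# Illustrative only: treat the reported 0.1-10 USD/tCO2e range as a 90%
# interval of a lognormal distribution (an assumption, not FP's stated model).
import math
import random

low, high = 0.1, 10.0                      # USD per tCO2e
mu = (math.log(low) + math.log(high)) / 2  # log-space midpoint
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% interval -> z = 1.645

samples = [random.lognormvariate(mu, sigma) for _ in range(200_000)]
print("median ~", sorted(samples)[len(samples) // 2])  # ~1 USD/tCO2e
print("mean   ~", sum(samples) / len(samples))         # noticeably above the median
```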
As we discussed in our recent methodology-focused update, we think large uncertainty is unavoidable when operating in climate, a global, decadal challenge where the most effective interventions are inherently non-RCT-able. (FWIW, I would think similarly about Global Health and Development, which is why I think the certainty focus of (historical) GiveWell can be harmful when the goal is to risk-neutrally identify the best interventions.) So our main focus is on getting the relative comparisons right, which is what is decision-relevant.
Offsets provide no information about the cost-effectiveness of risk-neutral philanthropy
That said, using offsets to make the case that our estimates must be overly optimistic seems mistaken.
Offsets solve a different problem: high-certainty (uncertainty-avoidant) emissions reductions from direct action. This cannot be very cheap.
It is very plausible that risk-neutrality and leveraging mechanisms such as advocacy, trajectory changes, and others provide a large multiplier over offsets, at the cost of more uncertainty (though that uncertainty cuts both ways: uncertain things can also turn out to be even better than their expected value suggests). FWIW, Giving Green, who used to be critical of this claim, has also converged on this position, now emphasizing philanthropic bets over offsets quite explicitly and much more confidently than they used to.
Just as OP thinks that their risk-neutral global health and development work dominates GiveWell charities despite being more uncertain about their own work, it is entirely plausible that credible offsets cost > USD 100/tCO2e while good philanthropic opportunities dominate this by ~100x or more. In other words, being uncertainty-avoidant has real costs in terms of expected impact, so offsets do not provide a credible benchmark from which to infer whether estimates for risk-neutral philanthropy are off.
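A stylized expected-value comparison, with all numbers invented to illustrate the logic rather than to estimate any real grant:

```python
# Stylized comparison of uncertainty-avoidant offsets vs. a risk-neutral
# philanthropic bet. All numbers are invented for illustration.
budget = 10_000  # USD

# Option A: credible offsets, near-certain but expensive.
offset_cost_per_ton = 100.0
offsets_tons = budget / offset_cost_per_ton  # 100 tCO2e, essentially guaranteed

# Option B: a policy/innovation bet that usually achieves nothing,
# but averts a lot when it works.
p_success = 0.10
tons_if_success = 100_000  # per $10k of funding, conditional on success
expected_tons = p_success * tons_if_success  # 10,000 tCO2e in expectation

print(f"Offsets:           {offsets_tons:,.0f} tCO2e with high certainty")
print(f"Philanthropic bet: {expected_tons:,.0f} tCO2e in expectation "
      f"(~{expected_tons / offsets_tons:.0f}x), but with a {1 - p_success:.0%} "
      f"chance of roughly zero")
```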
Should we encourage offsetting?
I should also note that I am quite critical of offsetting as a frame; one needs to be quite careful not to create or amplify a frame of very limited moral ambition (I think you did a good job in your post, though it still goes further in promoting offsetting than I would). I generally try to frame donating to climate charities as a form of political action rather than offsetting.