Thanks for posting this!
I think we run into problems when we try to transfer cost-effectiveness analyses that were sound enough to answer “where should I donate?” to the harder question of “how much do I need to give to offset?” As you point out, assigning ~100% of the counterfactual good to the donor is . . . at a minimum, generous.
When we are asking where to donate, that often isn’t a major problem. For example, if my goal is to save lives, I can often assume that errors in assigning “moral credit” will be roughly equal across (at least) GiveWell-style charities like AMF. Because the error term is similar for all giving opportunities, it shouldn’t change their relative ranking unless they are fairly close, so we can usually ignore it.
But offsetting poses a different question: we are looking to claim moral credit for a certain quantum of good to counterbalance the not-good we are producing elsewhere. That means we need an absolute measure (or at least an estimate) of that quantum. So if we want to find the minimum donation necessary to offset, we must make judgments about how the available moral credit is distributed.
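To make that concrete (my framing, not anything from the post): suppose \(e\) is the good produced per dollar if the donor claims 100% of the credit, and \(s\) is the share of credit the donor can legitimately claim. Then offsetting a harm of size \(H\) requires a donation of roughly

\[
D = \frac{H}{e \cdot s}.
\]

Choosing where to donate only requires ranking charities by \(e \cdot s\), and if \(s\) is similar everywhere it drops out of the comparison; computing \(D\) forces us to pin down \(s\) itself.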
Some people might also want a confidence interval for their offsetting action, e.g., “I want to be 99% confident that I am giving enough to actually offset my production of not-goods.” This is likely impossible with some interventions. For instance, if I think there is a greater than 1% chance that the critics are correct that corporate campaigns are net-negative in the long run, then my 99% confidence interval will always include negative values.
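To spell out why: let \(X\) be the donation’s net impact and \(L\) a one-sided 99% lower bound, i.e. \(P(X \ge L) \ge 0.99\). If \(P(X < 0) > 0.01\), then

\[
P(X \ge 0) < 0.99,
\]

so \(L\) cannot be zero or positive, and every 99% interval extends below zero.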
Someone who wants confidence in actual offset, rather than offset in expectation, would logically seek “safer” donation opportunities: ones with more certain impact and a narrower spread of potential impacts. Perhaps a bundle of interventions could achieve the necessary confidence (such as 3 programs with an 80% chance of success each and no appreciable risk of being net harmful, or a larger number at lower success probabilities); a quick sketch of that arithmetic follows.
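A minimal sketch of that bundle arithmetic, under assumptions I’m adding here: each program would fully offset on its own if it succeeds, the programs succeed or fail independently, and none risks being net harmful.

```python
def bundle_confidence(p_success: float, n_programs: int) -> float:
    """Probability that at least one of n independent programs succeeds."""
    return 1 - (1 - p_success) ** n_programs

def programs_needed(p_success: float, target: float = 0.99) -> int:
    """Smallest bundle size whose success probability reaches the target."""
    n = 1
    while bundle_confidence(p_success, n) < target:
        n += 1
    return n

print(bundle_confidence(0.80, 3))  # 0.992 -> three 80% programs clear the 99% bar
print(programs_needed(0.50))       # 7 -> 50% programs need a bundle of seven
```

With those assumptions, three independent 80% programs give 99.2% confidence that at least one offset lands, while lower-probability programs require a correspondingly larger bundle.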