Thanks for sharing this analysis (and the broader project)!
Given the lengthy section on model limitations, I would have liked to see a discussion of sensitivity to assumptions. The one that stood out to me was the estimate for the value of a GWWC Pledge, which serves as a basis for all your calculations. While it certainly seems reasonable to use their estimate as a baseline, there’s inherently a lot of uncertainty in estimating a multi-decade donation stream and adjusting for counterfactuals, time discounting, and attrition.
FWIW, I’m pretty dubious about the treatment of plan changes scored 10. The model implies each of those plan changes is worth >$500k (again, adjusted for counterfactuals, time discounting, and attrition), which is an extremely high hurdle to meet. If a university student tells me they’re going to “become a major advocate of effective causes” (sufficient for a score of 10), I wouldn’t think that has the same expected value as a half million dollars given to AMF today.
Hi Jon,
> I would have liked to see a discussion of sensitivity to assumptions.
I agree—I think, however, you can justify the cost-effectiveness of 80k in multiple, semi-independent ways, which help to make the argument more robust:
https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/
> FWIW, I’m pretty dubious about the treatment of plan changes scored 10. The model implies each of those plan changes is worth >$500k... If a university student tells me they’re going to “become a major advocate of effective causes” (sufficient for a score of 10), I wouldn’t think that has the same expected value as a half million dollars given to AMF today.
Yes, we only weight them at 10, rather than 40. However, here are some reasons the 500k figure might not be out of the question.
First, we care about the mean value, not the median or a threshold. Although some of the 10s will probably have less impact than 500k to AMF now, some of them could have far more. For instance, there’s reason to think GPP might have had impact equivalent to over $100m given to AMF. https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/#global-priorities-project
You only need a small number of outliers to pull up the mean a great deal (see the toy calculation below).
Less extremely, some of the 10s are likely to donate millions to charity within the next few years.
Second, most of the 10s are focused on x-risk and meta-charity. Personally, I think efforts in these causes are likely at least 5-fold more cost-effective than AMF, so they’d only need to donate 100k to have as much impact as 500k to AMF.
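As a rough sketch of the outlier point, with purely hypothetical numbers (neither the per-change values nor the batch size come from the model): a single GPP-sized outcome in a batch of fifty score-10 plan changes is enough to push the average past $500k.

```python
# Toy illustration only: none of these figures come from the 80k model.
modest = [10_000] * 49        # assume 49 score-10 changes worth ~$10k each
outlier = [25_000_000]        # assume 1 outlier outcome worth ~$25m
values = modest + outlier

mean_value = sum(values) / len(values)
print(f"Mean value per score-10 plan change: ${mean_value:,.0f}")
# -> Mean value per score-10 plan change: $509,800
```

Note that the median of this toy batch is still $10k, which is why the mean-versus-threshold distinction matters here.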
Fair point about outliers driving the mean. Does suggest that a cost-effectiveness estimate should just try to quantify those outliers directly instead of going through a translation. E.g. if “some of the 10s are likely to donate millions to charity within the next few years”, just estimate the value of that rather than assuming that giving will on average equal 10x GWWC’s estimate for the value of a pledge.
> Does suggest that a cost-effectiveness estimate should just try to quantify those outliers directly instead of going through a translation.
Yes, that’s the main way I think about our impact. But I think you can also justify it on the basis of getting lots of people to make moderate changes, so I think it’s useful to consider both approaches.