To make an obvious point, as relevant information (including about new charities/causes) will presumably improve a lot over the next 5 years, there seems a case for updating your recommendation annually rather than the donors committing upfront to donating 5 years’ worth to particular charities (if that was the idea).
This depends to some extent on whether a 5-year commitment is essential for the programmes being donated to. If it is, a middle way might be to commit upfront to donating for 5 years subject to the programmes achieving XYZ goals, to be independently assessed each year.
Another obvious point (which you mention of course): the extremely wide range of the TaRL and salt iodization cost-effectiveness figures, from far below 1 (Founders Pledge estimate) to far above, would give me concerns as a donor that these are poorly understood interventions.
Multiyear commitments have a particularly high value to charities, especially when they can be used for operational support. They allow charities to take more risks, act more directly in line with their mission, and spend less time on report writing. They are much rarer in the sector.
I think the donors do indeed intend to commit for 5 years, for the reason tomwein invokes. But of course if new evidence suggests an intervention really isn’t having the impact that we expected, or something else that seems much more promising comes along, presumably they could still revisit their commitment on an annual basis.
Regarding TaRL, the intervention has been studied extensively. The main uncertainty is whether and to what extent gains in test scores translate to long-term outcomes like higher income. But since the donors also care about improvements in learning outcomes per se, there is a bit of a hedge here. It just isn’t captured in the cost-effectiveness analyses, which only incorporate effects on income.
One thing to note about the bounds of the FP cost-effectiveness estimate is that they aren’t equivalent to a 95% confidence interval. Instead, they’ve been calculated by multiplying through the most extreme plausible values for each variable in our cost-effectiveness calculation. This means they correspond to an absolute, unimaginably bad worst-case scenario and an absolute, unfathomably good best-case scenario. We understand that this is far from ideal: first, cost-effectiveness estimates that span 6+ orders of magnitude aren’t that helpful for cause prioritization; second, they probably overstate our actual uncertainty.
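To see why multiplying through the extremes overstates uncertainty, here is a minimal sketch with entirely made-up multipliers (these are illustrative numbers, not Founders Pledge's actual model or figures). When every variable is set to its worst (or best) plausible value at once, the resulting range is far wider than a 95% interval from independent draws, because the extremes almost never line up simultaneously:

```python
import random

# Hypothetical model: cost-effectiveness is a product of uncertain
# multipliers, each with a (low, high) plausible range. All numbers
# below are invented for illustration.
multipliers = [
    (0.05, 0.5),   # test-score gain (SD) per dollar
    (0.02, 0.4),   # income gain per SD of test scores
    (0.1, 1.0),    # fraction of the effect that persists long-term
    (5, 40),       # years over which the income gain accrues
]

def product(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

# "Multiply through the extremes": every variable takes its worst
# (or best) plausible value simultaneously.
worst = product(lo for lo, hi in multipliers)
best = product(hi for lo, hi in multipliers)

# Monte Carlo with independent uniform draws: the joint extremes are
# vanishingly unlikely, so the central 95% of outcomes spans a much
# narrower range than the extreme bounds.
random.seed(0)
samples = sorted(
    product(random.uniform(lo, hi) for lo, hi in multipliers)
    for _ in range(10_000)
)
mc_low, mc_high = samples[250], samples[9750]

print(f"extreme-bounds span: {best / worst:,.0f}x")
print(f"95% interval span:   {mc_high / mc_low:,.0f}x")
```

The gap between the two spans is the sense in which extreme-value bounds "overstate" uncertainty: they describe a scenario where every pessimistic (or optimistic) assumption holds at the same time.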
On TaRL specifically, the effects seem really good—whether or not we can get governments to implement TaRL effectively seems to be where most of the uncertainty lies.
@smclare Thanks for giving some background on the Founders Pledge cost-effectiveness scenarios. For TaRL, I’m surprised that you describe the optimistic scenario as the unfathomably good best-case scenario. Even in that scenario, impacts are assumed to last only 20 years, and the impact of test score improvements on earnings does not use the most optimistic figures mentioned in the Founders Pledge education report. It seems fathomable that impacts could last a whole career (say 40 years). As you can see from my cost-effectiveness estimates for TaRL, my unfathomably good best-case scenario is significantly more optimistic than Founders Pledge’s (I included in the cost-effectiveness spreadsheet a worksheet that uses the Founders Pledge worksheet as a starting point, but with my own scenarios). And in both cases, we only include the impact on income. It seems quite plausible that education has impacts beyond income that aren’t taken into account.
Looks good from a quick read-through.