As part of its Evaluating the Evaluators project, GWWC conducted an in-depth evaluation of the Animal Welfare Fund (AWF). The article enumerates a number of advantages of the fund:
Thanks for the update!

I think the advantages you listed make AWF great, but I am currently planning to donate to The Humane League (THL) given AWF’s apparent lack of cost-effectiveness analyses[1]. From Giving What We Can’s evaluation of AWF (emphasis mine):
Fourth, we saw some references to the numbers of animals that could be affected if an intervention went well, but we didn’t see any attempt at back-of-the-envelope calculations to get a rough sense of the cost-effectiveness of a grant, nor any direct comparison across grants to calibrate scoring. We appreciate it won’t be possible to come up with useful quantitative estimates and comparisons in all or even most cases, especially given the limited time fund managers have to review applications, but we think there were cases among the grants we reviewed where this was possible (both quantifying and comparing to a benchmark) — including one case in which the applicant provided a cost-effectiveness analysis themselves, but this wasn’t then considered by the PI in their main reasoning for the grant.
I estimated that corporate campaigns for chicken welfare, such as the ones supported by THL, have a cost-effectiveness of 15.0 DALY/$, i.e. 1.51 k times as cost-effective as GiveWell’s top charities.
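For reference, these two figures together imply a benchmark of roughly

$$\frac{15.0\ \text{DALY}/\$}{1.51 \times 10^{3}} \approx 9.9 \times 10^{-3}\ \text{DALY}/\$,$$

i.e. about $101 per DALY averted for GiveWell’s top charities (a back-calculation from the two numbers above, not a figure from the original analysis).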
Hey Vasco,

Yes, it’s right that we don’t conduct CEAs in all of our evaluations, but they are part of our analysis for some of our grant investigations. GWWC only looked at 10 grant evaluations, so it’s possible they didn’t come across those where we did model a BOTEC CEA. With the upcoming increase in the fund’s capacity, we plan to invest more in creating BOTECs for more evaluations. We are hoping to be reevaluated by GWWC so the evaluation reflects the changes we have made and are planning to make.
In the past, we tended to do CEAs more often if: a) the project was relatively well-suited to a back-of-the-envelope calculation, and b) a back-of-the-envelope calculation seemed decision-relevant. At that time, a) and b) seemed true in a minority of cases, maybe ~10%-20% of applications depending on the round, to give a rough sense. However, note that there tends to be some difference between projects in areas or by groups we have already evaluated versus projects/groups/areas that are newer to us. I’d say newer projects/groups/areas are more likely to receive a back-of-the-envelope-style estimate.
Even in evaluations where we didn’t explicitly model a CEA, we tended to look at factors that help us judge marginal cost-effectiveness, such as: the scale of the problem and the potential number of animals affected; whether the work is happening in a country with high production of the target species; how neglected the problem is (to get at the counterfactual impact); and the goals of the grant and whether we think the applicant is likely to achieve them given their track record or the strength of their plan. We also use and reference more in-depth independent CEAs, like the ones on cage-free corporate outreach, shrimp stunning, ballot initiatives, or fish stunning, while noting that they have limitations and we do not take them at face value.
However, since then, we’ve started conducting BOTEC CEAs more frequently and using benchmarking in more of our grant evaluations. For example, we sometimes use this BOTEC template (modified for our purposes from a BOTEC that accompanied RP’s Welfare Range Estimates) and compare the outcomes to cage-free corporate campaigns.
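As a minimal sketch of what a BOTEC-plus-benchmark comparison of this kind might look like (every number below is a hypothetical placeholder, not a value from AWF’s actual template):

```python
# Sketch of a BOTEC comparing a grant to a cage-free benchmark.
# All parameter values are hypothetical placeholders.

grant_cost = 100_000          # USD requested
p_success = 0.3               # subjective probability the project succeeds
animals_affected = 2_000_000  # animals covered if it succeeds
welfare_gain_years = 0.5      # avg welfare improvement per animal, in welfare-adjusted years

# Expected welfare-adjusted years improved per dollar
grant_effectiveness = p_success * animals_affected * welfare_gain_years / grant_cost

# Benchmark: cage-free corporate campaigns (placeholder figure)
cage_free_benchmark = 10.0    # welfare-adjusted years per dollar

print(f"Grant:     {grant_effectiveness:.2f} welfare-years per dollar")
print(f"Benchmark: {cage_free_benchmark:.2f} welfare-years per dollar")
print(f"Ratio vs cage-free: {grant_effectiveness / cage_free_benchmark:.2f}x")
```

The final ratio is what the benchmarking step in the text amounts to: a grant scoring far below the cage-free figure would need a stronger qualitative case.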
For harder-to-quantify grants, like movement or capacity building, we would also occasionally model expected outcomes in numerical terms and ask whether the outcome is something we would pay x amount for (the expected cost per unit).
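For instance, the unit-cost framing can be as simple as the following (the outcome unit and the numbers are purely illustrative):

```python
# Sketch of the expected-cost-per-unit check for hard-to-quantify grants.
# Numbers and the outcome unit are hypothetical illustrations.

grant_cost = 50_000      # USD
expected_outcomes = 5    # e.g. expected new full-time advocates trained

cost_per_unit = grant_cost / expected_outcomes
print(f"Expected cost per outcome: ${cost_per_unit:,.0f}")
# The question then becomes: would we pay ~$10,000 per new advocate?
```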
We also have a score calibration guide that we use when scoring grants, to make scores comparable across grants.
We do not put that much weight on applicants’ CEAs, as they are impossible to compare to CEAs that use different methodologies and are very sensitive to assumptions that we often cannot verify.
I hope that helps you understand our methodology. Let me know if you have any questions.