Great analysis, Isaac! I worry the Animal Welfare Fund (AWF) has similar problems (see below), but they are way less transparent than ACE about their evaluations, and therefore much less scrutable. Instead of mostly deferring to AWF, I would rather have donors look over ACE’s evaluations, discuss their findings with others, and eventually publish them online, even if they spend much less time on these activities than you did.
AWF only runs cost-effectiveness analyses (CEAs) for a minority of applications. According to a comment by Karolina Sarek, AWF’s chair, on June 28 (this year):
In the past, we tended to do CEAs more often if: a) The project is relatively well-suited to a back-of-the-envelope calculation b) A back-of-the-envelope calculation seems decision-relevant. At that time, a) and b) seem true in a minority of cases, maybe ~10%-20% of applications depending on the round, to give some rough sense. However, note that there tends to be some difference between projects in areas or by groups we have already evaluated versus projects/groups/areas that are newer to us. I’d say newer projects/groups/areas are more likely to receive a back-of-the-envelope style estimate.
Comparisons across grants also seem to be lacking. From Giving What We Can’s (GWWC’s) evaluation of AWF in November 2023 (emphasis mine):
Fourth, we saw some references to the numbers of animals that could be affected if an intervention went well, but we didn’t see any attempt at back-of-the-envelope calculations to get a rough sense of the cost-effectiveness of a grant, nor any direct comparison across grants to calibrate scoring. We appreciate it won’t be possible to come up with useful quantitative estimates and comparisons in all or even most cases, especially given the limited time fund managers have to review applications, but we think there were cases among the grants we reviewed where this was possible (both quantifying and comparing to a benchmark) — including one case in which the applicant provided a cost-effectiveness analysis themselves, but this wasn’t then considered by the PI in their main reasoning for the grant.
GWWC looked into 10 applications:
Of the 10 grant investigation reports we reviewed, three were provided by the AWF upon our general request for representative grants; two were selected by us from their grants database; two were selected by the AWF after we provided specifications; and three were selected by the AWF based on our request for grant applications by organisations that applied to both the AWF and ACE’s MG.
Karolina also said on June 28 that AWF has improved their methodology since GWWC’s evaluation:
However, since then, we’ve started conducting BOTEC CEA more frequently and using benchmarking in more of our grant evaluations. For example, we sometimes use this BOTEC template and compare the outcomes to cage-free corporate campaigns (modified for our purposes from a BOTEC that accompanied RP’s Welfare Range Estimates).
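For readers unfamiliar with this kind of estimate, a BOTEC comparing a grant against a cage-free benchmark might look roughly like the sketch below. All figures and parameter names are hypothetical illustrations; AWF's actual template and numbers are not public.

```python
# Hypothetical BOTEC: expected welfare-adjusted animal-years improved per
# dollar, for a grant and for a cage-free-style benchmark. Every number
# here is made up for illustration.

def cost_effectiveness(animals_per_year, years_of_impact,
                       welfare_gain_per_animal, p_success, grant_cost):
    """Expected welfare-adjusted animal-years improved per dollar."""
    expected_impact = (animals_per_year * years_of_impact
                       * welfare_gain_per_animal * p_success)
    return expected_impact / grant_cost

# Hypothetical grant under evaluation:
grant = cost_effectiveness(
    animals_per_year=200_000,
    years_of_impact=3,
    welfare_gain_per_animal=0.2,  # fraction of suffering averted (guess)
    p_success=0.4,
    grant_cost=100_000,
)

# Hypothetical cage-free corporate-campaign benchmark:
benchmark = cost_effectiveness(
    animals_per_year=1_000_000,
    years_of_impact=5,
    welfare_gain_per_animal=0.15,
    p_success=0.7,
    grant_cost=500_000,
)

print(f"Grant: {grant:.2f} welfare-years/$, benchmark: {benchmark:.2f}")
print("above benchmark" if grant >= benchmark else "below benchmark")
```

The point of such a sketch is not precision but comparability: even crude inputs let a fund manager see whether a grant plausibly clears the benchmark or falls an order of magnitude short.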
I do not doubt AWF has taken the above steps, but I have no way to verify them. I think donating to ACE over AWF is a good way of incentivising transparency, which ultimately can lead to more impact.
Hey Vasco! I agree that AWF should be more transparent, and since I started working on it full-time, we have more capacity for that, and we are planning to communicate about our work more proactively.
In light of that, we just published a post summarizing how 2024 went, what changes we recently introduced, and what we are planning. We touched on updates to our evaluation process as well. Here is the relevant section from that post:
“Grant investigations:

Updated grant evaluation framework: We’ve updated our systematic review process, enabling us to evaluate every application using standardized templates that vary based on the required depth of investigation. This framework ensures a thorough assessment of key factors while maintaining flexibility for grant-specific considerations. For example, for deep evaluations (which are the vast majority of all evaluations), key evaluation areas include assessment of the project’s Theory of Change, scale of counterfactual impact, likelihood of success, back-of-the-envelope cost-effectiveness and benchmarking, and the expected value of receiving funding. It also includes forecasting grant outcomes. You can read more about our process in the FAQ.

Introduced new decision procedures for marginal grants: We introduced an additional step in our evaluation that enables us to make better decisions about grants that are just below or just above our funding bar. Since AWF gives grants on a rolling basis rather than in rounds, it is important to have a process for this to ensure decisions are consistent.”
We also slightly updated our website and added a new question to the FAQ—I’m copying that below:
“How Does the EA Animal Welfare Fund Make Grant Decisions?
Our grantmaking process consists of the following stages:
Stage 1: Application Processing. When we receive an application, it’s entered into our project management system along with the complete application details, history of previous applications from the applicant, evaluation rubrics, investigator assignments, and other relevant documentation.
Stage 2: Initial Screening. We conduct a quick scope check to ensure applications align with our fund’s mission and show potential for high impact. About 30% of applications are filtered out at this stage, typically because they fall outside our scope or don’t demonstrate sufficient impact potential.
Stage 3: Selecting Primary Grant Investigator and Depth of the Evaluation. For applications that pass the initial screening, we assign investigators who are most suitable for a given evaluation. Based on various heuristics, such as the size of the grant, uncertainty, and potential risk, the Fund’s Chair also determines the depth of the evaluation.
Stage 4: In-Depth Evaluation. Every grant application undergoes a systematic review. For each level of depth of investigation required, AWF has an evaluation template that fund managers follow. The framework balances ensuring that all key factors have been considered and that evaluations are consistent, while leaving space for additional, grant-specific crucial considerations. For deep evaluations (which are the vast majority of all evaluations), the primary investigator typically examines:
Theory of Change (ToC)—examining how activities translate into improvements for animals and whether the evidence supports it
Scale of counterfactual impact—assessing the problem’s scale, neglectedness, and strategic importance
Likelihood of success—evaluating track record, team competence, and concrete plans
Cost-effectiveness and benchmarking—conducting calculations to estimate impact per dollar and compare it to relevant benchmarks
Value of funding—analyzing counterfactuals and long-term sustainability
Forecasting—estimating the probability that the project will succeed or fail, and for what reasons (validity of the ToC or performance in achieving planned outcomes)
In the case of evaluations that require the maximum level of depth, a secondary investigator critically reviews the completed write-up, raises additional questions and concerns, and provides alternative perspectives or recommendations.
Stage 5: Collective Review and Voting. After the evaluation, each application undergoes a thorough collective assessment. The Fund Chair and at least two Fund Managers review the analysis. All Fund Managers without conflicts of interest can contribute additional insights and discuss key questions through dedicated channels. Finally, each Fund Manager assigns a score, which helps us systematically compare the most promising grants.
Stage 6: Final Recommendation. Looking at the average score, the Fund Chair approves grants that are clearly above our funding bar and rejects those clearly below it. For grants near our funding threshold, we conduct another step where all fund managers compare those marginal grants against each other to select the strongest proposals.
Once decisions are finalized, approved grants move to our grants team for contracting and reporting setup.
Throughout this process, we maintain detailed documentation and apply consistent standards to ensure we select the most promising opportunities to help animals most effectively.”
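The scoring and threshold logic of Stages 5 and 6 could be sketched roughly as follows. The score scale, funding bar, and width of the "marginal" band are all hypothetical, since AWF has not published its actual rubric.

```python
# Rough sketch of the Stage 5-6 decision procedure described in the FAQ.
# FUNDING_BAR and MARGIN are hypothetical illustrations, not AWF's values.
from statistics import mean

FUNDING_BAR = 3.0  # hypothetical average-score threshold (1-5 scale)
MARGIN = 0.5       # hypothetical band defining "marginal" grants

def decide(scores):
    """Classify a grant from the fund managers' individual scores."""
    avg = mean(scores)
    if avg >= FUNDING_BAR + MARGIN:
        return "approve"   # clearly above the funding bar
    if avg <= FUNDING_BAR - MARGIN:
        return "reject"    # clearly below the funding bar
    return "marginal"      # compared head-to-head against other marginal grants

print(decide([4.5, 4.0, 4.2]))  # approve
print(decide([2.0, 1.5, 2.5]))  # reject
print(decide([3.0, 3.2, 2.9]))  # marginal
```

The explicit "marginal" band matters for a rolling-basis fund: near-bar grants arriving at different times get compared against each other rather than decided one at a time, which is what keeps decisions consistent across the year.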
Thanks, Karolina! Great updates.