As noted in GWWC’s report, our reasoning for recommending ÇHKD is that we think they’re very plausibly competitive with our other recommended charities, such as Sinergia. Sinergia’s CEA rested on more high-uncertainty assumptions than ÇHKD’s did, and their CEA covered a smaller percentage of their work. We think it’s reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia). We also think ÇHKD may have more potential for increased cost-effectiveness in the future, based on their recent shift to focusing on winning commitments from larger retailers.
There are a few things we’d like to note when it comes to SWP and ALI:
They were evaluated in different years (SWP in 2023 and ALI in 2024) with different methodologies for assessing cost-effectiveness. In 2023, we assessed cost-effectiveness using weighted factor models that consider achievement quantity and quality, whereas in 2024 we switched to back-of-the-envelope calculations of impact per dollar. Because of this, there was no direct comparison between the shrimp stunning programs at SWP and ALI. However, the next time we evaluate SWP we expect to create an impact per dollar estimate, in which case the estimates you’ve created (including differentiating slaughter via ice slurry vs asphyxiation) will come in handy.
ALI’s shrimp work only accounts for ~38% of their overall expenditure, and we had strong reasons to recommend them for their other work (policy outreach, the Aquatic Animal Alliance, etc.).
While ACE values plurality, we don’t take a “best-in-class” approach and wouldn’t rule out recommending multiple charities doing similar work.
We think it’s reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia).
Your CEAs suggest the cost-effectiveness of ÇHKD is slightly more uncertain than that of Sinergia, which is in tension with the above. Your upper bound for the cost-effectiveness of:
ÇHKD is 18.1 (= 116/6.4) times your lower bound.
Sinergia is 9.45 (= 2.05*10^3/217) times your lower bound.
In addition, your lower bound for the cost-effectiveness of Sinergia is 1.87 (= 217/116) times your upper bound for the cost-effectiveness of ÇHKD, which again points towards only Sinergia being recommended.
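For concreteness, the ratios above can be reproduced from the CEA bounds with a short script (the numbers are the bounds quoted above, in the same cost-effectiveness units for both charities):

```python
# Cost-effectiveness bounds from the CEAs discussed above (same units for both).
chkd_low, chkd_high = 6.4, 116
sinergia_low, sinergia_high = 217, 2.05e3

# Ratio of upper to lower bound, a rough proxy for how uncertain each CEA is.
print(chkd_high / chkd_low)          # ≈ 18.1
print(sinergia_high / sinergia_low)  # ≈ 9.45

# Sinergia's lower bound relative to ÇHKD's upper bound.
print(sinergia_low / chkd_high)      # ≈ 1.87
```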
their [Sinergia’s] CEA covered a smaller percentage of their work.
I think this can indeed be important. I estimated Sinergia Animal’s meal replacement program in 2023 was 0.107 % as cost-effective as their cage-free campaigns. So I would say that x % of their marginal funding going towards their meal replacement program would decrease their marginal cost-effectiveness by around x %. I think your CEAs should ideally refer to the expected additional funding caused by ACE’s recommendations, not a fraction of the organisations’ past work. GWWC’s evaluation argued for this too if I recall correctly.
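The claim that diverting x % of marginal funding to the meal replacement program reduces marginal cost-effectiveness by around x % follows directly from the 0.107 % figure. A minimal sketch (the 0.107 % estimate is mine, as above):

```python
def relative_marginal_ce(x, program_ce_ratio=0.00107):
    """Marginal cost-effectiveness, with cage-free campaigns normalised to 1,
    when a fraction x of marginal funds goes to a program that is only
    program_ce_ratio times as cost-effective as cage-free campaigns."""
    return (1 - x) + x * program_ce_ratio

# Diverting 20 % of marginal funds cuts marginal cost-effectiveness by ~20 %.
print(relative_marginal_ce(0.2))  # ≈ 0.8002
```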
We think it’s reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia).
Even if the organisation whose cost-effectiveness is more certain is far less cost-effective in expectation? If so, I encourage you to state that your recommendations are risk averse (as GiveWell does with respect to their Top Charities Fund), and to clarify by how much.
While ACE values plurality, we don’t take a “best-in-class” approach and wouldn’t rule out recommending multiple charities doing similar work.
Would you still recommend many organisations doing similar work if you thought their cost-effectiveness differed significantly? I would drop a recommendation whenever the reduction in impact linked to the recommended organisation receiving less funds was exceeded by the increase in impact linked to other organisations receiving more funds. For example, suppose you thought recommendation A was 10 % as cost-effective at the margin as recommendation B, and that dropping recommendation A would decrease the funds of A by 100 k$, increase the funds of B by 50 k$, increase the funds of roughly neutral (non-recommended) organisations by 40 k$, and increase donations to your Movement Grants fund by 10 k$, and you believed this fund was 2 times as cost-effective at the margin as recommendation B. Then dropping recommendation A would be as good as directing 60 k$ (= (-100*0.1 + 50 + 40*0 + 10*2)*10^3) to B, so it would be worth dropping recommendation A.

Have you considered reasoning along these lines to decide on whether to make a recommendation or not? I understand there is lots of uncertainty about comparisons between the marginal cost-effectiveness of organisations, and about how dropping or adding a recommendation would influence the funding of your recommendations. However, you are already making judgements about these implicitly. I think being explicit about your assumptions would help you clarify them, and improve them in the future, thus eventually leading to better decisions.
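The worked example above can be written as a small calculation; all the funding shifts and cost-effectiveness ratios are the hypothetical figures from the paragraph, not estimates of ACE’s actual situation:

```python
def value_of_dropping(funding_changes, ce_ratios):
    """Net impact of dropping a recommendation, in dollars-directed-to-B
    equivalents. funding_changes: change in funds for each destination;
    ce_ratios: marginal cost-effectiveness of each destination relative to B."""
    return sum(change * ratio for change, ratio in zip(funding_changes, ce_ratios))

# Destinations: A, B, roughly neutral organisations, the movement grants fund.
changes = [-100e3, 50e3, 40e3, 10e3]   # dollars
ratios = [0.1, 1.0, 0.0, 2.0]          # cost-effectiveness relative to B
print(value_of_dropping(changes, ratios))  # ≈ 60 k$ directed to B
```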
Hi Vasco, thanks for the questions!

We agree that the majority of our analysis should focus on the future work that would be enabled by ACE’s recommendation. However, forward-looking CEAs are inherently more subjective because they rely on projected metrics rather than actual past results. For this reason, we tend to create backward-looking CEAs and then assess whether there are any reasons to expect diminishing returns over the next two years (the duration of an ACE recommendation). When GWWC shared with us anonymized comments from the experts they consulted on this topic, those comments acknowledged these limitations of forward-looking CEAs. However, we also think there are cases where forward-looking CEAs can be helpful despite these limitations, for example when charities are planning new programs that are not currently funded.
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities, and we do use the framing that you suggest when considering adding the next marginal charity. However, since we are unable to always fully quantify the impact on animals of charities’ work, this is partially based on qualitative arguments and judgments, so our decisions may not always appear consistent with the results of our CEAs.
In general, we quantify uncertainty within our CEA assessments and we also qualitatively assess the risk of each program. Additionally, we screen out applicants whose work is “too” uncertain based on their track record and whether or not the charities themselves are uncertain about where future funding would go. Our Movement Grants program does not have these bars and is willing to fund newer and more exploratory programs. However, we do agree that it’d be worthwhile to be clearer about how we weigh different types of risk in our decision-making, and we’ll consider adding this to our communications.
Thanks for the additional clarifications, Vince!

We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities
Your lower bound for the cost-effectiveness of Sinergia is 1.87 (= 217/116) times your upper bound for the cost-effectiveness of ÇHKD, which again points towards only Sinergia being recommended.
For this reason, we tend to create backward-looking CEAs and then assess whether there are any reasons to expect diminishing returns in the next two years (the duration of an ACE recommendation).
Makes sense. I very much agree that CEAs of past work are valuable. However, I suspect it would be good to be more quantitative/explicit about how they are used to inform your views about the cost-effectiveness of the additional funds caused by your recommendations. For example, you could determine the marginal cost-effectiveness of each organisation by adding the contributions of their programs, determining each contribution by multiplying:
The fraction of additional funds (which would be caused by your recommendation) going to program i. You could ask the organisation about this.
The cost-effectiveness of additional funds going to the program as a fraction of its past cost-effectiveness. You currently consider this qualitatively.
The past cost-effectiveness of the program. You currently consider this quantitatively sometimes via backward-looking CEAs.
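As a sketch of how the three factors above could be combined (all numbers below are hypothetical, purely for illustration):

```python
# Each program: (fraction of additional funds going to it,
#               cost-effectiveness of additional funds as a fraction of past
#               cost-effectiveness, past cost-effectiveness from a
#               backward-looking CEA). All figures are hypothetical.
programs = [
    (0.7, 0.9, 1000),  # e.g. a flagship corporate campaign
    (0.3, 1.0, 50),    # e.g. a newer, less cost-effective program
]

# Marginal cost-effectiveness of the organisation: sum over programs of the
# product of the three factors.
marginal_ce = sum(frac * adjustment * past for frac, adjustment, past in programs)
print(marginal_ce)  # 0.7*0.9*1000 + 0.3*1.0*50 = 645
```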
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities, and we do use the framing that you suggest when considering adding the next marginal charity.
Great!
However, since we are unable to always fully quantify the impact on animals of charities’ work, this is partially based on qualitative arguments and judgments, so our decisions may not always appear consistent with the results of our CEAs.
Have you described such judgements somewhere?