So by default, GFI, Sinergia, Fish Welfare Initiative, Kafessiz and DVF were all excluded from potentially being identified (which seems illogical, as there is no obvious reason to think that charities evaluated in 2022 would be less cost-effective)
Yes they were, as were any charities other than the three we asked ACE to send us more information on (based on where they thought they could make the strongest case by our lights). Among those, we think ACE provided the strongest case for THL's corporate campaigns, and with the additional referral from Open Phil + the existing public reports by FP and RP on corporate campaigns, we think this is enough to justify a recommendation. This is what I meant by there indeed being a measurability bias in our recommendation (which we think is a bullet worth biting here!): we ended up recommending THL in large part because there was sufficient evidence of cost-effectiveness readily and publicly available. We don't have the same evidence for any of these other charities, so they could in principle be as or even more cost-effective than THL (but also much less!), and without the evidence to support their case we don't (yet) feel justified recommending them. We don't have capacity to directly evaluate individual charities ourselves (including THL!), but we continue to host many promising charities on our donation platform, so donors who have time to look into them further can choose to support them.
To put this differently, the choice for us wasn't between "evaluating all of ACE's recommendations" and "evaluating only THL / three charities" (as we didn't have capacity to do any individual charity evaluations). The choice for us was between "only recommending the AWF" and "recommending both the AWF and THL's corporate campaigns", because there happened to already be sufficiently strong evidence/evaluations available for THL's corporate campaigns. For reasons explained earlier, we stand by our decision to prefer the latter over the former, even though that means that many other promising charities don't have a chance to be recommended at this point (but note that this is the case in charity evaluation across cause areas!).
Given you only looked at three of the ACE 2023 recommendations (and you didn't say which ones), I'm wondering how you can make such a strong claim for all of ACE's recommended charities?
Could you clarify which "strong claim for all of ACE's recommended charities" you are referring to? From the executive summary of our report on ACE:
We also expect the gain in impact from giving to any ACE-recommended charity over giving to a random animal welfare charity is much larger than any potential further gain from giving to the AWF or THL's corporate campaigns over any (other) ACE-recommended charity, and note that we haven't evaluated ACE's recommended charities individually, but only ACE's evaluation process.
On a slightly unrelated point: for the referral from OP, I would be curious to hear if you asked them "What is the most cost-effective marginal giving opportunity for farmed animal welfare?" (to which they replied THL's corporate campaigns) or something closer to "Do you think THL is a cost-effective giving opportunity on the margin?"
The latter, because a referral by OP on its own wouldn't have been sufficient for us to make a recommendation (as we haven't evaluated OP): for recommending THL's corporate campaigns, we really relied on these four separate pieces of evidence being available.
I should have said "one of the top 2 marginal giving opportunities", but I still stand by my point that many experienced animal advocates would disagree with this claim, and it's not clear that your charity recommendation work has sufficient depth to challenge that (e.g. you didn't evaluate groups yourself), in which case it's not clear why folks should defer to you over subject-matter experts (e.g. AWF, OP or ACE).
We're not even claiming it is one of the top 2 marginal giving opportunities, just that it is the best recommendation we can make to donors based on the information available to us from evaluators. If you could point us to any alternative well-justified recommendations/evaluators for us to evaluate, we'd be all ears.
And we don't claim people should defer to us directly on charity evaluations (again, we don't currently do these ourselves!). Ultimately, our recommendations (including THL!) are based on the recommendations of the subject-matter experts you reference. The purpose of our evaluations and reports is to help donors make better decisions based on the recommendations and information these experts provide.