What do you do when direct utilitarian computations give unintuitive results, for example if direct utilitarian math said that you should give 80% of the fund to just shrimp welfare? Is your methodology basically just to rank grant opportunities by utilitarian effectiveness (using best guesses) or do you have minimum percentages per species, or have allocation percentages for measurable interventions vs new/experimental interventions, or other ways of allocating?
I love the EA Animal Welfare Fund, thanks for your work!
Thank you for this thoughtful question and for your kind words about the Animal Welfare Fund! You raise an important point. Let me break down our approach:
First, we don’t operate with fixed portfolio allocations or minimum percentages per species. Instead, we aim to maximize the marginal impact of our grants based on our best current understanding. This means evaluating each opportunity on its own merits and seeing if it is above our bar. More about our bar here.
Secondly, it is worth noting that purely theoretical calculations often differ significantly from practical funding opportunities. While back-of-the-envelope calculations might suggest allocating a large percentage to certain species (like shrimps), we simply don’t see enough promising, implementation-ready opportunities in those areas to make such allocations feasible. Historically, we were limited to the applications we received, but we have recently started doing more active grantmaking to generate opportunities in areas that are cost-effective but neglected, and we plan to invest further in this in 2025.
Even so, if those opportunities existed, I think it would be unwise to make decisions purely on the basis of naive utilitarian calculations. I say “naive” to refer to the difference between actual cost-effectiveness and estimated cost-effectiveness. If I knew the actual cost-effectiveness of a given intervention, one that accounts for all the uncertainties:
Empirical—e.g., how many shrimps are we actually affecting?
Moral—e.g., how to trade off different intensities of pain and what moral weight should be assigned to different species?
Epistemological—e.g., how to account for far-future effects vs near-term effects of animal welfare interventions, given complex cluelessness?
then that would give me a true number for cost-effectiveness, a “god comes from the sky” kind of situation, and I would rely on it. However, any estimate of cost-effectiveness is going to be a naive one: a very uncertain estimate that may miss those important uncertainties. Additionally, I would refer here to the timeless classic “Why we can’t take expected value estimates literally (even when they’re unbiased)” by GiveWell. While AWF’s approach differs in some places from the one outlined in that blog post, I think the main point stands. They conclude:

“I feel that any giving approach that relies only on estimated expected-value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed. Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.”

In light of all that, I think we have imperfect information and too much fundamental uncertainty to justify an extremely undiversified allocation, even if an explicitly utilitarian calculation would point to that.
Additionally, in practice, our fund managers bring diverse perspectives, for example on how to weigh speculative versus evidence-backed approaches. This natural diversity helps ensure we maintain a balanced portfolio of proven interventions and those with high expected value but less certainty.
Currently, we’re working on refining our strategic framework, which may introduce additional allocation considerations. That said, our focus on neglected species and interventions already creates an implicit prioritization—we rarely fund work focused on cattle welfare, for instance, as other funders adequately cover this space.
For each grant we consider, we assess whether it meets our cost-effectiveness bar, which is influenced by other opportunities we see in our pipeline. This approach allows us to remain flexible and responsive to the most promising opportunities while maintaining high standards for expected impact.
Thanks for the answer, Karolina!

GiveWell still relies a lot on their explicit cost-effectiveness numbers. Elie Hassenfeld, their co-founder and CEO, mentioned on the Clearer Thinking podcast that:
GiveWell cost-effectiveness estimates are not the only input into our decisions to fund malaria programs and deworming programs, there are some other factors, but they’re certainly 80% plus of the case.
GiveWell also commented:

The numerical cost-effectiveness estimate in the spreadsheet is nearly always the most important factor in our recommendations, but not the only factor. That is, we don’t solely rely on our spreadsheet-based analysis of cost-effectiveness when making grants.
We don’t have an institutional position on exactly how much of the decision comes down to the spreadsheet analysis (though Elie’s take of “80% plus” definitely seems reasonable!) and it varies by grant, but many of the factors we consider outside our models (e.g. qualitative factors about an organization) are in the service of making impact-oriented decisions. See this post for more discussion.
For a small number of grants, the case for the grant relies heavily on factors other than expected impact of that grant per se. For example, we sometimes make exit grants in order to be a responsible funder and treat partner organizations considerately even if we think funding could be used more cost-effectively elsewhere.
“I feel that any giving approach that relies only on estimated expected-value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed. Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.”
One can estimate the expected value using sceptical priors to weight uncertain estimates less heavily, as with inverse-variance weighting. I think it is good to be explicit, so I suppose the question is whether it is cost-effective, i.e. worth investing the time to formalise the Bayesian adjustment.
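To make the inverse-variance weighting concrete, here is a minimal sketch under a standard normal-normal model: a sceptical prior on cost-effectiveness is combined with a noisy estimate, each weighted by the inverse of its variance. The numbers are purely illustrative, not AWF's or GiveWell's actual figures.

```python
def shrunk_estimate(prior_mean, prior_sd, estimate, estimate_sd):
    """Posterior mean of a normal-normal model: the prior and the estimate
    are combined via inverse-variance weighting, so shakier estimates
    (larger estimate_sd) are pulled more strongly toward the prior."""
    w_prior = 1.0 / prior_sd ** 2
    w_est = 1.0 / estimate_sd ** 2
    return (w_prior * prior_mean + w_est * estimate) / (w_prior + w_est)

# A shaky estimate (sd = 10) barely moves us off a sceptical prior of 1.0...
print(shrunk_estimate(1.0, 1.0, 10.0, 10.0))  # ≈ 1.09
# ...while a well-grounded estimate (sd = 0.5) largely overrides it.
print(shrunk_estimate(1.0, 1.0, 10.0, 0.5))   # = 8.2
```

This is the formal version of GiveWell's point: the adjustment rewards better-grounded estimates automatically, but in practice the hard part is putting credible numbers on the variances.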
Great question, Peter! Relatedly, have you (AWF’s team) considered disclosing which organisations you are not funding because of funding diversity concerns (as opposed to marginal cost-effectiveness without accounting for funding diversity below your bar)?