How do you think the expected marginal cost-effectiveness of the grantees compares to the large effective animal advocacy charities like The Humane League?
Also, how do you judge their expected marginal cost-effectiveness? Do you do back-of-the-envelope calculations? Compare to previous projects with estimates? Check the project team’s own estimates (and make adjustments as necessary)? All of the above? Any others?
Hi Michael,
Good questions, and appreciate you raising them. I am going to split the responses because they’re somewhat long.
>How do you think the expected marginal cost-effectiveness of the grantees compares to the large effective animal advocacy charities like The Humane League?
Tl;dr: The main things I think about are i) the generally lacking evidence base, which leaves the question unresolved, ii) risk and variance across the respective portfolios, iii) “big-picture” takes about the different portfolios, and iv) dynamics at the community level, including what the community-level portfolio should be. Spoiler: for those really interested in an explicit estimate, I don’t give one, but I would be happy to connect if you would like to discuss it!
I would be pretty curious to hear your perspective on this (or that of others) :)
For those interested in delving deeper, it could be worth reaching out to a few sources regarding this. For instance, I think people from THL probably have some good thoughts, and I would be happy to introduce anyone who might be interested. Also, flagging that I really could be biased here, as I am chair of the EA AWF and so probably have some interest in claiming greater effectiveness of grantees! I think there could also be some variance in the opinions of different fund managers, and I am just reporting some of my thoughts here.
Part of how I think about it is that the relatively lacking evidence base we have definitely contributes to the difficulty and, I think, leaves it all fairly unresolved. To put the size of the evidence base in perspective: the total size of our animal sectors (FAW and WAW) is well south of 10% of annual public global health R&D spending. Perhaps 5-10% of our sector’s resources (c. ~$200M/yr) could be categorized as research and development right now. So each year, we are looking at an evidence base that, measured by dollar size, seems to grow at less than 1% of the rate of the evidence base for global health. Furthermore, global health has been around much longer, so plausibly the difference in the sizes of the respective evidence bases could be on the order of a thousand times. (Sidenote: I am glad to see groups like RP working to improve the evidence base from which we operate.)
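To make the arithmetic explicit, here is a minimal sketch. The animal-sector figures come from the paragraph above; the global health R&D figure is a placeholder assumption (the text only implies it exceeds roughly $2B/yr), so treat this as an illustration rather than a sourced estimate.

```python
# Rough arithmetic behind the evidence-base comparison.
# Sector figures are from the text; the global health R&D
# figure is an assumed placeholder for illustration only.

sector_total = 200e6  # ~$200M/yr total FAW + WAW sector resources (from the text)
rd_share_low, rd_share_high = 0.05, 0.10  # 5-10% of that is R&D (from the text)

animal_rd_low = sector_total * rd_share_low    # ~$10M/yr
animal_rd_high = sector_total * rd_share_high  # ~$20M/yr

# Placeholder assumption: annual public global health R&D of $5B.
# (The text only claims the sector total is well under 10% of it.)
global_health_rd = 5e9

print(f"Animal R&D: ${animal_rd_low / 1e6:.0f}M-${animal_rd_high / 1e6:.0f}M/yr")
print(f"Share of assumed global health R&D: "
      f"{animal_rd_low / global_health_rd:.2%}-{animal_rd_high / global_health_rd:.2%}")
```

Under that assumption the annual animal R&D spend comes out to roughly 0.2-0.4% of the global health figure, consistent with the “less than 1%” claim above.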
Another part of how I think about it is that, right now at least, there seem to be clearly greater levels of risk and variance in the ROI associated with the AWF grantees than with THL. In fact, I’d say perhaps the main distinguishing factor between the two options’ marginal cost-effectiveness is that there appears to be much greater risk and variance in good done per marginal dollar among the AWF grantees. A big part of that is that, relative to THL’s programmatic portfolio, the AWF grantees’ portfolio seems much riskier and embraces areas that are less proven or have relatively long pathways to impact, such as research, farmed fish, wild animals, invertebrates, or early-stage seed funding. The geographic portfolio of the AWF grantees in sum also seems somewhat more risk-tolerant to me (e.g., slightly more percentage focus on parts of the globe where there is little or no organized animal advocacy).
Combining the above two points, we quickly end up in a position where i) the question is to an extent importantly unresolved due to lacking evidence, and ii) a main distinguishing factor in the marginal cost-effectiveness estimate could be the variance and risk among AWF grantees. Given i) and ii), I think different reasonable people can have different reasonable-sounding takes on which is more effective on the margin. A lot of it will probably come down to “big-picture takes” on the promisingness of some quite different approaches: e.g., the degree of sentience across different species, priors on different approaches, the weight to give different kinds of evidence, and the value/risk of early-stage funding for promising areas/groups/locations.
Without revealing too much, one thing I would say is that I have personally come to feel more risk-tolerant over the years. However, I am still pretty hesitant to give a direct estimate or strongly indicate my preferences, because some interested parties might skip straight to that number regardless of how many caveats or how much nuance I add. Honestly, I also have some sense that doing so publicly may result in losing credibility in the eyes of some important stakeholders. That said, I would be more than happy to personally chat and connect with anyone who is thinking through this question!
Relatedly, another layer to this is: as a community looking to help animals as much as possible, to what extent does it make sense for representatives to publicly weigh in on how promising “their” option is relative to some other competitive option? Perhaps what matters most is what is above the community’s bar for funding, what the community-level portfolio ought to look like, and how additional donations to various options would best bring us in line with the optimal portfolio. Within that community-portfolio lens, I think both options (the EA AWF and THL) land firmly above the bar for funding. I’d also say there can be an underrated degree of fungibility within EA-aligned funding in the animal sector: some EA-aligned donor/funder A deciding to give less, or not at all, to one promising option often importantly results in some EA-aligned donor/funder B giving more to that option.
Hopefully, that’s all helpful! :)
> Also, how do you judge their expected marginal cost-effectiveness? Do you do back-of-the-envelope calculations? Compare to previous projects with estimates? Check the project team’s own estimates (and make adjustments as necessary)? All of the above? Any others?
It varies by project and depends on who the grant investigator is.
If a) the project is relatively well-suited to a back-of-the-envelope calculation and b) such a calculation seems decision-relevant, then we will do one. Right now, a) and b) seem true in a minority of cases, maybe ~10-25% of applications depending on the round, to give some rough sense. However, there tends to be some difference between projects in areas or by groups we have already evaluated versus projects/groups/areas that are newer to us; I’d say newer ones are more likely to receive a back-of-the-envelope-style estimate. In cases where we do them, we generally look to compare them to ones we have previously done. If the project team submits its own estimate (which tends to be relatively rare, again perhaps in that 10-25% range, and they can be of varying quality), a fund manager will certainly review it and note their thoughts during the grant investigation.
More generally, here are some of the main things we like to look at to judge marginal cost-effectiveness (though, again, the extent really depends on the fund manager and the specifics of the application):
1. Do they seem to be operating in an area that seems high-impact? Things to look at include:
   - Is it work regarding a large-scale and neglected animal population?
   - Or work in a neglected but large-scale geography?
   - Or does this seem like a promising addition to the philanthropic alt-protein ecosystem?
   - Or does this intervention have a relatively promising track record (e.g., corporate campaigns)?
2. Also, do their plans in that area seem reasonable? Things to look at include:
   - Do their plans seem detailed and concrete, and exhibit a relatively deep understanding of the relevant issues?
   - How well do they respond when some alternative approach is suggested?
3. Combining 1 and 2, when applicable, does some quantitative back-of-the-envelope calculation suggest they help a high number of animals per dollar spent? Metrics include:
   - How many animal lives improved per dollar in expectation?
   - Or how many farmed animal lives averted per dollar spent in expectation?
   - Or perhaps how many dollars influenced per dollar donated?
4. Are we aware of any goings-on regarding the group that should give us pause? Things to look at include:
   - What level of staff retention have they had recently?
   - Has someone reached out to report some infraction that (reportedly) hasn’t been dealt with properly by the group?
   - Are there credible reports of concerns about how the group interacts with other groups in the movement?
5. What is the group’s current financial position? Things to look at include:
   - Relative to their annual budget, how much funding do they have in reserve?
   - What amount of funding are they expecting to raise from other sources?
The investigator produces a brief write-up summarizing their overall thinking, and assigns a vote to the application.
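As an illustration of the kind of back-of-the-envelope metric described above (expected animal lives improved per dollar), here is a minimal sketch. Every input below is a hypothetical placeholder, not an AWF figure, and the multiplicative structure is just one common way such estimates are framed.

```python
# Hypothetical back-of-the-envelope estimate for a corporate-campaign-style
# grant. All numbers are illustrative placeholders, not AWF figures.

def animals_helped_per_dollar(
    animals_affected: float,         # animals covered if the campaign succeeds
    years_of_impact: float,          # how long the commitment plausibly holds
    probability_of_success: float,   # chance the campaign succeeds
    counterfactual_discount: float,  # share of impact not attributable to this grant
    cost: float,                     # grant size in dollars
) -> float:
    """Expected animal-years improved per dollar spent."""
    expected_impact = (
        animals_affected
        * years_of_impact
        * probability_of_success
        * (1 - counterfactual_discount)
    )
    return expected_impact / cost

# Illustrative inputs only:
estimate = animals_helped_per_dollar(
    animals_affected=1_000_000,
    years_of_impact=5,
    probability_of_success=0.3,
    counterfactual_discount=0.5,
    cost=100_000,
)
print(f"{estimate:.1f} expected animal-years improved per dollar")
```

A sketch like this mainly serves to surface which input assumptions (success probability, counterfactual discount) drive the estimate, which is also where estimates compared across rounds tend to differ most.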