Strategy Fellow — cFactual
Principal — Good Structures
I previously co-founded and served as Executive Director at Wild Animal Initiative, and served as the COO of Rethink Priorities from 2020 to 2024.
Thanks!
This is neat! It seems like there is a possibility that the cost-effectiveness is even higher, especially on the realization of actions to help animals side, since EAAL is providing a community for people to stay engaged through, which might decrease volunteer recidivism in the long run.
Thanks for this—we definitely agree that there needs to be more work in the field. However, I think it's unlikely that we are best positioned to do that work, which is why academic field building is such a major part of our focus. Both WASR and UF tried this over the last year (UF has a write-up on this here—https://www.utility.farm/words/academic-outreach). Neither had much success, whether by offering funding for research directly or by trying to shift values.
We are taking a new approach to this now, working with early career academics who are less likely to have their reputation staked on certain approaches, etc. Hopefully, this will be a lot more fruitful for generating novel and relevant lab and field research. Also, this research would likely be funded outside EA / animal advocacy, which adds additional value. Lab research is significantly more expensive than literature reviews / things within our capabilities. Since we don’t have a strong sense of what the most important questions are to answer first, shifting those costs to external organizations reduces risk in some ways for us, while allowing us to still help shape the direction of the research to some extent.
To clarify one thing—when we refer to academic outreach, we mean outreach to academics in the hard sciences, specifically working on building welfare biology as an academic field. UF and WASR both had at least one staff dedicated to this throughout the last year. UF has a writeup on their efforts here—https://www.utility.farm/words/academic-outreach, and WASR’s approach included doing a request for grant proposals etc. for academics.
I don’t think there will be significant overlap—we are trying a new approach targeting early career academics, and offering them funding to work on the outreach themselves instead of us. From what I understand of AE’s program this is pretty different. We are also primarily operating in the US, while AE has less of a presence here, and seems to me to have generally worked with European academics. Regardless, we plan on working to coordinate with them to the greatest extent possible to limit overlap.
This is cool! There are definitely limiting factors on working on an issue, but that doesn't mean that you shouldn't focus on that cause—rather, part of the cost-effectiveness calculation will be how much it costs to raise those limits. In the 1970s and '80s, the talent pool for working on farmed animal advocacy, for example, was much smaller. But if we hadn't worked on it, built up a better talent pool, brought in more donors, etc., we'd still be in that position today, and wouldn't have the capacity we have now. The scale of a problem is important because it is true independent of the state of the movement. Limiting factors are not—Wild Animal Initiative (where I work), for example, is pursuing academic outreach on wild animal welfare precisely because it will help us address these limits in the long run, by growing the talent pool etc. AI alignment research probably had a talent pool of basically 0 only a few years ago. Does that mean that no one should have started working on it at that point?
Regardless, you can just update your cost-effectiveness estimates by factoring in the costs to raise these limits.
E.g. it currently costs X dollars to help Y wild animals, up to 1000*Y, at which point some limiting factor stops us from helping more wild animals.
We can increase that limit at a cost of Z per Y more animals (perhaps through advocacy to bring in new talent or donors, or to improve the logistical limit).
The real cost-effectiveness is not a flat X/Y dollars per animal; it is:
X/Y dollars per animal for the first 1000*Y animals, and
(X+Z)/Y dollars per animal beyond that.
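This piecewise cost structure can be sketched in a few lines of code. The figures below (X = $10, Z = $5, a limit of 1000 batches) are purely hypothetical illustrations, not estimates from the post:

```python
# Piecewise cost of helping animals, where each "batch" of Y animals costs
# X dollars, and each batch beyond the limit also requires Z dollars to
# raise the limiting factor (e.g. train more people, recruit more donors).
def cost_to_help(n_batches, X, Z, limit_batches=1000):
    """Total dollars needed to help n_batches batches of Y animals."""
    within = min(n_batches, limit_batches)
    beyond = max(n_batches - limit_batches, 0)
    return within * X + beyond * (X + Z)

# Hypothetical figures: $10 per batch, $5 extra per batch past the limit.
print(cost_to_help(500, X=10, Z=5))    # under the limit: 500 * 10 = 5000
print(cost_to_help(1500, X=10, Z=5))   # 1000*10 + 500*(10+5) = 17500
```

The point is just that average cost-effectiveness degrades smoothly past the limit, rather than hitting a hard wall.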
Given this, it is still possible that working on wild animals is really cost-effective. Look at the sheer number of invertebrates negatively impacted by insecticides, for example. If we can develop a tractable intervention in ~2 years to help them, it is possible that in that time, we can spend a little more to improve some of these limits as well, and over the whole period, have a really cost-effective intervention overall.
Similarly, in your surgery example, when you hit a limit on surgeries you can provide due to the number of surgeons, you can pay more to train more surgeons (or address the limit however you're able to). Obviously this lowers the cost-effectiveness, but for many interventions, it still might be a good option at the higher cost.
I guess my thought is: problem scale definitely is super important, and limiting factors matter because they change the cost-effectiveness. But since limits are mutable, they shouldn't be viewed as hard barriers to working on an issue. Regardless of what the limits or the scale are, what we actually should be looking at is the cost-effectiveness of improving the issue. Limiting factors are one consideration within that, and so is scale, since scale caps how long a given cost-effectiveness remains applicable. For example, including the costs of raising limits, suppose in the future we can help farmed animals at $Z per animal and wild animals at $Z/3 per animal; we might then want to help wild animals until we run out of wild animals to help, and only then focus on the next best thing, which might be farmed animals (obviously an oversimplified example).
That makes a lot of sense. Maybe one way of framing scale + cost-effectiveness could be “how long will a particular cost-effectiveness be applicable in the real world?”, and then two ways of describing that cost-effectiveness are either incorporating costs to raise these limits or not.
In either case, I definitely agree that these should be considered. One other thought—it seems like in certain ways, a donation to a charity will account for their efforts to raise limits, to some extent. I don’t know enough about how ACE does cost-effectiveness analysis (and obviously the degree to which this information is incorporated would definitely depend on that), but I could imagine that if you make a statement like “a donation of $100 to The Humane League will help reduce the suffering of X animals”, in a complete assessment of that donation, some of that funding would be going to their development department (raising the amount of funding available), some might be going to volunteer cultivation (maybe volunteer capacity is another limiting factor).
So the issue is more that the outcome per dollar we are looking at is based on historical performance, but over time that outcome per dollar is actually worse, because some of that funding was going toward raising limits and would really need to be attributed to animals not yet helped, if that makes sense.
Either way, I’m really interested in this—since reading it, I’ve been thinking of how I can incorporate this kind of thinking about cost-effectiveness into my organization—it seems tricky, but definitely worth doing a lot more of. Thanks for posting it!
My personal opinion is that it is pretty much impossible to make claims at this point about the sign of many animals’ lives without significantly more research. I think the arguments regarding welfare and life history strategy are compelling prima facie, but that might not be enough evidence for action immediately, and instead indicates it is a high priority area for study (which is why we have so much life history work planned this year). Models like the ones you linked here are interesting and provide some insight, but also have huge assumptions built in that significantly alter the results depending on the author’s views on some critical issue (scoring relative utility of subjective experiences, weighting based on the square root of neurons, and a sentience multiplier), and also don’t account for variations in season, climate etc., that would probably alter those numbers massively as well.
My personal guess is that we are quite a ways off from being able to do this comprehensively (at least a few years) for any particular arthropod population, not including discounts that might be made based on number of neurons or whatever features we think might be important. And we are probably much further out from being able to state with certainty which of those features are important, and how much we should discount on the basis of them (if at all).
Either way, academic buy-in is going to be crucial, which is why we are so focused on academic outreach, and doing research that will help us understand what early academic work we should prioritize.
Thanks for your research! It was interesting to see!
Thanks!
I totally agree—they also often help identify where more research is needed (like seeing which numbers are the hardest to lock down).
Hey Saulius!
This is awesome! I have a few questions:
-Can we make any inferences about what percent of wild-caught fish were originally stocked in specific areas? Has any research been done (via tagging, genetic markers, species composition, etc.) to try to estimate that? I guess my question is whether reducing the number of stocked fish in commercial fisheries would affect the commercial fishing industry in ways we'd expect to help animals (e.g., if fewer stocked fish made commercial fishing less commercially viable, that might in turn reduce the fishing of truly wild fish). While the impact of that on wild animals is unclear, it seems like a consideration.
-On the large numbers of juvenile fish that mysteriously don’t seem to be making it to adulthood—is it possible this is a species specific thing? I know in China, there is a dish called 银鱼 (silverfish) that is just dozens or hundreds of fry in a bowl (also called whitebait?). It looks like they are called Icefish in English—https://en.wikipedia.org/wiki/Salangidae—I wonder if the stats are somehow not accounting for fish being eaten at a younger age or stocking specifically for whitebait dishes? Also, it looks like whitebait is eaten a ton of places—https://en.wikipedia.org/wiki/Whitebait
Another possibility is tons of young fish are being used for fishmeal or stocked to feed other fish? They might not make it into stats about fish produced then.
Regardless, thanks for doing all these pieces—they’ve all been really informative and needed for way too long!
That makes sense—thanks for sharing these. I’m honestly surprised the icefish count is so low, but that’s just because it seems popular as a dish and requires a lot of fish. One other theory—is there much information on the fishmeal market? It seems possible that the statistics (I didn’t look too far into methods so this might be wrong) are representing fish sold (or leaving facilities) and that hatcheries are processing fish into fishmeal on site and using it to feed fry and fingerlings? Just a thought about other ways lots of fish might be produced but not represented in counts—especially if the methods for counting are different.
Thanks! That makes sense.
Another issue is if multiple charities are working on the same issue, and cooperating, there might be times when a particular charity actively chooses to take less cost-effective actions in order to improve movement wide cost-effectiveness. This happens frequently with the animal welfare corporate campaigns. For example:
Charity A has 100 good volunteers in City A, where Company A is headquartered. To run a campaign against them would cost Charity A $1000, and Company A uses 10M chickens a year. Or, they could run a campaign against Company B in a different city where they have fewer volunteers for $1500.
Charity B has 5 good volunteers in City A, but thinks they could secure a commitment from Company B in City B, where they have more volunteers, for $1000. Company B uses 1M chickens per year. Or, by spending more money, they could secure a commitment from Company A for $1500.
Charities A and B are coordinating, and agree that Companies A and B committing will put pressure on a major target (Company C), and want to figure out how to effectively campaign.
They consider three strategies (note—this isn’t how the cost-effectiveness would work for commitments since they impact chickens for longer than a year, etc, but for simplicity’s sake):
Strategy 1: They both campaign against both targets, at half the cost it would be for them to campaign on their own, and a charity evaluator views the victories as split evenly between them.
Charity A cost-effectiveness: (5M + 0.5M chickens) / ($500 + $750) = 4,400 chickens / dollar
Charity B is also 4,400 chickens / dollar.
$2500 total spent across all charities
Strategy 2: Charity A targets Company A, and Charity B targets Company B
Charity A: 10,000 chickens / dollar
Charity B: 1,000 chickens / dollar
$2000 total spent across all charities
Strategy 3: Charity A targets Company B, Charity B targets Company A
Charity A: 667 chickens / dollar
Charity B: 6,667 chickens / dollar
$3,000 total spent across all charities
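The arithmetic for the three strategies above can be checked with a short sketch. All figures come straight from the example, and commitments are treated as one year of impact for simplicity, as noted:

```python
# Chickens used per year by each company, and each charity's campaign costs.
CHICKENS = {"A": 10_000_000, "B": 1_000_000}
COST = {("CharityA", "A"): 1000, ("CharityA", "B"): 1500,
        ("CharityB", "A"): 1500, ("CharityB", "B"): 1000}

def strategy_1():
    # Both charities split both campaigns: half cost each, credit split evenly.
    chickens = (CHICKENS["A"] + CHICKENS["B"]) / 2
    spend_a = (COST[("CharityA", "A")] + COST[("CharityA", "B")]) / 2
    spend_b = (COST[("CharityB", "A")] + COST[("CharityB", "B")]) / 2
    return chickens / spend_a, chickens / spend_b, spend_a + spend_b

def strategy_2():
    # Each charity targets the company where it is strongest (cheapest).
    ce_a = CHICKENS["A"] / COST[("CharityA", "A")]
    ce_b = CHICKENS["B"] / COST[("CharityB", "B")]
    return ce_a, ce_b, COST[("CharityA", "A")] + COST[("CharityB", "B")]

def strategy_3():
    # Each charity targets the company where it is weakest (most expensive).
    ce_a = CHICKENS["B"] / COST[("CharityA", "B")]
    ce_b = CHICKENS["A"] / COST[("CharityB", "A")]
    return ce_a, ce_b, COST[("CharityA", "B")] + COST[("CharityB", "A")]

print(strategy_1())  # (4400.0, 4400.0, 2500.0)
print(strategy_2())  # (10000.0, 1000.0, 2000)
print(strategy_3())  # (~666.7, ~6666.7, 3000)
```

Strategy 2 minimizes total movement spend ($2,000), even though it gives the two charities wildly different measured cost-effectiveness.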
These charities know that a charity evaluator is going to be looking at them and trying to make a recommendation between the two based on cost-effectiveness. Clearly, the charities should choose Strategy 2, because the least money will be spent overall (and both charities will spend less for the same outcome). But if the charity evaluator is fairly influential, Charity B might push hard for the less ideal Strategies 1 or 3, because those make its cost-effectiveness look much better. Strategy 2 is clearly the right choice for Charity B to make, but if they make it, an evaluation of their cost-effectiveness will look much worse.
I guess a simple way of putting this is—if multiple charities are working on the same issue, and have different strengths relevant at different times, it seems likely that often they will make decisions that might look bad for their own cost-effectiveness ratings, but were the best thing to do / right decision to make.
Also, on the matching funds note—I personally think it would be better to assume matching funds are true matches rather than not. I've fundraised for maybe 5 nonprofits, and out of probably 20+ matching campaigns in that period, maybe 2 were not true matches. Additionally, nonprofits often ask major donors to match funds as a way to encourage the major donor to give more (e.g. "you could give $20k like you planned, or you could help us run our $60k year-end fundraiser by matching $30k" type of thing). So I'd guess that for most matching campaigns, the fact that it is a matching campaign means there will be some multiplier on your donation, even if it is small. Maybe it is still misleading then? But overall it's a practice that makes sense for nonprofits to do.
I’ll ask whoever runs the utility.farm site to update my piece cited in this with a note that the cost-effectiveness estimates might be based on bad estimates of cat impact.
Additionally, my cost-effectiveness estimates were only for the US—it is probably most cost-effective to work on cat predation in countries like the UK where a much higher percentage of outdoor cats are owned.
I find the comments about rodents/birds interesting, but mostly irrelevant to the discussion of cat predation, and I find it very strange to frame addressing cat predation and improving rodent welfare as competing aims. I'm going to refer to rodents below, but this could apply to any animals killed by cats. It doesn't seem obvious that the causal chain we should care about is "stopping cat predation causes painful rodent deaths"; instead, we should consider both rodenticides causing painful rodent deaths and cat predation causing painful rodent deaths to be important issues.
For there to be a coherent argument against addressing cat predation, you would need to demonstrate not only that rodenticides are more painful than death via cats, but also have a picture of the average rodent's life after the moment it would have been killed by a cat. Since any rodent killed by a cat would, by definition, have lived longer had it instead died by rodenticide, the rodent would accumulate further positive and negative experiences during that extra life. Even if rodenticides are twice as painful, it seems reasonable to expect a prolonged life to often be good, and to outweigh that.
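One way to make this comparison concrete is a back-of-the-envelope expected-welfare sketch. Every number below is a made-up illustration, not an estimate from any source:

```python
# Net welfare for a rodent, from the counterfactual moment a cat would
# have killed it. All figures are hypothetical welfare units.
def net_welfare(death_pain, extra_life_welfare=0.0):
    """Welfare accumulated after the would-be moment of cat predation."""
    return extra_life_welfare - death_pain

# Killed by the cat immediately: no extra life, moderate death pain.
cat_now = net_welfare(death_pain=10)
# Spared the cat, killed later by rodenticide: a death twice as painful,
# but preceded by additional (here, positive) life experience.
rodenticide_later = net_welfare(death_pain=20, extra_life_welfare=15)

print(cat_now, rodenticide_later)  # -10 -5
```

Under these illustrative numbers the longer life outweighs the more painful death, which is the point in the paragraph above: the comparison depends on the whole counterfactual life, not just the two modes of death.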
Regardless, this doesn't seem like an argument against addressing cat predation—it's an argument that the most effective way to address rodent suffering might be to address both cat predation and painful rodenticides.
A human analogy might be: we shouldn’t address malaria because someone dying from malaria, if saved, might die from another more painful disease later. I think what follows from that is that we should address malaria and the other painful disease. Not that we should let malaria kill the person / say that it is unclear if malaria is good or bad. Addressing malaria is clearly good, as is reducing cat predation. There might be unintended effects of both that also need to be addressed. But it doesn’t mean that addressing those things has an unclear sign.
My point is, cat predation is clearly bad for rodents. Rodenticides are also clearly bad. We should probably address both things, and be aware of the effect of only addressing one, but not that we shouldn’t address either. Broadly, this seems to apply to wild animal welfare issues in general—the downstream effects are really really complicated, but by doing monitoring during interventions to see unanticipated effects, and addressing those as they come up, we probably make more progress than just pointing out the complexity.
I guess the question that is raised is, how far down should we care about downstream effects from interventions, as opposed to just monitoring interventions and addressing effects as they arise.
“However, perhaps my largest surprise wasn’t an update toward or against a particular type of animal, rather it was based on the extent of conditioned learning behavior that is more or less exhibited by all taxa we considered, including single-celled organisms and animal bodies detached from brain communication, including the lower body of a mouse with a severed spine. While one could take this as weak evidence of widespread sentience, this updated me toward thinking many of these behaviors aren’t very impressive and they were thus largely disregarded in contemplating the positive case for sentience. ”
Marcus, is there any chance you could elaborate on why you leaned one way on this vs the other? I don’t have a clear sense of what I should take away from that, so I’d be curious what your reasoning was.
...
I'd also be interested in all of your thoughts on what exactly a percentage probability of valenced experience (or whatever the morally relevant mind-stuff should be called) represents—obviously, these probabilities aren't claims about the fact of the matter of whether these organisms have valenced experience (which, unless the world is very strange, should be 1 or 0 for everything).
It seems more like they are statements about how you'd make a bet, or something like "confidence in the approach * results from the approach", or something else about the approach and prioritization. I'm curious how you were defining these probabilities to yourselves, and how those definitions would impact their usefulness in cost-effectiveness analyses. I.e., if we were doing a cost-effectiveness estimate and treating these as confidence * results, I might weight my confidence in this method higher than in using my intuitions, but still include other approaches like intuition in my estimate, because that theoretically gives me a more accurate model of my current knowledge. With a different definition, though, I might just use these numbers directly.
Thanks for the detailed response. I think I disagree in a sort of principled way with particular kinds of approaches to downstream effects, in part because I think it could just turn into an endless game of trying to figure out how things could turn out poorly, as opposed to a model where we address both rodenticides and cat predation (though I recognize I am stubbornly resisting you all trying to do prioritization, which might not be a good idea given the name of your organization).
Regardless, I’m drafting a new intro section for my cost-effectiveness updates linking to your updated numbers. Thanks again for doing the analysis!
I have a few questions -
Can you elaborate on what difference species makes for the wild bug, wild fish, etc reports? I could imagine that a spider has a pretty different welfare on average than an ant, for example, so it seems hard to know what kind of animal these particular scores represent when they cover big categories.
Also, it seems like some of the welfare considerations for factory farmed animals don’t account fully for regional laws, etc. (e.g. a country where debeaking is banned probably would have a different score for laying hens in battery cages than in ones where it isn’t). Do you think that the average life tends to be close enough to some minimum that these sorts of differences don’t end up mattering?
Also, as a small side note—you note in the broiler report that broilers are debeaked—that isn’t accurate generally, except for breeding stock, since broilers are killed at a pretty young age now, before pecking negatively impacts the flock.