I’m a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Bob Fischer
Yes, Chris: we’re using a cardinal scale. To your point about estimating the average realized values of welfare, I agree that this would be highly valuable. Animal welfare scientists don’t do it because they don’t face decisions that require it. If you’re primarily responsible for studying broiler welfare, you don’t need to know how to compare broiler welfare with pig welfare. You just need to know what to recommend to improve broiler welfare. As for RP, we’d love to work on this and I’ve proposed such projects many times. However, this work has never been of sufficient interest to funders. If that changes, you can bet I’ll devote a lot of time to it!
That’s a tough one, Chris. I assume you’re looking for something like, “On a −1 to 1 scale, the average welfare of broiler chickens is −0.7, the average welfare of pigs is −0.1, the average welfare of cattle is 0.2, etc.” Is that right? The closest thing to that would be the scores that Norwood and Lusk give in Compassion by the Pound, though not for shrimp, and I also tend to think that their numbers skew high. For the most part, animal welfare scientists aren’t interested in scoring welfare on a cardinal scale, so it’s an oddity when they try. (Marc Bracke is one exception, though I don’t think you’re going to get what you want from his papers either.) I’m sorry that I can’t be of more help!
Great question, Michael. Short answer: there are bound to be lots of valuable research projects after these three; so, we’d hold the funds until we found a lab that’s willing and able to take on a sufficiently impactful project. One long-term goal is to support the many foundational research projects that need to be done on insect welfare. When we consider the sheer number of species (1M described; probably 5.5M in total) and the range of ways humans affect insects, it’s clear that we need a wide set of validated welfare indicators to make judgments about how best to help these animals.
Thanks for your question, Oscar. We have some applications outstanding but don’t know how much funding they’ll generate. In general, animal welfare funding is quite tight, with lots of worthy projects going unsupported. So, we’re almost certain to have more opportunities than resources. And given how new the field of insect welfare science is, we anticipate this problem continuing, as there are bound to be many other high-value projects like these in the coming year(s).
Answering on behalf of Arthropoda Foundation. We’ve summarized our funding priorities here. Everything we raise will go toward funding insect welfare science (as we have no staff or overhead), with a particular focus on humane slaughter, nutrition and living conditions, and implementable welfare assessment tools.
Support Insect Welfare
OP funded several scientists working on insect sentience and welfare. Arthropoda Foundation was formed to centralize and stabilize funding for those scientists. However, we've not yet replaced all the funding from GVF. For more on our funding priorities, see our post for Marginal Funding Week.
I really appreciate your work, Richard, and over the last few years, I’ve loved the opportunity to work on some foundational problems myself. Increasingly, though, I’d like to see more philosophers ignore foundational issues and focus on what I think of as “translational philosophy.” Is anyone going to give a new argument for utilitarianism that significantly changes the credences of key decision-makers (in whatever context)? No, probably not. But there are a million hard questions about how to make existing policies and decision-making tools more sensitive to the requirements of impartial beneficence. I think the model should be projects like Chimpanzee Rights vs., say, the kinds of things that are likely to be published in top philosophy journals.
I don’t have the bandwidth to organize it myself right now, but I’d love there to be something like a “Society for Translational Philosophy” that brings like-minded philosophers together to work on more practical problems. There’s a ton of volunteer labor in philosophy that could be marshaled toward good ends; instead, it’s mostly frittered away on passion projects (which I say as someone who has frittered an enormous amount of time away on passion projects; my CV is chaos). A society like that could be a very high-leverage opportunity for a funder, as a small amount spent on infrastructure could produce a lot of value in terms of applicable research.
Thank you, Maya!
Really appreciate this, Aaron! Very good of you!
Thanks so much!
Thanks for asking, Nick! Although we tried to make it as accessible as possible, it’s still pitched to academics first and foremost. For those who just want the big picture, this podcast episode is probably the best option right now. We’re also working on an article-length overview, but it may be a few months before that’s available. I’ll share it here when it is!
Hi Josh. There are two issues here: (a) the indirect effects of helping humans (including the potential that humans have to make a positive impact) and (b) the positive portion of humans' and animals' welfare ranges. We definitely address (b), in that we assume that every individual with a welfare range has a positive dimension of that welfare range. And we don't ignore that in cost-effectiveness analysis, as the main benefit of saving human lives is allowing/creating positive welfare. (So, averting DALYs is equivalent to allowing/creating positive welfare, at least in terms of the consequences.)
We don’t say anything about (a), but that was beyond the scope of our project. I’m still unsure how to think about the net indirect effects of helping humans, though my tendency is to think that they’re positive, despite worries about the meat-eater problem, impacts on wild animals, etc. (Obviously, the direct effects are positive!) Others, however, probably have much more thoughtful takes to give you on that particular issue.
Thanks, Nick, both for your very kind words about our work and for raising these points. I’ll offer just a few thoughts.
You raise some meta-issues and some first-order issues. However, I think the crux here is about how to understand what we did. Here’s something I wrote for a post that will come out next week:
Why did a project about “moral weight” focus on differences in capacity for welfare? Very roughly, a moral weight is the adjustment that ought to be applied to the estimated impact of an animal-focused intervention to make it comparable to the estimated impact of some human-focused intervention. Given certain (controversial) assumptions, differences in capacity for welfare just are moral weights. But in themselves, they’re something more modest: they’re estimates of how well and badly an animal’s life can go relative to a human’s. And if we assume hedonism—as we did—then they’re something more modest still: they’re estimates of how intense an animal’s valenced states can be relative to a human’s. The headline result of the Moral Weight Project was something like: “While humans and animals differ in lots of interesting ways, many of the animals we farm can probably have pains that aren’t that much less intense than the ones humans can have.”
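To make the "adjustment" idea concrete, here's a toy calculation with purely hypothetical numbers (the 0.3 is an illustration, not one of our estimates):

$$\text{human-equivalent impact} = w \times \text{animal-side impact}$$

So if an intervention improves 1,000 chicken life-years and you use a moral weight of $w = 0.3$, you'd treat it as equivalent to improving 300 human life-years, which can then be compared directly with a human-focused intervention.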
I don’t think you’ve said anything that should cause someone to question that headline result. To do that, we’d want some reason to think that a different research team would conclude that chickens feel pain much less intensely than humans, some reason to think that neuron counts are good proxies for the possible intensities of pain states across species, or some principled way of discounting behavioral proxies (which we should want, as we otherwise risk allowing our biases to run wild). In other words, we’d want more on the first-order issues.
To be fair, you’re quite clear about this. You write:
I present four critical junctures where I think the Moral Weights project favored animals. I don’t argue that any of their decisions are necessarily wrong, only that each decision shifts the project outcome in an animal-friendly direction and sometimes by at least an order of magnitude.
But the ultimate question is whether our decisions were wrong, not whether they can be construed as animal-friendly. That's why the first-order issues are so important. So, for instance, if we should have given more weight to neuron counts, so be it: let's figure out why that would be the case and what the weight should be. (That being said, we could up the emphasis on neuron counts considerably without much impact on the results. Animal-to-human neuron count ratios aren't vanishingly low. So, even if they determined a large portion of the overall estimates, we wouldn't get differences of the kind you've suggested. In fact, you could assign 20% of your credence to the hypothesis that animals have welfare ranges of zero: that still wouldn't cut our estimates by 10x.)
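To spell out that last bit of arithmetic: if $W$ is our estimate of an animal's welfare range, then assigning 20% credence to the zero hypothesis gives an expected range of

$$0.8 \times W + 0.2 \times 0 = 0.8W,$$

a reduction by a factor of only 1.25. You'd need roughly 90% credence in the zero hypothesis before the estimate dropped by 10x.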
All that said, you might contest that the headline result is what I’ve suggested. In fact, people on the Forum are using our numbers as moral weights, as they accept (implicitly or explicitly) the normative assumptions that make moral weights equivalent to estimates of differences in the possible intensities of valenced states. If you reject those assumptions, then you definitely shouldn’t use our numbers as moral weights. That being said, if you think that hedonic goods and bads are one component of welfare, then you should use our numbers as a baseline and adjust them. So, on one level, I think you’re operating in the right way: I appreciate the attempt to generate new estimates based on ours. However, that too requires a bunch of first-order work, which we took up when we tried to figure out the impact of assuming hedonism. You might disagree with the argument there. But if so, let’s figure out where the argument goes wrong.
One final point. I agree, and have always said, that our numbers are provisional estimates that I fully expect to revise over time. We should not take them as the last word. However, the way to make progress is to engage with hard philosophical, methodological, and empirical problems. What's a moral weight in the first place? Should we be strict welfarists when estimating the cost-effectiveness of different interventions? How should we handle major gaps in the empirical literature? Is it reasonable to interpret the presence of cognitive biases as evidence of valenced states? How much weight should we place on our priors when estimating the moral importance of members of other species? And so on. I'm all for doing that work.
I’m encouraged by your principles-first focus, Zach, and I’m glad you’re at the helm of CEA. Thanks for all you’re doing.
Thanks for your question, Nathan. We were making programmatic remarks and there’s obviously a lot to be said to defend those claims in any detail. Moreover, we don’t mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being wary about what evidence they share for social reasons. So, we hope that’s enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.
Thanks, Deborah. Derek Shiller offered an answer to your question here.
Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been on animal welfare economics, where having a method for making interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also reported to us that our work prompted an update on the importance of animals generally and of invertebrates specifically. We've seen something similar with the CCM tool, with responses ranging from positive feedback and enthusiasm to concrete updates in users' decisions. There's more we can say privately than publicly, however, so please feel free to get in touch if you'd like to chat!
The thought is that we think of the Conscious Subsystems hypothesis as a bit like panpsychism: not something you can rule out, but a thesis speculative enough that we weren't interested in including it, as we don't think anyone really believes it for empirical reasons. Insofar as anyone assigns some credence to it, it's probably for philosophical reasons.
Anyway, I totally understand wanting every hypothesis over which you're uncertain to be reflected in your welfare range estimates. That's a good project, but it wasn't ours. Still, fwiw, it's really unclear what that would imply in this particular case, as it's so hard to pin down which Conscious Subsystems hypothesis you have in mind and which credences you should assign to all the variants.