I’m a Senior Researcher for Rethink Priorities and a Professor of Philosophy at Texas State University. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Bob Fischer
Thanks, Nick. A few quick thoughts:
It’s reasonable to think there are important differences between at least some insects and some of the smaller organisms under discussion on the Forum, like nematodes. See, e.g., this new paper by Klein and Barron.
I don’t necessarily want to give extra weight to net harm, as Michael suggested. My primary concern is to avoid getting mugged. Some people think caring about insects already counts as getting mugged. I take that concern seriously, but don’t think it carries the day.
I’m generally skeptical of Forum-style EV maximization, which involves a lot of hastily-built models with outputs that are highly sensitive to speculative inputs. When I push back against EV maximization, I’m really pushing back against EV maximization as practiced around here, not as the in-principle correct account of decision-making under uncertainty. And when I say that I’m into doing good vs. doing good in expectation, that’s a way of insisting, “I am not going to let highly contentious debates in decision theory and normative ethics, which we will never settle and on which we will all change our minds a thousand times if we’re being intellectually honest, derail me from doing the good that’s in front of me.” You can disagree with me about whether the “good” in front of me is actually good. But as this post argues, I’m not as far from common sense as some might think.
FWIW, my general orientation to most of the debates about these kinds of theoretical issues is that they should nudge your thinking but not drive it. What should drive your thinking is just: “Suffering is bad. Do something about it.” So, yes, the numbers count. Yes, update your strategy based on the odds of making a difference. Yes, care about the counterfactual and, all else equal, put your efforts in the places that others ignore. But most people, in most circumstances, should look at their opportunity set, choose the best thing they think they can sweat and bleed over for years, and then get to work. Don’t worry too much about whether you’ve chosen the optimal cause, whether you’re vulnerable to complex cluelessness, or whether one of your several stated reasons for action might lead to paralysis, because the consensus on all these issues will change 300 times over the course of a few years.
Thanks, Michael. I’m quite sympathetic to the idea of bracketing!
Lastly, this article is good. The possibility that they’re right is one of the things that makes me inclined to see insects as the limit case.
Thanks, all. Let me add something that may help clarify why we’re always at loggerheads. I’m not actually thinking about these questions in probabilistic terms at all. In my view, the evidential situation for most arthropods is so sparse that I don’t actually believe we’re in a position to assign meaningful probabilities of sentience—even extremely rough ones. We’re squarely in the domain of the precautionary, not the probabilistic. When the evidence is this patchy and the mechanisms this poorly understood, numerical probability assignments feel more like artifacts of modeling choices than reflections of the world. So, when I talk about “robustness,” I’m not covertly appealing to narrower or wider probability distributions; I’m saying that the entire framework of attaching numbers to these uncertainties feels inappropriate.
This is one of several reasons why focusing on well-studied insects makes sense to me. It’s not that I think BSF larvae are 10× or 100× more likely to be sentient than springtails. It’s that we have a type of evidence for some insects—convergent behavioral, physiological, and neuroanatomical findings—that simply doesn’t exist at all for mites, springtails, and nematodes. And without that evidential base, I’m wary of using a first-pass model to set priorities. Expected value becomes extremely fragile under those conditions, as the inputs aren’t grounded: they’re guesses stacked on guesses.
So the way I think about prioritization has less to do with estimated probabilities and more to do with where precautionary reasoning can actually get traction. Work on farmed and research arthropods produces immediate welfare improvements, helps develop welfare indicators, and builds the scientific ecosystem we’ll need if we ever hope to understand smaller arthropods. That’s a much more stable basis for action than trying to set priorities via BOTECs.
Anyway, we’ll just have to agree to disagree, as we just keep running up against the same issues over and over!
I’m sorry that I don’t have time to respond to all your questions, Vasco. The short version, though, is that I also want robustness in the case for sentience, so I’m much less inclined to make the kinds of extrapolations you’re suggesting here. I have the same view about our moral weight work: I put very little stock in any specific numbers, as I think that plausible moral weights will be defensible from several angles, each of which will suggest somewhat different estimates, with no obvious right way to aggregate them. (Again, there’s that skepticism about expected value!)
Caring about Bugs Isn’t Weird
Arthropoda is a 501(c)(3). As this thread indicates, Mal Graham and I run the organization. We keep a lean profile because many science funders keep a lean profile. I realize that it isn’t optimal for fundraising, but I think it’s normal enough from the perspective of our grantees. If you’d like to discuss further, happy to chat.
I agree with Mal about Arthropoda being a good bet for this work. RP would be good too. On the macro-level issue of priorities, I’ve gathered some of my thoughts here.
Finally, I’ll say publicly what I’ve said privately: thank you for supporting Arthropoda. It means a lot to me that you donated.
Thanks, Vasco! Abraham’s post covers many more farmed insects than BSF and mealworms. (For instance, the lower end of his farmed cochineal estimate is 4.6T deaths annually.) When you include those other species, I think the “rounding error” claim becomes more plausible. (Sorry not to be clear in the post: I probably gave the impression that I was only thinking of the standard “insects as food and feed” species.)
In the interest of clarity, I’ve updated the original post in response to @Hugh P’s helpful question.
Yes: if you’re right! But that’s an awfully big bet. As you might expect, it isn’t one I’m prepared to make. And I’m not sure it’s one you should be prepared to make either, as your credence in this view would need to be extremely high to justify it.
In any case, thank you for the detailed reply. I have a much better understanding of our disagreement because of it.
Thanks for your comment, Dennis. One worry here is that you might be holding work on animal minds to an impossible standard. Yes, no one has a way to detect qualia directly, but surely we can make some inferences across the species boundary. Minimally, it seems very plausible that Neanderthals were sentient—and they, of course, were not Homo sapiens. What makes that so plausible? Well, all the usual evidence: behavioral similarities, neurophysiological similarities, the lack of a plausible evolutionary story about why sentience would only have emerged after Neanderthals, etc.
Admittedly, plausibility decreases as phylogenetic distance grows (though the rate of change is up for debate). Still, our epistemic situation seems to differ in degree, not in kind, when we consider stags—and, I submit, stag beetles.
One way to appreciate the value of the evidence that you’re criticizing is to imagine it away. Suppose that Bateson and Bradshaw had not found “the measurable quantities denoted… by the words ‘stress’ and ‘agony’ (such as enzyme levels in the bloodstream).” Surely it would have been less reasonable to believe that stags suffer in those circumstances. But if it would have been less reasonable without that evidence, it can be more reasonable with it.
To a first approximation, all farmed animals are bugs
[Question] Cause prio cruxes in 2026?
All very helpful, Guillaume! Thanks for the quick reply.
Thanks for this thoughtful post, Guillaume! I appreciate your work here. Out of curiosity, how plausible do you think it is that some of those physiological markers of stress are region-specific—i.e., increased mucus production just on the left side (if that’s the one that’s injured) or something like that? The point of the flexible self-protection criterion is, in part, to assess whether the response is targeted in a way that suggests some kind of self-representation. Obviously, region-specific mucus production is not good evidence on its own of self-representation, but it’s more interesting than the alternative!
Hi Vasco,
Several quick clarifications.
1. It’s true that we don’t think you can take our methodology and extend it arbitrarily. We grant that it’s very difficult to draw a precise boundary. However, it’s standard to develop a model for a purpose and be wary about its application in a novel context. Very roughly, we take those novel contexts to be ones where the probability of sentience is extremely low. We acknowledge that we don’t have a precise cutoff for “extremely low,” as establishing such a cutoff would be a difficult research project in its own right. There are unavoidable judgment calls in this work.
2. RP has done lots of work on animal sentience. It is not all the Moral Weight Project. It is not all connected and integrated. And some of our earliest MWP ideas are ones we later abandoned. What we stand behind now is really just what we published in the book. It isn’t fair to ask us to tell a single coherent story across disconnected projects with different purposes—or across every stage of the same project—given that different teams worked on these projects and that any team’s understanding evolves over the course of its work.
3. We don’t think that the assumptions of our “mainline welfare ranges” imply anything about the welfare ranges of plants, nematodes, and microorganisms, as the models simply aren’t intended to be used the way you are using them. That’s why we aren’t replying to you about the welfare ranges of plants, nematodes, and microorganisms. We would need to do an independent project to form opinions on your questions. Right now, we don’t have the funding for that project.
4. It’s understandable that you’re skeptical of our specific welfare range estimates. We, of course, are also skeptical of those precise numbers. That’s why we’ve long encouraged people to focus on the order-of-magnitude estimates. We also disagree that they “are bound to be relatively close to 1.” A few orders of magnitude lower than 1 is not close to 1, at least by one reasonable interpretation of “relatively close.” Laura has already discussed this elsewhere.
For what it’s worth, I think you’re approaching the Moral Weight Project as something it is not. You are treating it as a general methodology where we can enter some information about the abilities of a system—whatever that system happens to be—and get out moral weights that we can use in expected value calculations for cause prioritization. But we did not try to produce a maximally general methodology. We tried to produce something useful for updating on the kinds of questions that decision-makers actually face: “Do layer hens matter so much more than carp that, despite the differences in population sizes, you should prioritize layers?” “Can we rule out insects entirely?” “If your job requires you to apply a discount rate to the welfare of some animals relative to others, what kinds of numbers should you consider?” “Is there a good reason for thinking that, even if humans don’t literally have lexical priority over animals, they have that kind of priority for practical purposes?” And so on. I do think that the MWP is useful for shedding some light on these questions for some actors. Beyond that, we should be cautious with the outputs. And mostly, we should try to do better, as we only meant to issue the first word.
Thanks to everyone for the discussion here. A few replies to different strands.
First, I agree with Vasco that transparency matters. However, transparency isn’t the only good—and, unfortunately, it often competes with others. (Time is limited. Optics are complicated. Etc.) So, by Vasco’s own lights, organizations should devote scarce resources to answering this particular cause prioritization question—and then post their answer publicly on the Forum—only if they think (or should think) that the expected value of doing so is positive. It isn’t obvious that anyone in these organizations thinks (or should think) that’s true.
Second, you can use our work on welfare ranges without buying into naive expected utility maximization. I assume that many people who use our welfare ranges are averse to being mugged and, as a result, adopt one of the many strategies for avoiding that outcome. So, it can be true that (a) the impact on some group of animals is very large in expectation and (b) you aren’t rationally required, by your own lights, to care much about that fact (and, by extension, investigate it in depth or engage on it publicly).
Third, our models have a narrow theoretical and pragmatic purpose: we wanted to improve the community’s thinking about cause prioritization regarding a group of animals where we took there to be good evidence of sentience. We don’t think you can take our models and apply them generally, nor do we think you can ignore the specific purpose for which they were developed. Put differently, once some animals have crossed some threshold of plausibility for sentience, we support using our models with trepidation, largely because we don’t have better options. But you shouldn’t apply the model beyond that and, if you have any other principled ways to make decisions, that’s probably better. (Principled: “We think that any theory of change for the smallest animals begins with key victories for larger animals.” Unprincipled: “We don’t like thinking about the smallest animals.”)
Fourth, we disagree with @NickLaing’s characterization of the Moral Weight Project as stacking the deck in favor of high welfare range estimates. There are two reasons why. One of them is that the MWP does not say, “Sum the number of proxies found for a species and divide by the total number of proxies to get the welfare range.” If that were true, then the number of proxies would straightforwardly determine the maximum difference in welfare ranges. But that isn’t correct. We have models (like the cubic model) where you need to have lots of proxies before you have a “highish” welfare range. However, we have lots of models, with uncertainty across them. Predictably, then, more moderate estimates emerge rather than any extreme (whether high or low). Someone is free to say: “A better methodology wouldn’t have been so uncertain about the models; it would have just included animal-unfriendly options.” That’s clearly tendentious, though, and we think we made the right call in including a wider range of theoretical options. That being said, we’ll reiterate that those who are interested in the details of the project should examine the particulars of each model and its conclusions rather than just taking the overall estimates straightforwardly. You can find each model’s results here.
The second reason we disagree with Nick’s characterization of the MWP is that, even if you isolate a particular model, you don’t automatically get high welfare ranges. Suppose, for instance, that there are 80 proxies total and that a model uses them all. If there were N that were as simple as “any pain-averse behavior,” then, for the core models of the MWP, saying “likely yes” to each of them would give you a sentience-conditioned welfare score of 0.875*N/80 on average. We didn’t consider animals as simple as nematodes in the MWP because we didn’t think that the methods were robust for that type of animal. (See above.) But say you think there’s a 0.5% chance of sentience for nematodes. Then, the sentience-conditioned welfare range would have been approximately 0.005*0.875*N/80. If the average model had 5 proxies that are as simple as “any pain-averse behavior” and we gave “likely yes” to nematodes on all five, that would generate a mean welfare range of 0.005*0.875*5/80 = 0.00027. Again, we don’t endorse using the MWP for animals with that small a probability of sentience, but 0.00027 isn’t a particularly high welfare range. (And as we’ve said many times, we’re just talking about hedonic capacity, not “all things considered moral weights,” which don’t assume hedonism. That number would be lower still.)
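For readers who want to check the arithmetic, the calculation above can be reproduced in a few lines. This is only a sketch of the illustrative example in this comment, not the MWP methodology itself; the specific values (80 total proxies, 5 simple proxies, a 0.875 score for “likely yes,” a 0.5% probability of sentience) are the assumed numbers from the example.

```python
# Illustrative back-of-the-envelope check (assumed values from the example above).
p_sentience = 0.005       # assumed 0.5% probability of sentience for nematodes
likely_yes_score = 0.875  # average score assigned to a "likely yes" on a proxy
simple_proxies = 5        # proxies as simple as "any pain-averse behavior"
total_proxies = 80        # total proxies used by the hypothetical model

# Sentience-conditioned score, then discounted by the probability of sentience.
welfare_range = p_sentience * likely_yes_score * simple_proxies / total_proxies
print(round(welfare_range, 5))  # 0.00027
```

As the comment notes, even granting all five “likely yes” answers, the resulting mean welfare range is tiny rather than inflated.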
Should we find funding for a second version of the project, we’re likely to take a different approach to aggregating the proxies to produce welfare ranges, aggregating welfare ranges across models, and communicating the results. Still, we hope the first version of MWP contributes to more informed and systematic thinking about how to prioritize among different interventions.
Hi Nick. Thanks for the kind words about the MWP. We agree that it would be great to have other people tackling this problem from different angles, including ones that are unfriendly to animals. We’ve always said that our work was meant to be a first pass, not the final word. A diversity of perspectives would be valuable here.
For what it’s worth, we have lots of thoughts about how to extend, refine, and reimagine the MWP. We lay out several of them here. In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model for the MWP, which uses a Bayesian approach. Funding is, and long has been, the bottleneck—which explains why there haven’t been many public updates about the MWP since we finished it (apart from the book, which refines the methodology in notable ways). But if people are interested in supporting these or related projects, we’d be very glad to work on them.
I’ll just add: I’ve long thought that one important criticism of the MWP is that it’s badly named. We don’t actually give “moral weights,” at least if that phrase is understood as “all things considered assessments of the importance of benefiting some animals relative to others” (whether human or nonhuman). Instead, we give estimates of the differences in the possible intensities of valenced states across species—which only double as moral weights given lots of contentious assumptions.
All things considered assessments may be possible. But if we want them, we need to grapple with a huge number of uncertainties, including uncertainties over theories of welfare, operationalizations of theories of welfare, approaches to handling data gaps, normative theories, and much else besides. The full project is enormous and, in my view, is only feasible if tackled collaboratively. So, while I understand the call for independent teams, I’d much prefer a consortium of researchers trying to make progress together.
Strongly agree about “the evidential situation with respect to comparing the individual welfare per animal-year”! I’ve always taken the numbers from the MWP much less seriously than others. I see that work as one part of a large picture, depending heavily on other arguments.
And thank you for voting for Arthropoda!