I’m a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Bob Fischer
Let me renew my offer to talk. DM me for my Calendly link.
Sorry for the slow reply, Vasco. Here are the means you requested. My vote is that if people are looking for placeholder moral weights, they should use our 50th-pct numbers, but I don’t have very strong feelings on that. And I know you know this, but I do want to stress for any other readers that these numbers are not “moral weights” as that term is often used in EA. Many EAs want one number per species that captures the overall strength of their moral reason to help members of that species relative to all others, accounting for moral uncertainty and a million other things. We aren’t offering that. The right interpretation of these numbers is given in the main post as well as in our Intro to the MWP.
Thanks, Vasco!
Short version: I want to discourage people from using these numbers in any context where that level of precision might be relevant. That is, if the sign of someone’s analysis turns on three significant digits, then I doubt that their analysis is action-relevant.
As for medians rather than means, our main concern there was just that means tend to be skewed toward extremes. But we can generate the means if it’s important!
Finally, I should stress that I’m seeing people use these “moral weights” roughly as follows: “100 chickens = ~33 humans (100 × 0.332 ≈ 33).” This is not the way they’re intended to be used. Minimally, they should be adjusted by lifespan and average welfare levels, as they are estimates of welfare ranges rather than all-things-considered estimates of the strength of our moral reasons to benefit members of one species rather than another.
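To make the contrast concrete, here is a rough sketch of the kind of adjustment I have in mind. This is not our method, and every input below (the six-week broiler lifespan, the welfare-level fractions) is a made-up placeholder for illustration only:

```python
# Hypothetical illustration: welfare-range estimates should at least be
# scaled by time lived and average welfare level before being used to
# compare species. All numbers below are invented for illustration.

def welfare_at_stake(welfare_range, years_lived, avg_welfare_fraction):
    """Rough welfare at stake for one individual.

    welfare_range: species' welfare range relative to humans (e.g., 0.332)
    years_lived: time the comparison covers (e.g., time to slaughter)
    avg_welfare_fraction: where in the range the individual sits, in [-1, 1]
    """
    return welfare_range * years_lived * abs(avg_welfare_fraction)

# Naive misuse: treat welfare ranges as exchange rates between head counts.
naive = 100 * 0.332  # ~33 "human-equivalents" for 100 chickens

# Adjusted comparison with hypothetical inputs: a broiler living 6 weeks
# at a strongly negative welfare level vs. a human living 70 years.
chicken = welfare_at_stake(0.332, years_lived=6 / 52, avg_welfare_fraction=-0.8)
human = welfare_at_stake(1.0, years_lived=70, avg_welfare_fraction=0.5)
print(naive, chicken, human)
```

The point of the sketch is only that lifespan and welfare level change the answer by orders of magnitude, not that these particular inputs are right.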
Thanks for all this, Nuno. The upshot of Jason’s post on what’s wrong with the “holistic” approach to moral weight assignments, my post about theories of welfare, and my post about the appropriate response to animal-friendly results is something like this: you should basically ignore your priors re: animals’ welfare ranges as they’re probably (a) not really about welfare ranges, (b) uncalibrated, and (c) objectionably biased.
You can see the posts above for material that’s relevant to (b) and (c), but as evidence for (a), notice that your discussion of your prior isn’t about the possible intensities of chickens’ valenced experiences, but about how much you care about those experiences. I’m not criticizing you personally for this; it happens all the time. In EA, the moral weight of X relative to Y is often understood as an all-things-considered assessment of the relative importance of X relative to Y. I don’t think people hear “relative importance” as “how valuable X is relative to Y conditional on a particular theory of value,” which is still more than we offered, but is in the right ballpark. Instead, they hear it as something like “how valuable X is relative to Y,” “the strength of my moral reasons to prioritize X in real-world situations relative to Y,” and “the strength of my concern for X relative to Y” all rolled into one. But if that’s what your prior’s about, then it isn’t particularly relevant to your prior about welfare-ranges-conditional-on-hedonism specifically.
Finally, note that if you do accept that your priors are vulnerable to these kinds of problems, then you either have to abandon or defend them. Otherwise, you don’t have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.
Great question, Lucas. My hunch is that all the broad conclusions probably apply, though I’d want to think through the details more carefully before standing behind that claim. I suppose one thing that really affects my thinking is whether the organism has to navigate its environment in search of resources. My impression is that the youngest shrimp aren’t doing this; they’re just being carried along like plankton. So, that lowers my estimation of their capacities to something more like grubs than juvenile crickets. But of course I haven’t investigated this at all, so please don’t put too much weight on that hot take!
Happy to discuss if it would be helpful; feel free to DM me.
Hi LGS. A few quick points:
You don’t know what my intuitions about bees were before we began, nor what they are now. FWIW, I came into this project basically inclined to think of insects as little robots. Reading about them changed what I think I should say. However, my intuitions probably haven’t shifted that much. But as we’ve seen, I place less weight on my intuitions here than you do.
You’re ignoring what we say in the post: our actual views, which are informed by the models but not directly determined by them, are that the verts are within one OOM of humans and inverts are within 2 OOMs of the verts. The specific values are, as we indicate, just placeholders.
We tried to develop a methodology that makes our estimates depend on the state of empirical knowledge. I’ll be the first to point out its limitations. If we’re listing criticisms, I’m worried about things like generalizing within taxonomic categories, the difficulty of scoring individual proxies, and the problem of handling missing data—not “hiding our intuitions behind a complex model.”
I want to do better going forward. This is the first step in an iterative process. If you have concrete suggestions about how to improve the methodology, please let me know.
Thanks for reading, LGS. As I’ve argued elsewhere, utilitarianism probably leads us to say equally uncomfortable things with more modest welfare range estimates. I’m assuming you wouldn’t be much happier if we’d argued that 10 beehives are worth more than a single human. At some point, though, you have to accept a tradeoff like that if you’re committed to impartial welfare aggregation.
For what it’s worth, and assuming that you do give animals some weight in your deliberations, my guess is that we might often agree about what to do, though disagree about why we ought to do it. I’m not hostile to giving intuitions a fair amount of weight in moral reasoning. I just don’t think that our intuitions tell us anything important about how much other animals can suffer or the heights of their pleasures. If I save humans over beehives, it isn’t because I think bees don’t feel anything—or barely feel anything compared to humans. Instead, it’s because I don’t think small harms always aggregate to outweigh large ones, or because I give some weight to partiality, or because I think death is much worse for humans than for bees, or whatever. There are just so many other places to push back.
Appreciate the comment!
Re: further research priorities, there are “within paradigm” priorities and “beyond paradigm” priorities. As for the former, I think the most useful thing would be a more thorough investigation of theories of valence, as I think we could significantly improve the list of proxies and our scoring / aggregation methods if we had a better sense of which theories are most promising. As for the latter, my guess is that the most useful thing would be figuring out whether, given hierarchicalism, there are any limits at all on discounting animal welfare simply because it belongs to animals. My guess is “No,” which is one of the problems with hierarchicalism, but it would be good to think this through more carefully.
Re: some animals having larger welfare ranges than humans, we don’t want to rule out this possibility, but we don’t actually believe it. And it’s worth stressing, as we stress here, that this possibility doesn’t have any radical implications on its own. It’s when you combine it with other moral assumptions that you get those radical implications.
Seems right, Larks. But I don’t set things up this way in the post—or didn’t mean to, anyway. I grant that he can have all the non-hedonic goods while being tortured for exactly the reason you mention. But then I still want to say: those non-hedonic goods don’t make him net positive.
FWIW, I’ve given this thought experiment to hard-core objective list theorists and they just bite the bullet, insisting that his life is well worth living even while being tortured. Clearly, then, we aren’t going to get agreement based on this thought experiment alone. However, I can’t help but think that they’re confusing meaningfulness with prudential goodness. I concede that a life could be meaningful in the face of torture—or even precisely because of it in some circumstances. But many meaningful lives are bad for the people who live them, which is partly why they’re heroic for continuing them.
Anyway, hard issues!
Thanks, Matt. As we say, though, we don’t actually think that bees beat salmon. We think that the vertebrates are 0.1 or better of humans, that the vertebrates themselves are within 2x of one another, and that the invertebrates are within 2 OOMs of the vertebrates. We fully recognize that the models are limited by the available data about specific taxa. We aren’t going to fudge the numbers to get more intuitive results, but we definitely don’t recommend using them uncritically.
I hear—and sometimes share—your skepticism about such human/animal tradeoffs. As we argue in a previous post, utilitarianism is indeed to blame for many of these strange results. Still, it could be the best theory around! I’m genuinely unsure what to think here.
Thanks for the kind words, Aaron!
Sorry for the delay, MHR! It took a bit to get to the bottom of this. In any case, the short version is that the 8-13M neuron count for both salmon and carp should be read as the lowest reasonable estimate, not our best guess. We got the number from the zebrafish literature—specifically, a study by Hinsch & Zupanc (2007) (cited in the table), which reported that the total number of brain cells for adult zebrafish varied between 8 and 13 million. In the notes associated with the Welfare Range Table, we had a caveat that neuron counts are very hard to come by in fish and, in any case, only represent a snapshot in time, because the teleost brain is constantly growing. Moreover, no one has done total neuron count estimates for salmon or carp, whereas zebrafish are often used as a model species and are well-studied; so, we simply used those values as a placeholder. Granted, then, the 8-13M number may well be an underestimate due to the size differences between zebrafish and salmon, and we do see the appeal of using Invincible Wellbeing’s curve fits to come up with a higher number. However, we tried to stick as close to the empirical literature as possible. And truth be told, because neuron counts are just one of several models we include, using a higher number wouldn’t make a major difference to our welfare range estimates for salmon or carp.
The upshot is that this is one of many cases where our methodology is more conservative than many EAs have been when doing related projects (e.g., we were more inclined to default to “unknown,” we used lower-bound placeholder values in some cases, etc.). Advantages and disadvantages!
Thanks for your comment, Ariel, and sorry for the slow reply! What you’ve described sounds great as far as it goes. However, my basic view here—which I offer with sincere appreciation for the project you’re describing and a genuine desire to see it completed—is that the uncertainties are so far-reaching that, while we can get clearer about the conditions under which, say, a negative utilitarian will condemn bivalve consumption, we basically have no idea which condition we’re in. So, I think that the most valuable thing right now would be to write up specific empirical research questions and value-aligned ways of operationalizing the key concepts. Then, we should be hunting for graduate students and early-career researchers who might be willing to do the empirical work in exchange for relatively small amounts of funding. (Many academics are cheap dates.) From my perspective, EA has gone just about as far as it can already on these kinds of questions without more substantive collaborations with entomologists, aquatic biologists, ecologists, and so on.
All that said, I’ll stress that I completely agree with you about the importance of getting answers here! I just think we’re at the point where we can’t make much more progress toward them from the armchair.
Appreciate the support!
Fantastic questions, Lizka! And these images are great. I need to get much better at (literally) illustrating my thinking. I very much appreciate your taking the time!
Here are some replies:
Replacing an M with an N. This is a great observation. Of course, there may not be many real-life cases with the structure you’re describing. However, one possibility is in animal research. Many people think that you ought to use “simpler” animals over “more complex” animals for research purposes—e.g., you ought to experiment on fruit flies over pigs. Suppose that fruit flies have smaller welfare ranges than pigs and that both have symmetrical welfare ranges. Then, if you’re going to do awful things to one or the other, such that each would be at the bottom of its respective welfare range, it would follow that it’s better to experiment on fruit flies.
Assessing the neutral point. You’re right that this is important. It’s also really hard. However, we’re trying to tackle this problem now. Our strategy is multi-pronged, identifying various lines of evidence that might be relevant. For instance, we’re looking at the Welfare Footprint Data and trying to figure out what it might imply about whether layer hens have net negative lives. We’re looking at when vets recommend euthanasia for dogs and cats and applying those standards to farmed animals. We’re looking at tradeoff thought experiments and some of the survey data they’ve generated. And so on. Early days, but we hope to have something on the Forum about this over the summer.
Symmetry vs. asymmetry. This is another hard problem. In brief, though, we take symmetry to be the default simply because of our uncertainty. Ultimately, it’s a really hard empirical question that requires time we didn’t have. (Anyone want to fund more work on this!?) As we say in the post, though, it’s a relatively minor issue compared to lots of others. Some people probably think that we’re orders of magnitude off in our estimates, whereas symmetry vs. asymmetry will make, at most, a 2x difference to the amount of welfare at stake. That isn’t nothing, but it probably won’t swing the analysis.
The “caged vs. cage-free chicken / carp vs. salmon” examples. This is a great question. We’ve done a lot on this, though none of it’s publicly available yet. Basically, though, you’re correct about the information you’d want. Of course, as your note indicates, we don’t care about natural lifespan; we care about time to slaughter. And while it’s very difficult to know where an animal is in its welfare range, we don’t think it’s in principle inestimable. Basically, if you think that caged hens are living about the worst life a chicken can live, you say that they’re at the bottom end of their welfare range. And if you think cage-free hens have net negative lives, but they’re only about half as badly off as they could be, then you can infer that you’re getting a 50% gain relative to chickens’ negative welfare range in the switch from caged to cage-free. And so on. This is all imperfect, but at least it provides a coherent methodology for making these assessments. Moreover, it’s a methodology that forces us to be explicit about disagreements re: the neutral point and the relative welfare levels of animals in different systems, which I regard as a good thing.
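That inference can be sketched numerically. This is only an illustration of the arithmetic, assuming a symmetrical welfare range and using the chicken placeholder value (0.332); the positions for caged and cage-free hens are invented for the example:

```python
# Minimal sketch of the caged vs. cage-free assessment described above.
# Positions are fractions of a symmetrical welfare range: -1 is the worst
# possible chicken life, 0 is the neutral point, +1 is the best. The
# specific positions (-1.0 caged, -0.5 cage-free) are assumptions.

def welfare_gain(pos_before, pos_after, welfare_range=0.332):
    """Per-individual welfare gain from moving within the welfare range,
    scaled to human-relative units by welfare_range."""
    return (pos_after - pos_before) * welfare_range

# Caged hens at the bottom of the range; cage-free hens half as badly off:
gain = welfare_gain(pos_before=-1.0, pos_after=-0.5)

# The switch recovers 50% of the negative welfare range, though the hens
# remain net negative (their position is still below the neutral point).
print(gain)
```

The value of writing it out this way is that the contested inputs (neutral point, positions in the range) are explicit rather than buried.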
Sorry about the confusion, mvolz. The table with the models is tricky to navigate. Here’s the one we shared originally, which is clearer. Short answer: yes, we said it was present.
Thanks for your questions, Stan. Travis wrote the piece on axiological asymmetries and he can best respond on that front. FWIW, I’ll just say that I’m not convinced that there’s a difference of an order of magnitude between the best pleasure and the worst pain—or any difference at all—insofar as we’re focused on intensity per se. I’m inclined to think it’s just really hard to say and so I take symmetry as the default position. For all that, I’m open to the possibility that pleasures and pains of the same intensity have different impacts on welfare, perhaps because some sort of desire satisfaction theory of welfare is true, we’re risk-averse creatures, and we more strongly dislike signs of low fitness than the alternative. Point is: there may be other ways of accommodating your intuition than giving up the symmetry assumption.
To your main question, we distinguish the negative and positive portions of the welfare range because we want to sharply distinguish cases where an intervention flips a life from net negative to net positive. Imagine a case where an animal has a symmetrical welfare range and an intervention moves the animal either 60% of their negative welfare range or 60% of their total welfare range. In the former case, they’re still net negative; in the latter case, they’re now net positive. If you’re a totalist, that really matters: the “logic of the larder” argument doesn’t go through even post-intervention in the former case, whereas it does go through in the latter.
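The sign-flip point can be checked with a symmetrical range normalized to [-1, +1], neutral point at 0, and the animal starting at the bottom of the range (all of which is just the setup of the example above):

```python
# Illustrating why 60% of the negative welfare range differs from 60% of
# the total welfare range. Range normalized to [-1, +1], neutral point 0,
# starting position at the bottom of the range.

BOTTOM, TOP = -1.0, 1.0
negative_span = 0.0 - BOTTOM   # the negative portion of the range: 1.0
total_span = TOP - BOTTOM      # the whole range: 2.0

# Intervention A: move 60% of the negative portion of the range.
after_a = BOTTOM + 0.6 * negative_span   # still below the neutral point

# Intervention B: move 60% of the total range.
after_b = BOTTOM + 0.6 * total_span      # now above the neutral point

print(after_a, after_b)
```

Same "60%," opposite signs, which is exactly why the two denominators need to be kept apart for totalist purposes.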
Thanks for the kind words about the project, Joel! Thanks too for these thoughtful and gracious comments.
1. I hear you re: the quantitative proxy model. I commissioned the research for that one specially because I thought it would be valuable. However, it was just so difficult to find information. To even begin making the calculations work, we had to semi-arbitrarily fill in a lot of information. Ultimately, we decided that there just wasn’t enough to go on.
2. My question about non-hedonist theories of welfare is always the same: just how much do non-hedonic goods and bads increase humans’ welfare range relative to animals’ welfare ranges? As you know, I think that even if hedonic goods and bads aren’t all of welfare, they’re a lot of it (as we argue here). But suppose you think that non-hedonic goods and bads increase humans’ welfare range 100x over all other animals. In many cost-effectiveness calculations, that would still make corporate campaigns look really good.
3. I appreciate your saying this. I should acknowledge that I’m not above motivated reasoning either, having spent a lot of the last 12 years working on animal-related issues. In my own defense, I’ve often been an animal-friendly critic of pro-animal arguments, so I think I’m reasonably well-placed to do this work. Still, we all need to be aware of our biases.
4. This is a very interesting result; thanks for sharing it. I’ve heard of others reaching the same conclusion, though I haven’t seen their models. If you’re willing, I’d love to see the calculations. But no pressure at all.
Great question, Tobias. Yes, less research on a species generally reduces our welfare range estimate. I agree with you that it would be better, in some sense, to have our confidence increase in a fixed estimate rather than having the estimates themselves vary. However, we couldn’t see how to do that without invoking either our priors (which we don’t trust) or some other arbitrary starting point (e.g., neuron counts, which we don’t trust either). In any case, that’s why we frame the estimates as placeholders and give our overall judgments separately: vertebrates at 0.1 or better, the vertebrates themselves within 2x of one another, and the invertebrates within 2 OOMs of the vertebrates.
Great question, Michael. It’s probably fine to use the silkworm estimates for this purpose.