Bob Fischer

I’m a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals.
Thanks for all this, Hamish. For what it’s worth, I don’t think we did a great job communicating the results of the Moral Weight Project.
As you rightly observe, welfare ranges aren’t moral weights without some key philosophical assumptions. Although we did discuss the significance of those assumptions in independent posts, we could have done a much better job explaining how those assumptions should affect the interpretation of our point estimates.
Speaking of the point estimates, I regret leading with them: as we said, they’re really just placeholders in the face of deep uncertainty. We should have led with our actual conclusions, the basics of which are that the relevant vertebrates are probably within an OOM of humans, and that shrimps and the relevant adult insects are probably within two OOMs of the vertebrates. My guess is that you and I disagree less than you might think about the range of reasonable moral weights across species, even if the centers of my probability masses are higher than yours.
I agree that our methodology is complex and hard to understand. But it would be surprising if there were a simple, easy-to-understand way to estimate the possible differences in the intensities of valenced states across species. Likewise, I agree that “there are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence.” But there are also tons of assumptions and biases that go into our intuitive assessments of the relative moral importance of various kinds of nonhuman animals. So, a lot comes down to how much stock you put in your intuitions. As you might guess, I think we have lots of reasons not to trust them once we take on key moral assumptions like utilitarianism. So, I take much of the value of the Moral Weight Project to be in the mere fact that it tries to reach moral weights from first principles.
It’s time to do some serious surveying to get a better sense of the community’s moral weights. I also think there’s a bunch of good work to do on the significance of philosophical / moral uncertainty here. If anyone wants to support this work, please let me know!
Hi Sabs. We can discuss this a bit in a comment thread, but the issues here are complicated. If you’d like to have a conversation, I’m happy to chat. Please DM me for a link to my calendar.
Brief replies to your questions:
I think you matter an enormous amount too. I am not saying this facetiously. It’s probably the thing I believe most deeply.
I don’t know how much the median EA thinks you matter.
I’m unsure about all four assumptions. However, I’m also unsure about their practical importance. You might not be comfortable with the results of any cross-species cost-effectiveness analysis.
If it’s you or a hundred chickens, I’d save you. I’d also save my children over a hundred (human) strangers. I don’t think this means that my children realize more welfare than those strangers. Likewise, I don’t think you realize 100x more welfare than a chicken can.
Thanks for your discussion of the Moral Weight Project’s methodology, Carl. (And to everyone else for the useful back-and-forth!) We have some thoughts about this important issue and we’re keen to write more about it. Perhaps 2024 will provide the opportunity!
For now, we’ll just make one brief point, which is that it’s important to separate two questions. The first concerns the relevance of the two envelopes problem to the Moral Weight Project. The second concerns alternative ways of generating moral weights. We considered the two envelopes problem at some length when we were working on the Moral Weight Project and concluded that our approach was still worth developing. We’d be glad to revisit this and appreciate the challenge to the methodology.
However, even if it turns out that the methodology has issues, it’s an open question how best to proceed. We grant the possibility that, as you suggest, more neurons = more compute = the possibility of more intense pleasures and pains. But it’s also possible that more neurons = more intelligence = less biological need for intense pleasures and pains, as other cognitive abilities can provide the relevant fitness benefits, effectively muting the intensities of those states. Or perhaps there’s some very low threshold of cognitive complexity for sentience after which point all variation in behavior is due to non-hedonic capacities. Or perhaps cardinal interpersonal utility comparisons are impossible. And so on. In short, while it’s true that there are hypotheses on which elephants have massively more intense pains than fruit flies, there are also hypotheses on which the opposite is true and on which equality is (more or less) true. Once we account for all these hypotheses, it may still work out that elephants and fruit flies differ by a few orders of magnitude in expectation, but perhaps not by five or six. Presumably, we should all want some approach, whatever it is, that avoids being mugged by whatever low-probability hypothesis posits the largest difference between humans and other animals.
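The “mugging” worry above can be made concrete with a toy mixture over hypotheses. Everything in the sketch below is invented for illustration only; the hypotheses, probabilities, and intensity ratios are placeholders, not RP’s estimates:

```python
# Hypothetical sketch: how averaging over competing hypotheses affects the
# expected ratio of fruit-fly to elephant pain intensity. All numbers invented.

hypotheses = [
    {"p": 0.2, "ratio": 1e-5},  # more neurons -> more compute -> vastly more intense elephant pain
    {"p": 0.3, "ratio": 0.1},   # moderate difference in favor of elephants
    {"p": 0.3, "ratio": 1.0},   # rough equality past a low sentience threshold
    {"p": 0.2, "ratio": 2.0},   # intelligence substitutes for intensity: flies feel *more*
]

# The low-probability extreme hypothesis barely moves the expectation,
# because the equality and reversal hypotheses dominate the mixture.
expected_ratio = sum(h["p"] * h["ratio"] for h in hypotheses)
print(round(expected_ratio, 4))
```

The point of the toy numbers: as long as equality-style and reversal-style hypotheses get nonzero credence, the expected difference is pulled far away from whatever the most extreme hypothesis posits.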
That said, you’ve raised some significant concerns about methods that aggregate over different relative scales of value. So, we’ll be sure to think more about the degree to which this is a problem for the work we’ve done—and, if it is, how much it would change the bottom line.
I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he’s done to push this conversation forward). I don’t know whether OP should allocate most neartermist funding to AW as I haven’t looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don’t fall off so much that animal work loses to global health work, but I haven’t investigated this myself. The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I’d love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I’d expect animal field building to look pretty good.)
I should also say that OP’s commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it’s true that a straightforward utilitarian analysis would favor spending a lot more on animals, it’s pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn’t include a clear procedure for generating a specific allocation, it’s hard to know what people who are committed to worldview diversification should do by their own lights.
Thanks for all this, Nuno. The upshot of Jason’s post on what’s wrong with the “holistic” approach to moral weight assignments, my post about theories of welfare, and my post about the appropriate response to animal-friendly results is something like this: you should basically ignore your priors re: animals’ welfare ranges as they’re probably (a) not really about welfare ranges, (b) uncalibrated, and (c) objectionably biased.
You can see the posts above for material that’s relevant to (b) and (c), but as evidence for (a), notice that your discussion of your prior isn’t about the possible intensities of chickens’ valenced experiences, but about how much you care about those experiences. I’m not criticizing you personally for this; it happens all the time. In EA, the moral weight of X relative to Y is often understood as an all-things-considered assessment of the relative importance of X relative to Y. I don’t think people hear “relative importance” as “how valuable X is relative to Y conditional on a particular theory of value,” which is still more than we offered, but is in the right ballpark. Instead, they hear it as something like “how valuable X is relative to Y,” “the strength of my moral reasons to prioritize X in real-world situations relative to Y,” and “the strength of my concern for X relative to Y” all rolled into one. But if that’s what your prior’s about, then it isn’t particularly relevant to your prior about welfare-ranges-conditional-on-hedonism specifically.
Finally, note that if you do accept that your priors are vulnerable to these kinds of problems, then you either have to abandon or defend them. Otherwise, you don’t have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.
This is exactly right, Emre. We are not commenting on the average amount of value or disvalue that any particular kind of individual adds to the world. Instead, we’re trying to estimate how much value different kinds of individuals could add to the world. You then need to go do the hard work of assessing individuals’ actual welfare levels to make tradeoffs. But that’s as it should be. There’s already been a lot of work on welfare assessment; there’s been much less work on how to interpret the significance of those welfare assessments in cross-species decision-making. We’re trying to advance the latter conversation.
Admittedly, we weren’t factoring in the (ostensible) ripple effects, but our modeling indicates that if we’re interested in robust goodness, we should be spending on chickens.
Also, for the reasons that @Ariel Simnegar already notes, even if there are unappreciated benefits of investing in GHD, there would need to be a lot of those benefits to justify not spending on animals. Could work out that way, but I’d like to see the evidence. (When I investigated this myself, making the case seemed quite difficult.)
Thanks for your question, Sabs. Short answer: if (a) you think of your value purely in terms of the amount of welfare you can generate, (b) you think about welfare in terms of the intensities of pleasures and pains, (c) you’re fine with treating pleasures and pains symmetrically and aggregating them accordingly, and (d) you ignore indirect effects of benefitting humans vs. nonhumans, then you’re right about the key takeaway. Of course, you might not want to make those assumptions! So it’s really important to separate what should, in my view, be a fairly plausible empirical hypothesis—that the intensities of many animals’ pleasures and pains are pretty similar to the intensities of humans’ pleasures and pains—from all the philosophical assumptions that allow us to move from that fairly plausible empirical hypothesis to a highly controversial philosophical conclusion about how much you matter.
Hi Monica! We hear you about wanting a table with those results. We’ve tried to provide one here for 11 farmed species: https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table
We tend to think that if the goal is to find a single proxy, something like encephalization quotient might be the best bet. It’s imperfect in various ways, but at least it corrects for differences in body size, which means that it doesn’t discount many animals nearly as aggressively as neuron counts do. (While we don’t have EQs for every species of interest, they’re calculable in principle.)
Finally, we’ve also developed some models to generate values that can be plugged into cost-benefit analyses. We’ll post those in January. Hope they’re useful!
Thanks for all the productive discussion, everyone. A few thoughts.
First, the point of this post is to make a case for the conditional, not for contractualism. So, I’m more worried about “contractualism won’t get you AMF” than I am about “contractualism is false.” I assumed that most readers would be skeptical of this particular moral theory. The goal here isn’t to say, “If contractualism, then AMF—so 100% of resources should go to AMF.” Instead, it’s to say, “If contractualism, then AMF—so if you put any credence behind views of this kind at all, then it probably isn’t the case that 100% of resources should go to x-risk.”
Second, on “contractualism won’t get you AMF,” thanks to Michael for making the move I’d have suggested re: relevance. Another option is to think in terms of either nonideal theory or moral uncertainty, depending on your preferences. Instead of asking, “Of all possible actions, which does contractualism favor?” we can ask: “Of the actual options that a philanthropist takes seriously, which does contractualism favor?” It may turn out that, for whatever reason, only high-EV options are in the set of actual options that the philanthropist takes seriously, in which case it doesn’t matter whether a given version of contractualism wouldn’t select all those options to begin with. Then, the question is whether they’re uncertain enough to allow other moral considerations to affect their choice from among the pre-set alternatives.
Finally, on the statistical lives problem for contractualism, I’m mostly inclined to shrug off this issue as bad but not a dealbreaker. This is basically for a meta-theoretic reason. I think of moral theories as attempts to systematize our considered judgments in ways that make them seem principled. Unfortunately, our considered judgments conflict quite deeply. Some people’s response to this is to lean into the process of reflective equilibrium, giving up either principles or judgments in the quest for perfect consistency. My own experience of doing this is that the push for *more* consistency is usually good, whereas the push for *perfect* consistency almost always means that people endorse theories with implications that I find horrifying *that they come to believe are not horrifying,* as they follow from a beautifully consistent theory. I just can’t get myself to believe moral theories that are that revisionary. (I’m reporting here, not arguing.) So, I prefer relying on a range of moral theories, acknowledging the problems with each one, and doing my best to find courses of action that are robustly supported across them. In my view, EAC is based on the compelling thought that we ought to protect the known-to-be-most vulnerable, even at the cost of harm to the group. In light of this, what makes identified lives special is just that we can tell who the vulnerable are. So sure, I feel the force of the thought experiments that people offer to motivate the statistical lives problem; sure, I’m strongly inclined to want to save more lives in those cases. But I’m not so confident as to rule out EAC entirely. So, EAC stays in the toolbox as one more resource for moral deliberation.
Thanks, Joshua! We’ll be posting these fairly rapidly. You can expect most of the work before the end of the month and the rest in early November.
Thanks, Matt. As we say, though, we don’t actually think that bees beat salmon. We think that the vertebrates’ welfare ranges are 0.1 or more of humans’, that the vertebrates themselves are within 2x of one another, and that the invertebrates are within 2 OOMs of the vertebrates. We fully recognize that the models are limited by the available data about specific taxa. We aren’t going to fudge the numbers to get more intuitive results, but we definitely don’t recommend using them uncritically.
I hear—and sometimes share—your skepticism about such human/animal tradeoffs. As we argue in a previous post, utilitarianism is indeed to blame for many of these strange results. Still, it could be the best theory around! I’m genuinely unsure what to think here.
Fantastic questions, Lizka! And these images are great. I need to get much better at (literally) illustrating my thinking. I very much appreciate your taking the time!
Here are some replies:
Replacing an M with an N. This is a great observation. Of course, there may not be many real-life cases with the structure you’re describing. However, one possibility is in animal research. Many people think that you ought to use “simpler” animals over “more complex” animals for research purposes—e.g., you ought to experiment on fruit flies over pigs. Suppose that fruit flies have smaller welfare ranges than pigs and that both have symmetrical welfare ranges. Then, if you’re going to do awful things to one or the other, such that each would be at the bottom of their respective welfare range, it would follow that it’s better to experiment on fruit flies.
Assessing the neutral point. You’re right that this is important. It’s also really hard. However, we’re trying to tackle this problem now. Our strategy is multi-pronged, identifying various lines of evidence that might be relevant. For instance, we’re looking at the Welfare Footprint Data and trying to figure out what it might imply about whether layer hens have net negative lives. We’re looking at when vets recommend euthanasia for dogs and cats and applying those standards to farmed animals. We’re looking at tradeoff thought experiments and some of the survey data they’ve generated. And so on. Early days, but we hope to have something on the Forum about this over the summer.
Symmetry vs. asymmetry. This is another hard problem. In brief, though, we take symmetry to be the default simply because of our uncertainty. Ultimately, it’s a really hard empirical question that requires time we didn’t have. (Anyone want to fund more work on this!?) As we say in the post, though, it’s a relatively minor issue compared to lots of others. Some people probably think that we’re orders of magnitude off in our estimates, whereas symmetry vs. asymmetry will make, at most, a 2x difference to the amount of welfare at stake. That isn’t nothing, but it probably won’t swing the analysis.
The “caged vs. cage-free chicken / carp vs. salmon” examples. This is a great question. We’ve done a lot on this, though none of it’s publicly available yet. Basically, though, you’re correct about the information you’d want. Of course, as your note indicates, we don’t care about natural lifespan; we care about time to slaughter. And while it’s very difficult to know where an animal is in its welfare range, we don’t think it’s in principle inestimable. Basically, if you think that caged hens are living about the worst life a chicken can live, you say that they’re at the bottom end of their welfare range. And if you think cage-free hens have net negative lives, but they’re only about half as badly off as they could be, then you can infer that you’re getting a 50% gain relative to chickens’ negative welfare range in the switch from caged to cage-free. And so on. This is all imperfect, but at least it provides a coherent methodology for making these assessments. Moreover, it’s a methodology that forces us to be explicit about disagreements re: the neutral point and the relative welfare levels of animals in different systems, which I regard as a good thing.
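The bookkeeping described above can be sketched in a few lines. The positions, the 0.33 welfare range, and the time-to-slaughter figure below are all hypothetical placeholders, not RP’s estimates:

```python
# Minimal sketch of welfare-range interpolation for system comparisons.
# Positions run from -1.0 (worst possible life) through 0 (neutral point)
# to +1.0 (best possible life). All numbers are invented placeholders.

def welfare_gain(pos_before: float, pos_after: float,
                 welfare_range: float, years_lived: float) -> float:
    """Welfare gained by moving an animal between positions in its welfare
    range, scaled by the size of that range and time alive."""
    return (pos_after - pos_before) * welfare_range * years_lived

# Caged hens at the bottom of the range (-1.0); cage-free hens net negative
# but only half as badly off (-0.5). Placeholder chicken range of 0.33 and
# ~1.5 years to slaughter for a layer hen.
gain = welfare_gain(-1.0, -0.5, welfare_range=0.33, years_lived=1.5)
print(round(gain, 4))
```

The design choice worth noting: every disputed quantity (neutral point, range size, positions in each system) appears as an explicit parameter, so disagreements show up as different inputs rather than hidden assumptions.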
Thanks for the kind words about the project, Joel! Thanks too for these thoughtful and gracious comments.
1. I hear you re: the quantitative proxy model. I commissioned the research for that one specifically because I thought it would be valuable. However, it was just so difficult to find information. To even begin making the calculations work, we had to semi-arbitrarily fill in a lot of information. Ultimately, we decided that there just wasn’t enough to go on.
2. My question about non-hedonist theories of welfare is always the same: just how much do non-hedonic goods and bads increase humans’ welfare range relative to animals’ welfare ranges? As you know, I think that even if hedonic goods and bads aren’t all of welfare, they’re a lot of it (as we argue here). But suppose you think that non-hedonic goods and bads increase humans’ welfare range 100x over all other animals. In many cost-effectiveness calculations, that would still make corporate campaigns look really good.
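The arithmetic behind that claim can be sketched with a back-of-the-envelope comparison. Every quantity below is invented purely to show the shape of the calculation; none are real cost-effectiveness figures:

```python
# Back-of-the-envelope sketch: even a 100x non-hedonic boost to humans'
# welfare ranges can leave corporate campaigns looking strong.
# All numbers are invented placeholders for illustration.

chicken_welfare_range = 0.33      # placeholder, conditional on hedonism
human_multiplier = 100.0          # suppose non-hedonic goods scale humans 100x
discounted_chicken = chicken_welfare_range / human_multiplier

hens_helped_per_dollar = 10.0          # placeholder campaign effectiveness
humans_helped_per_dollar = 0.0003      # placeholder GHD effectiveness

campaign_value = hens_helped_per_dollar * discounted_chicken
ghd_value = humans_helped_per_dollar * 1.0  # human weight normalized to 1

# With these placeholders, the campaign still wins despite the 100x discount,
# because it helps so many more individuals per dollar.
print(campaign_value > ghd_value)
```

The point is structural rather than empirical: a 100x discount only flips the comparison if the per-dollar numbers-helped ratio is under 100x in the other direction.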
3. I appreciate your saying this. I should acknowledge that I’m not above motivated reasoning either, having spent a lot of the last 12 years working on animal-related issues. In my own defense, I’ve often been an animal-friendly critic of pro-animal arguments, so I think I’m reasonably well-placed to do this work. Still, we all need to be aware of our biases.
4. This is a very interesting result; thanks for sharing it. I’ve heard of others reaching the same conclusion, though I haven’t seen their models. If you’re willing, I’d love to see the calculations. But no pressure at all.
Thanks for this question, Richard. You’re right that I don’t focus on positive affective states in the post, though I think most of the arguments would port over. In any case, since the MWP assumes hedonism, the result that chickens and wireheaded humans can realize the same amount of welfare is still pretty significant. Indeed, even the weaker S-Equality Result is significant if your asymmetry hypothesis is correct, as S-Equality would get you most of the way toward (plain old) Equality.
Separately, and as you might guess, I’m skeptical of the view that humans who are maxed out hedonically are still realizing orders of magnitude less welfare than humans who are flourishing by more conventional standards. I think the intuitions that support that view boil down to humans preferring the human way of life—a preference that doesn’t strike me as having much evidential value. But I suppose that’s a conversation for another time!
Thanks for the idea, Pablo. I’ve added summaries to the sequence page.
Thanks for reading, LGS. As I’ve argued elsewhere, utilitarianism probably leads us to say equally uncomfortable things with more modest welfare range estimates. I’m assuming you wouldn’t be much happier if we’d argued that 10 beehives are worth more than a single human. At some point, though, you have to accept a tradeoff like that if you’re committed to impartial welfare aggregation.
For what it’s worth, and assuming that you do give animals some weight in your deliberations, my guess is that we might often agree about what to do, though disagree about why we ought to do it. I’m not hostile to giving intuitions a fair amount of weight in moral reasoning. I just don’t think that our intuitions tell us anything important about how much other animals can suffer or the heights of their pleasures. If I save humans over beehives, it isn’t because I think bees don’t feel anything—or barely feel anything compared to humans. Instead, it’s because I don’t think small harms always aggregate to outweigh large ones, or because I give some weight to partiality, or because I think death is much worse for humans than for bees, or whatever. There are just so many other places to push back.
Fair enough, Henry. We have limited faith in the models too. But as we said:
The numbers are placeholders.
Our actual views are summarized in the key takeaways and again toward the end (e.g., within an order of magnitude of humans for vertebrates, i.e., 0.1 or above, which certainly does make a practical difference).
This work builds on everything else we’ve done and is not, all on its own, the complete case for relatively animal-friendly welfare range estimates.
Thanks for the kind words, Emre!
Hi Jeff. Thanks for engaging. Three quick notes. (Edit: I see that Peter has made the first already.)
First, and less importantly, our numbers don’t represent the relative value of individuals, but instead the relative possible intensities of valenced states at a single time. If you want the whole animal’s capacity for welfare, you have to adjust for lifespan. When you do that, you’ll end up with lower numbers for animals—though, of course, not OOMs lower.
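That adjustment can be sketched crudely. The 0.33 range and the lifespan figures below are invented placeholders, and simple multiplication is only the most naive version of the adjustment:

```python
# Crude sketch: welfare-range numbers compare possible intensities *at a time*;
# an individual's whole-life capacity for welfare also depends on lifespan.
# All figures are placeholders, not RP's estimates.

def lifetime_capacity(range_at_a_time: float, years_lived: float) -> float:
    """Naive whole-life capacity: per-moment welfare range times years lived."""
    return range_at_a_time * years_lived

human_capacity = lifetime_capacity(1.0, 80.0)   # human range normalized to 1
hen_capacity = lifetime_capacity(0.33, 1.5)     # placeholder range; time to slaughter
print(human_capacity, hen_capacity)
```

As the comment says, this multiplication is deliberately naive; the substantive point in the text is only that lifespan-adjusted numbers for animals come out lower than the per-moment ones.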
Second, I should say that, as people who work on animals go, I’m fairly sympathetic to views that most would regard as animal-unfriendly. I wrote a book criticizing arguments for veganism. I’ve got another forthcoming that defends hierarchicalism. I’ve argued for hybrid views in ethics, where different rules apply to humans and animals. Etc. Still, I think that, conditional on hedonism, it’s hard to get MWs for animals that are super low. It’s easier, though still not easy, on other views of welfare. But if you think that welfare is all that matters, you’re probably going to get pretty animal-friendly numbers. You have to invoke other kinds of reasons to really change the calculus (partiality, rights, whatever).
Third, I’ve been trying to figure out what it would look like to generate MWs for animals that don’t assume welfarism (i.e., the view that welfare is all that matters morally). But then you end up with all the familiar problems of moral uncertainty. I wish I knew how to navigate those, but I don’t. However, I also think it’s sufficiently important to be transparent about human/animal tradeoffs that I should keep trying. So, I’m going to keep mulling it over.