I’m a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Bob Fischer
Thanks so much for this, Jakob. Really great questions. On the application part, let me first quote something I wrote to MSJ below:
I was holding the standard EA interventions fixed, but I agree that, given contractualism, there’s a case to be made for other priorities. Minimally, we’d need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn’t had in mind: namely, minimizing relevant strength-weighted complaints.
That being said, a lot depends here on the factors that influence claim strength. Averting even a relatively low probability of death can trump lots of other possible benefits. And cost matters for claim strength too: all else equal, people have weaker claims to large amounts of our resources than they do to small amounts. So, yes, it could definitely work out that, given contractualism, EA has the wrong priorities even within the global health space, but insofar as some popular interventions are focused on inexpensive ways of saving lives, we’ve got at least a few considerations that strongly support those interventions. Still, we can’t really know unless we run the numbers.
Re: the statistical lives problem for the ex ante view, I have a few things to say—which, to be clear, don’t amount to a direct reply of the form, “Here’s why the view doesn’t face the problem.”

First, every view has horrible problems. When it comes to moral theory, we’re in a “pick your poison” situation. There are certainly some views I’m willing to write off as “clearly false,” but I wouldn’t say that of most versions of contractualism. In general, my approach to applied ethics is to say, “Moral theory is brutally hard and often the best we can do is try to assess whether we end up in roughly the same spot practically regardless of where we start theoretically.”

Second, and in the same spirit, my main goal here is to complement Emma Curran’s work: she’s already defended the same conclusion for the ex post version of the view. So, it’s progress enough to show that, whichever way you go, you get something other than prioritizing x-risk.

Third, the ex ante view doesn’t imply that we should prioritize one identified person over any number of “statistical” people unless all else is equal—and all else often isn’t equal. I grant that there are going to be lots of cases where identified lives trump statistical lives, but for the kinds of reasons I mentioned when thinking about your great application question, we still need to sort out the details re: claim strength.
Really appreciate the very helpful engagement!
This is helpful, Michael. I was holding the standard EA interventions fixed, but I agree that, given contractualism, there’s a case to be made for other priorities. Minimally, we’d need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn’t had in mind: namely, minimizing relevant strength-weighted complaints.
Thanks for this, Michael. You’re right that if people could be kept alive a lot longer (and, perhaps, made to suffer more intensely than they once could as well), this could change the stakes. It will then come down to the probability you assign to a malicious AI’s inflicting this situation on people. If you thought it was likely enough (and I’m unsure what that threshold is), it could just straightforwardly follow that s-risk work beats all else. And perhaps there are folks in the community who think the likelihood is sufficiently high. If so, then what we’ve drafted here certainly shouldn’t sway them away from focusing on s-risk.
Thanks for the question, John. I’m not sure how much weight to put on “similar” in your question. In general, you’d be looking to minimize the greatest strength-weighted complaint that someone might have. Imagine a simple case where all the individuals in two equally-sized populations you might help are at risk of dying, which means that the core content of the complaint would be the same. Then, we just have the strength-weighting to worry about. The three key parts of that (at least for present purposes) would be the probability of harm, your probability of impact, and the magnitude of the impact you can have. So, we multiply through to figure out who has the strongest claim. In a case like this, intervention prioritization looks very similar to what we already do in EA. However, in cases where the core contents of the complaints are different (death vs. quality of life improvements, say), the probabilities might not end up mattering. Or in cases where your action would have high EV but only because you’re aggregating over a very large population where each individual has a very low chance of harm, it could easily work out that, according to EAC, you should accept less EV by benefiting individuals who are exposed to much greater risk of harm. So the core process can sometimes be similar, but with these anti-aggregative (or partially-aggregative) side constraints.
Thanks for this, Aaron. Fair point. A more accurate title would be something like: “If Scanlonian contractualism is true, then between Emma Curran’s work on the ex post version of the view and this post’s focus on the ex ante version, it’s probably true that when we have duties to aid distant strangers, we ought to discharge them by investing in high impact, high confidence interventions like AMF.”
Fair enough re: the view that contractualism is just one part of morality. I suppose that the contractualist has two obvious maneuvers here. One of them is to reject this assumption and take what we owe one another to be all of morality. Another is to say that what we owe one another is sensitive to the rest of morality and, for that reason, it’s appropriate to have what we owe one another trump other moral considerations in our practical deliberations. Either way, if we owe it to the global poor to prioritize their interests, it’s what we ought to do all things considered.
FWIW, given my own uncertainties about normative theory, I care more about the titular conditional (If contractualism, then AMF) than anything else here.
Ah, I see. Yeah, we discuss this explicitly in Section 2. The language in the executive summary is a simplification.
Fair point about it being a broad family of theories, Zach. What’s the claim that you take Scanlonian contractualism not to entail? The bit about not comparing the individual’s claim to aid to the group’s? Or the bit about who you should help?
Nice to hear from you, Michael. No, we don’t provide a theory of moral uncertainty. We have thoughts, but this initial sequence doesn’t include them. Looking forward to your draft whenever it’s ready.
Thanks, Joshua! We’ll be posting these fairly rapidly. You can expect most of the work before the end of the month and the rest in early November.
Thanks for engaging, Jack! As you’d expect, we can’t tackle everything in a single sequence; so, you won’t get our answers to all your questions here. We say a bit more about the philosophical issues associated with going beyond EVM in this supplementary document, but since our main goal is to explore the implications of alternatives to EVM, we’re largely content to motivate those alternatives without arguing for them at length.
Re: GHD work and cluelessness, I hear the worry. We’d like to think about this more ourselves. Here’s hoping we’re able to do some work on it in the future.
Re: not all x-risk being the same, fair point. We largely focus on extinction risk and do try to flag as much in each report.
We don’t, I’m sorry to say. The numbers would be comparable to pigs, but because cows are farmed in such low numbers by comparison, we didn’t prioritize them. I know we need to extend the analysis, given how many people have asked about cattle!
Hi Jeff. Thanks for engaging. Three quick notes. (Edit: I see that Peter has made the first already.)
First, and less importantly, our numbers don’t represent the relative value of individuals, but instead the relative possible intensities of valenced states at a single time. If you want the whole animal’s capacity for welfare, you have to adjust for lifespan. When you do that, you’ll end up with lower numbers for animals—though, of course, not OOMs lower.
Second, I should say that, as people who work on animals go, I’m fairly sympathetic to views that most would regard as animal-unfriendly. I wrote a book criticizing arguments for veganism. I’ve got another forthcoming that defends hierarchicalism. I’ve argued for hybrid views in ethics, where different rules apply to humans and animals. Etc. Still, I think that conditional on hedonism it’s hard to get MWs for animals that are super low. It’s easier, though still not easy, on other views of welfare. But if you think that welfare is all that matters, you’re probably going to get pretty animal-friendly numbers. You have to invoke other kinds of reasons to really change the calculus (partiality, rights, whatever).
Third, I’ve been trying to figure out what it would look like to generate MWs for animals that don’t assume welfarism (i.e., the view that welfare is all that matters morally). But then you end up with all the familiar problems of moral uncertainty. I wish I knew how to navigate those, but I don’t. However, I also think it’s sufficiently important to be transparent about human/animal tradeoffs that I should keep trying. So, I’m going to keep mulling it over.
Thanks a bunch for your question, Matt. I can speak to the philosophical side of this; Laura has some practical comments below. I do think you’re right that—and in fact our team discussed the possibility that—we ought to be treating the welfare range estimates as correlated variables. However, we weren’t totally sure that that’s the best way forward, as it may treat the models with more deference than makes sense.
Here’s the rough thought. We need to distinguish between (a) philosophical theories about the relationship between the proxies and welfare ranges and (b) models that attempt to express the relationship between proxies and welfare range estimates. We assume that there’s some correct theory about the relationship between the proxies and welfare ranges, but while there might be a best model for expressing the relationship between proxies and welfare range estimates, we definitely don’t assume that we’ve found it. In part, this is because of ordinary points about uncertainty. Additionally, it’s because the philosophical theories underdetermine the models: lots of models are compatible with any given philosophical theory; so, we just had to choose representative possibilities. (The 1-point-per-proxy and aggregation-by-addition approaches, for instance, are basically justified by appeal to simplicity and ignorance. But, of course, the philosophical theory behind them is compatible with many other scoring and aggregation methods.) So, there’s a worry that if we set things up the way you’re describing, we’re treating the models as though they were the philosophical theories, whereas it might make more sense not to do that and then make other adjustments for practical purposes in specific decision contexts if we’re worried about this.

Laura’s practical notes on this:
A change like the one you’re suggesting would likely decrease the variance in the estimates of f(), since if you assume the welfare ranges are independent variables, you’d get samples where the undiluted experiences model is dominating the welfare range for, say, shrimp, and the neuron count model is dominating the welfare range for pigs. A quick practical way of dealing with this would be to cut off values of f() below the 2.5th percentile and above the 97.5th percentile.
Or, even better, I suggest sorting the welfare ranges from least to greatest, then using pairs of the ith-indexed welfare ranges for the ith estimate of f(). Since each welfare model is given the same weight, I predict this’ll most accurately match up welfare range values from the same welfare model. (e.g. the first 11% will be neuron count welfare ranges, etc.)
Ultimately, however, given all the uncertainty in whether our models are accurately tracking reality, it might not be advisable to reduce the variance as such.
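For what it’s worth, the two practical suggestions above can be sketched in a few lines. This is only an illustration: the distributions and parameters are hypothetical stand-ins, not our actual welfare range samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for mixture-of-models samples of two species'
# welfare ranges (distributions and parameters are illustrative only).
shrimp = rng.lognormal(mean=-3.0, sigma=1.5, size=10_000)
pig = rng.lognormal(mean=-0.5, sigma=1.0, size=10_000)

# Independent pairing: draws dominated by different welfare models get
# matched up, inflating the variance of the ratio f().
f_indep = shrimp / pig

# Rank-matched pairing: sort both samples and pair the ith with the ith,
# so draws that likely came from the same welfare model line up.
f_sorted = np.sort(shrimp) / np.sort(pig)

# Alternative quick fix: trim f() outside the 2.5th-97.5th percentile band.
lo, hi = np.percentile(f_indep, [2.5, 97.5])
f_trimmed = f_indep[(f_indep >= lo) & (f_indep <= hi)]

assert f_indep.var() > f_sorted.var()  # rank-matching shrinks the variance
```

Whether the variance reduction is desirable is exactly the open question here: rank-matching treats the models with more deference than we may want to give them.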
Short version: strongly agree with you about the importance of shifting the conversation from sentience to welfare ranges, but I think that the issue is basically intractable given hedonism at this juncture, as we have no reason to think that any of the states that could be mental states in AI systems are type identical to any of the states in biological organisms. It isn’t intractable given other theories of welfare, though, and depending on your views about what moral weights represent, a “moral weight” for AI systems might still be available. However, we’d need a different methodology for that than the one we outline here.
Thanks for all this, Michael. Lots to say here, but I think the key point is that we don’t place much weight on these particular numbers and, as you well know and have capably demonstrated, we could get different numbers (and ordinal rankings) with various small changes to the methodology. The main point to keep in mind (which I say not for your sake, but for others, as I know you realize this) is that we’d probably get even smaller differences between welfare ranges with many of those changes. One of the main reasons we get large differences between humans and many invertebrates is because of the sheer number of proxies and the focus on cognitive proxies. There’s an argument to be given for that move, but it doesn’t matter here. The point is just that if we were to focus on the hedonic proxies you mention, there would be smaller differences—and it would be more plausible that those would be narrowed further by further research.
If I had more time, I would love to build even more models to aggregate various sets of proxies. But only so many hours in the day!
Great question, Michael. It’s probably fine to use the silkworm estimates for this purpose.
Let me renew my offer to talk. DM me for my Calendly link.
Sorry for the slow reply, Vasco. Here are the means you requested. My vote is that if people are looking for placeholder moral weights, they should use our 50th-pct numbers, but I don’t have very strong feelings on that. And I know you know this, but I do want to stress for any other readers that these numbers are not “moral weights” as that term is often used in EA. Many EAs want one number per species that captures the overall strength of their moral reason to help members of that species relative to all others, accounting for moral uncertainty and a million other things. We aren’t offering that. The right interpretation of these numbers is given in the main post as well as in our Intro to the MWP.
Thanks for your question, Eli. The contractualist can say that it would be callous, uncaring, indecent, or invoke any number of other virtue theoretic notions to explain why you shouldn’t leave broken glass bottles in the woods. What they can’t say is that, in some situation where (a) there’s a tradeoff between some present person’s weighty interests and the 20-years-from-now young child’s interests and (b) addressing the present person’s weighty interests requires leaving the broken glass bottles, the 20-years-from-now young child could reasonably reject a principle that exposed them to risk instead of the present person. Upshot: they can condemn the action in any realistic scenario.