I am a Researcher at Rethink Priorities, working mostly on cross-cause prioritization and worldview investigations. I am passionate about farmed animal welfare, global development, and economic growth/progress studies. Previously, I worked in U.S. budget and tax policy as a policy analyst for the Progressive Policy Institute. I earned a B.S. in Statistics from the University of Chicago, where I volunteered as a co-facilitator for UChicago EA’s Introductory Fellowship.
Laura Duffy
Hi Michael, here are some additional answers to your questions:
1. I roughly calibrated the reasonable risk aversion levels based on my own intuition and a Twitter poll I ran a few months ago: https://x.com/Laura_k_Duffy/status/1696180330997141710?s=20. A significant number of people (about a third of those who are risk averse) would only take the bet to save 1,000 lives vs. 10 for certain if the chance of saving 1,000 was over 5%. I judged this a reasonable cut-off for the moderate risk aversion level.
4. The reason the hen welfare interventions are much better than the shrimp stunning intervention is that shrimp harvest and slaughter don’t last very long. So, the chronic welfare threats that ammonia concentrations and battery cages impose on shrimp and hens, respectively, outweigh the shorter-duration welfare threats of harvest and slaughter.

The number of animals for black soldier flies is low, I agree. We are currently using estimates of current populations, and this estimate is probably much lower than population sizes in the future. We’re only somewhat confident in the shrimp and hen estimates, and pretty uncertain about the others. Thus, I think one should feel very much at liberty to plug in different numbers for population sizes for animals like black soldier flies.
More broadly, I think this result is likely a limitation of models based on total population size, versus models that are based more on the number of animals affected per campaign. Ideally, as we gather more information about these types of interventions, we could assess the cost-effectiveness using better estimates of the number of animals affected per campaign.
Thanks for the thorough questions!
Hi Sylvester, thanks for sharing that post, I hadn’t seen it!
Hey, thanks for this detailed reply!
When I said “practical”, I more meant “simple things that people can do without needing to download and work directly with the code for the welfare ranges.” In this sense, I don’t entirely agree that your solution is the most workable of them (assuming independence probably would be). But I agree—pairwise sampling is the best method if you have the access and ability to manipulate the code! (I also think that the perfect correlation you graphed makes the second suggestion probably worse than just assuming perfect independence, so thanks!)
Hi Kyle,
This is a very interesting post! One quick and very small technical detail: Rethink Priorities’ welfare ranges aren’t capped at 1 for non-human animals. (It just happens that, when we adjusted for probability of sentience, all of the 50th percentile estimates fell below 1.) They instead reflect the difference between the best and worst states a non-human animal can experience, relative to the difference between the best and worst states a human can experience (which is normalized to 1). In theory, this relative difference could be greater than 1 if the range in intensity of experiences available to a non-human animal is wider than that of humans.
In fact, one of our welfare range models (the undiluted experiences model) that feeds into the aggregate estimates tends to produce sentience-adjusted welfare range estimates greater than 1, under the theory that less cognitively complex organisms may not be able to dampen negative experiences by contextualizing them. As such, a few animals (octopuses, pigs, and shrimp) have 95th percentile welfare range estimates above 1. Here are some more details about the models and distributions: https://docs.google.com/document/d/1xUvMKRkEOJQcc6V7VJqcLLGAJ2SsdZno0jTIUb61D8k/edit?usp=sharing

And here is the spreadsheet of results from all models: https://docs.google.com/spreadsheets/d/1SpbrcfmBoC50PTxlizF5HzBIq4p-17m3JduYXZCH2Og/edit?usp=sharing
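As a toy illustration of the normalization (all numbers below are made up for illustration, not RP estimates), the arithmetic looks like this:

```python
# Sketch of how a welfare range is normalized. A species' welfare range is
# (best state - worst state) for that species, divided by the same quantity
# for humans (which is set to 1), so values above 1 are possible.

def welfare_range(best, worst, human_span=1.0):
    """Relative welfare range; exceeds 1 if the span is wider than the human span."""
    return (best - worst) / human_span

def sentience_adjusted(range_, p_sentience):
    """Expected welfare range after adjusting for probability of sentience."""
    return range_ * p_sentience

# Hypothetical undiluted-experiences-style case: a wider-than-human span.
raw = welfare_range(best=0.8, worst=-0.5)          # span of 1.3 > 1
adjusted = sentience_adjusted(raw, p_sentience=0.9)  # 1.17, still above 1
```

Even after the sentience discount, the adjusted value here stays above 1, which is how 95th percentile estimates above 1 can arise.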
Again, this is a really thought-provoking and sobering post, thanks for writing it :)
Oh I see! Thanks for the clarification!
This is a really interesting project and way of approaching the topic!
One thing to note: welfare ranges don’t factor in the lifespans of animals, so we’d also need to factor in the typical time a farmed animal lives and then weight by welfare range to get a moral weight-adjusted sense of per calorie animal impacts.
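The adjustment described above can be sketched in a few lines (every input below is a hypothetical placeholder, not an actual estimate):

```python
# Hedged sketch: converting "animals per calorie" into a moral-weight-adjusted
# measure by multiplying by typical lifespan (years of farmed life per animal)
# and by the species' welfare range. All numbers are illustrative only.

def adjusted_impact(animals_per_kcal, lifespan_years, welfare_range):
    """Welfare-range-weighted animal-years of farmed life per kcal consumed."""
    return animals_per_kcal * lifespan_years * welfare_range

# Hypothetical inputs for two products:
broiler = adjusted_impact(animals_per_kcal=0.0005, lifespan_years=0.12,
                          welfare_range=0.33)
shrimp  = adjusted_impact(animals_per_kcal=0.02, lifespan_years=0.5,
                          welfare_range=0.03)
```

The point is just that the ranking of products can flip once lifespans and welfare ranges enter the product, even if animals-per-calorie alone suggests otherwise.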
But again, approaching this from a per calorie perspective is really interesting!
Hi Henry! While the 90% confidence intervals for the RP welfare ranges are indeed wide, this is because they’re coming from a mixture of several theories/models of welfare. The uncertainty within a given theory/model of welfare is much lower, and you might have more or less credence in any individual model.
Additionally, if we exclude the neuron count model, the welfare ranges from the mixture of all the other models have narrower distributions.
Here’s a document that explains the different theories/models used: https://docs.google.com/document/d/1xUvMKRkEOJQcc6V7VJqcLLGAJ2SsdZno0jTIUb61D8k/edit
And here’s a spreadsheet with all the confidence intervals from each theory/model individually (after adjusting for probability of sentience): https://docs.google.com/spreadsheets/d/1SpbrcfmBoC50PTxlizF5HzBIq4p-17m3JduYXZCH2Og/edit
Thanks so much, Jakub!
Fascinating, I hadn’t thought about that with respect to Congress. One thing I wonder about with ag-gag laws is whether they run afoul of the First Amendment. Do you know if there’s a strong legal case to be made that they’re unconstitutional?
My gut instinct here would be that it’s probably somewhat harder to pass Congressional legislation that both is constitutional and effectively limits corporate campaigns (because it’s private entities choosing what kinds of products to sell). Am I wrong here? (I am really interested in this topic, so I would love to be corrected)
One consideration that Peter Wildeford made me think of is that, for the initiatives that do fall under Congress’ Interstate Commerce Clause authority, we might expect their longevity to be reduced. For example, if every five years a Congressperson puts into the Farm Bill a proposal to ban states from having Prop 12-style regulations, there’s some chance this passes eventually.
Does your research include any initiatives that do fall under Congressional authority?
Hi Vasco, thanks for the comment! I really appreciate it when people dig into the modelling choices :)
EDIT: I just saw the end of your comment. I’m not aware of any research into the intensity of pain across types, and would be keen to hear from others who are.
I think your ordering (r1 < r2 < r3 and r4 >> 500x annoying) would be totally reasonable, and I haven’t read those posts, so thanks for bringing them up! The choice to use the ratios previously used by Šimčikas was rather arbitrary and meant to be consistent with his results. I get why one might expect excruciating pain to be much worse than 500x annoying pain, and I think we do need more research on this to be able to better aggregate the duration and intensity of pain.
This is one reason why I allow users to input their own pain weights in the model, so I definitely encourage you and others to try out alternative weights! (here) (When you enter the weights, 1 is the benchmark for “equivalent to suffering,” and you might want disabling pain to be greater than 1 if you down-weight hurtful pain relative to disabling pain.)
Because of such methodological choices, I am more confident in the results about the animal-years improved (which look pretty good).
One thing to note would be that excruciating pain is rather rare across hens’ lifespans, and Welfare Footprint didn’t find statistically significant differences between the amount of excruciating pain experienced by the average hen in conventional cages, enriched cages, and cage-free aviaries.
From the “Total Time in Pain” tab on the display at the bottom of this Welfare Footprint page, the average time a hen spends in excruciating pain in her life, by cage type, is:
- Conventional: 0.05 (0.03 − 0.07) hours/hen
- Furnished/Enriched: 0.038 (0.018 − 0.058) hours/hen
- Cage-free: 0.04 (0.02 − 0.06) hours/hen
Due to the lack of statistically significant differences in time spent in excruciating pain, it may be that even drastically changing the weights wouldn’t lead to discernible/actionable differences in the results.
However, I think that if we weighted the difference between Hurtful --> Disabling pain (r2) higher than the Annoying --> Hurtful difference we would get meaningfully different results on suffering reduction. (As we would if we chose a different benchmark category for the definition of “suffering”.)
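As a rough sketch of that kind of reweighting (the weights below are hypothetical placeholders; the hours are the Welfare Footprint excruciating-pain figures above, with the other pain categories omitted for brevity):

```python
# Sketch of aggregating time-in-pain by intensity weight. The weights here
# are made up for illustration, with disabling pain as the benchmark of 1.
PAIN_WEIGHTS = {"annoying": 0.01, "hurtful": 0.1, "disabling": 1.0,
                "excruciating": 10.0}

def weighted_suffering(hours_by_category, weights=PAIN_WEIGHTS):
    """Disabling-pain-equivalent hours summed across pain categories."""
    return sum(weights[cat] * hrs for cat, hrs in hours_by_category.items())

# Excruciating-pain hours per hen (other categories omitted):
conventional = weighted_suffering({"excruciating": 0.05})
cage_free    = weighted_suffering({"excruciating": 0.04})
difference   = conventional - cage_free  # small in absolute terms, as noted
```

Plugging in a different weight for, say, the hurtful-to-disabling step changes the totals far more than rescaling excruciating pain does, given how little time hens spend in that category.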
Again, I encourage you and others to try this out—I hope the model is useful and accessible to lots of people. Thanks again for the comment and feedback!
Thanks for the comment, Ben!
And thanks so much to everyone doing direct work to improve animal welfare!
I’ll also note that I think the counterfactual impact period was one of the model decisions I struggled the most with, which is why you can change it in the model here and see how the results change! https://my.causal.app/models/165404?token=6e8998626d0643db9c86482475aecc2c
Hi Michael, thanks for the comments!
I’ll take the second one first: thanks for bringing to my attention the two-envelopes problem. I’ll look more into this, and I’ll revise accordingly!
As for the years of counterfactual impact, I wanted this report to err on the side of being too conservative, because the impact still appears to be pretty large even under this conservative assumption.
A couple of reasons as to why I used four years include:
1. It’s still relatively consistent with other organizations’ assumptions of years of counterfactual impact. Šimčikas 2019 also gives an overview of other cost-effectiveness analyses’ counterfactual impact assumptions, which I quote below:

“Bollard (2016) assumes that cage-free commitments accelerate changes by five years. He also adds: “In my view, the assumption that these campaigns only accelerated pledges by five years is very conservative. It seems equally likely that these companies would never have dropped battery cages, or would have merely transitioned to “enriched” cages. For instance, as recently as March 2015, a coalition backed by McDonald’s, General Mills, and other major food companies issued a report which largely endorsed “enriched” cages as an alternative to cage-free systems.
“ACE uses a subjective 90% confidence interval of 1.6 to 14 years (mean 5.6 years) for all corporate pledges. They explain that “This is the number of years for which we expect these commitments to have an effect for. It is primarily based on counterfactual reasoning—how long before another factor, such as a legislative change or a shift in consumer demand, leads to a similar result.”
“Capriati (2018) estimate does not have a direct equivalent to years of impact expected. Instead, it estimates the number of years THL moves the policy forward by. It assigns the value to this variable based on how important THL’s role was in bringing policies about. By analyzing six randomly selected campaigns, it concludes that on average, THL’s cage-free and broiler campaigns moved policies forward by one year. Note that this assumes that other organisations would have still done corporate campaigns.”
2. Given these estimates, four years seemed like an appropriate lower bound that also aligns with the US political cycle.
3. I think, as you and zdgroff have pointed out, there are probably good reasons as to why the counterfactual impact period would be longer than that of corporate campaigns. I really look forward to reading this research! But I also wanted to maintain a degree of conservatism.
So perhaps one’s takeaway is: “this is a good lower bound on the cost-effectiveness of ballot initiatives and, if designed well, they can still look pretty competitive with corporate campaigns nonetheless.”
Again, thanks for the comment!
Hi Lizka! Thanks for the good summary of the ballot initiatives selection process.
Regarding the second question, I think you’re right it would be hard to estimate the probability of similar initiatives passing in other states, as well as the costs of doing so. Here are a few thoughts:
1. One reason we might be optimistic about the cost-effectiveness of pursuing ballot initiatives in more states is that the campaigns in California, Massachusetts, and Arizona may have done much of the heavy lifting in terms of proving to the public that these initiatives are feasible. Advocates also may have refined their techniques to be more effective, and the publicity they got (Prop 12 especially) may have made people in other states more willing to vote for enhanced welfare requirements.
2. But it also might be harder to pass these initiatives in states other than California and Massachusetts for various reasons (they’re very liberal, for example). Nevertheless, one study from 2014 models which states could pass initiatives similar to California Proposition 2 (which applied to domestic production only). Here’s a summary of their findings from my report (pg. 115):
“One study from 2014 used demographic data to model the vote share that a hypothetical initiative designed like California Proposition 2 would receive in all states. Amongst the states that allow ballot initiatives, Proposition 2 is predicted to gain above 50% of the votes in several of them. Depending on the model, these potential states could include Washington, Nevada, Michigan, Oregon, and Colorado, amongst others (Smithson et al. 2014, pp. 120, 122). Though a few of these states have already passed legislation to implement some farmed animal welfare standards on the state level (Smithson et al. 2014, p. 122), and though the study only estimated the likelihood of passing initiatives that affect domestic animals, it seems plausible that initiatives impacting all goods sold in-state could pass in more states than just California and Massachusetts.
In the end, we really do not yet know if the cost-effectiveness of ballot initiatives–especially ones modeled after California Proposition 12 and Massachusetts Question 3–generalizes to states with political ideologies, wealth, and other demographics that differ from California and Massachusetts (which are themselves outliers).”
In all, I think this is a great question to be asking, and there are some reasons to be cautiously hopeful that ballot initiatives could be successful in states other than those studied, namely California and Massachusetts. In addition, I would suspect there is a lot of room for advocacy in these two states as well with regard to broiler chicken welfare.
To follow up on Bob’s point, the ranges presented here are from a mixture model which combines the results from several models individually. You can see the results for each model here: https://docs.google.com/spreadsheets/d/1SpbrcfmBoC50PTxlizF5HzBIq4p-17m3JduYXZCH2Og/edit?usp=sharing
For example, the 0.005 arises because we are including the neuron count model of welfare ranges in our overall estimates. If you don’t include this model (as there are good reasons not to, see https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral) then the 5th percentile welfare range for pigs of all models combined is 0.20.
The 1.031 comes from a model called the “Undiluted Experiences” model, which suggests that animals with lower cognitive abilities have greater welfare ranges because they are not as able to rationalize their feelings (e.g., pets being anxious when you’re packing for a trip). A somewhat different model is the “Higher-Lower Pleasures” model, built on the idea that higher cognitive capacities mean you can experience more welfare (akin to J.S. Mill’s idea of higher-order pleasures). Under this model, we estimate that the range for pigs is 0.23 to 0.49, which is quite significant given how this model could be seen as having a pro-human bias!
In sum, the welfare ranges presented above reflect our high degree of uncertainty surrounding how to think about measuring welfare. As such, we invite you to take a closer look at each model (you’ll find most of them converge on the overall conclusion that vertebrates are within an order of magnitude of humans in terms of their welfare ranges).
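To see mechanically why excluding one model shifts a low percentile so much, here is a toy mixture-of-models sketch (both component “models” and all numbers are invented placeholders, not RP’s actual distributions):

```python
import random

# Sketch of a mixture over welfare-range models: pool samples drawn from
# each model with equal probability, then read off percentiles. The two
# component models below are hypothetical stand-ins.
random.seed(0)

def sample_mixture(models, n=10_000):
    """Draw n welfare-range samples, choosing a model uniformly per draw."""
    return sorted(random.choice(models)() for _ in range(n))

low_range_model  = lambda: random.uniform(0.001, 0.02)  # e.g. a neuron-count-like model
high_range_model = lambda: random.uniform(0.5, 1.2)     # e.g. an undiluted-experiences-like model

samples = sample_mixture([low_range_model, high_range_model])
p5 = samples[int(0.05 * len(samples))]  # 5th percentile sits in the low-range tail
```

Because the low-range model contributes roughly half the draws, the mixture’s 5th percentile lands deep inside that model’s range; drop it from the mixture and the 5th percentile jumps up, just as dropping the neuron count model moves pigs from 0.005 to 0.20.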
Seconding this question, and wanted to ask more broadly:
A big component/assumption of the example given is that we can “re-run” simulations of the world in which different combinations of actors were present to contribute, but this seems hard in practice. Do you know of any examples where Shapley values have been used in the “real world” and how they’ve tackled this question of how to evaluate counterfactual worlds?
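(For concreteness, here is a minimal sketch of the computation those “re-runs” would feed into; the actors and the value function are entirely made up, and in practice the hard part is exactly estimating `v` for coalitions that never actually occurred:)

```python
from itertools import permutations

# Toy exact Shapley value computation. v maps a coalition (frozenset of
# actors) to the value it produces; each actor's Shapley value is their
# average marginal contribution over all orders in which actors could join.

def shapley_values(actors, v):
    """Average marginal contribution of each actor over all join orders."""
    totals = {a: 0.0 for a in actors}
    orders = list(permutations(actors))
    for order in orders:
        coalition = frozenset()
        for a in order:
            with_a = coalition | {a}
            totals[a] += v(with_a) - v(coalition)
            coalition = with_a
    return {a: t / len(orders) for a, t in totals.items()}

# Hypothetical example: a funder and a charity are each necessary for the
# outcome, so they split the value evenly.
def v(coalition):
    return 10.0 if {"funder", "charity"} <= coalition else 0.0

result = shapley_values(["funder", "charity"], v)  # {'funder': 5.0, 'charity': 5.0}
```

This enumerates every counterfactual coalition explicitly, which is what makes real-world applications hard: each `v(coalition)` call is a claim about a world that may never have existed.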
(Also, great post! I’ve been meaning to learn about Shapley values for a while, and this intuitive example has proven very helpful!)