I understand, thank you!
Jacob_Peacock
Thank you for this resource, Evan! Several pieces I haven’t seen before here.
Thank you for your thoughtful questions, Aaron!
1. I don’t know of any existing funding opportunities, although I’d say more research with non-self-reported dietary outcomes would be worthwhile, and volunteer researchers with the appropriate skills could certainly be involved there. Volunteers with connections at colleges, universities, restaurants, or grocery stores could also be valuable for building collaborations. There may also be as-yet-undiscovered allies in advocating for transparency in the food system, perhaps among groups fighting obesity or generally supporting public health.
2. I did some research on personal food tracking, specifically food diaries where people track their consumption. I think reactivity is the most significant problem: keeping a food diary has in itself been demonstrated to be an effective weight loss strategy (p6 Thompson and Subar 2013). That said, keeping a food diary could be interesting to explore as an intervention in its own right. For measurement, however, there has been less validation work on food diaries, likely because they are so onerous to participants, causing noncompliance and dropout. Using an existing population already tracking their diets would be prone to selection bias, since those participants are likely already health conscious. Recording photos of food, rather than written diaries, is also being explored and may mitigate reactivity by requiring less work from participants, although subsequently analyzing the photos may prove challenging. (“Pledging a meat-free month: An experience sampling study with smartphones” https://researchfund.animalcharityevaluators.org/funded-projects/)
Thank you, Aaron! I think your observation that animal product consumption differs systematically between restaurants, grocery stores and other venues is likely accurate. This study mitigated the problem by selecting for campuses where most of the food purchased can be tracked via the dining services, thus providing a more complete picture of individual diets. Of course, these diets may not be representative of the general population but at least a more complete picture of individual diet reduces selection biases between food venues. That said, we didn’t find many campuses that met those selection criteria, so future field research will likely need to consider the limitation of sampling only a possibly biased portion of diet.
Thank you for writing this, Tom. I’ve split my comments in two, with another on the larger issue of individual v institutional interventions. Here I’ll focus on the particulars of cage-free commitments as a case study.
First, I’m not sure I follow the “Institutions are likely aiming for ‘good enough’” section: if an improvement in animal welfare is profitable, it should presumably happen without any advocacy. But I’m not sure it then follows that “the pressure of public opinion is needed to drive welfare beyond ‘good enough’”.
Second, most corporate cage-free commitments don’t rely on consumer “willingness to pay” for cage-free eggs per se. Instead of asking people to directly choose cage-free over caged eggs, entire restaurant chains and states provide exclusively cage-free eggs, so the choice would have to be made at the restaurant or state level. It’s possible people would then choose not to buy certain products or patronize particular restaurants, but given the low price elasticity of eggs I wouldn’t expect this to be a large effect; it seems even less likely that people in Los Angeles or San Francisco would drive hours to obtain modestly cheaper eggs. I think this interpretation—that corporate campaign wins have been caused by the attitudes, intentions, and behaviors of the public—is likely where we disagree most. I agree there is likely some requisite threshold of public support, but it’s not clear this public support then causes change.
Third, I agree with the concern about holdouts. However, this is where the power of legislation and, unless that holdout is a producer, secondary targeting can come into play.
Fourth, at least where the industry has actually transitioned to cage-free (about 20% currently), I’m less concerned about “institutional backslide” as this would again require large capital investments to reinstall cages.
Fifth, I think we should not take the industry’s claims (like “In response to initially weak demand for cage-free eggs...”) at face value, and there are many possible reasons a particular producer might not remove cages. (The article offers some other explanations, like the wide price spread resulting from the post-avian influenza egg glut. But we should also consider more mundane possibilities, like Rose Acre’s creditors not being willing to lend them more money—it certainly looks better to say there was insufficient consumer demand!) Since ~2015, the increase in the share of cage-free eggs in the US market has been almost linear. So if consumer demand, rather than demand from retailers, restaurants, and first receivers, has been driving conversions to cage-free, it hasn’t been apparent in overall production.
Lastly, I’m not sure which particular outcomes you’re referring to with gestation crate commitments, but the details of the pork industry and the commitments create some dissimilarities with the cage-free commitments.
I don’t think anyone is suggesting shifting “most of the resources” (Jacy only suggests 50% of existing individual resources) and certainly not all resources, so I don’t think the “and” not “or” message is really relevant. Of course, if there are people who hold this view I’d be curious to learn more. I think the question for most is identifying the optimal ratio. Sentience Institute is also quite clear on the evidence underlying that belief and it’s not a lack of a link between advocacy and diet, but the existence of more public support for institutional change, historical precedent and psychological arguments. For what it’s worth I also believe we need some balance between the two, as Harish Sethu wrote about at Labs.
I appreciate your pointing out that many institutional approaches have “even less real-world empirical support.” As you know, The Humane League Labs is actively working to rectify this :) However, one might object that in the history of social change, institutional approaches have significantly more empirical support, at least insofar as few, if any, social movements seem to have succeeded by convincing each individual to their cause. I think there is still work to be done marshalling this evidence, and I’m not convinced the existing evidence is applicable to animal advocacy. But I nonetheless find this evidence (the history of social change) more convincing than not in support of institutional approaches (maybe 60% credence).
I am also concerned that it’s not clear what individual efforts you’re suggesting. At least in the US, attitudes and intentions towards animals seem quite good:
Animal Tracker finds 68% of US adults “Strongly” or “Somewhat support” “the animal protection movement’s goal to minimize and eventually eliminate all forms of animal cruelty and suffering.”
Gallup finds 32% of people agree that “Animals deserve the exact same rights as people to be free from harm and exploitation”, while another 62% agree “Animals deserve some protection from harm and exploitation, but it is still appropriate to use them for the benefit of humans”. Separately, 64% support “passing strict laws concerning the treatment of farm animals”.
Sentience Institute finds 54% of people agree with “I am currently trying to consume fewer animal-based foods (meat, dairy, and/or eggs) and more plant-based foods (fruits, grains, beans, and/or vegetables).” and 49% agree with “I support a ban on the factory farming of animals.”
These numbers could certainly be higher, but overall they seem decent and have been at least somewhat validated by the success of state-level legislation passed at the polls. Of course, the attitude-behavior and citizen-consumer gaps mean attitudes and intentions don’t necessarily translate to other behavior changes, like reduced animal product consumption or choosing cage-free over caged eggs. But to me, this suggests we need to focus on measuring and changing the behaviors we’re interested in, rather than the attitudes and intentions that don’t translate.
More broadly, I agree with a commenter elsewhere that this discussion seems to focus primarily on corporate animal welfare campaigns when there are many other institutional avenues to consider.
(Lastly, at the risk of nitpicking on a very minor note in the intro: social sciences like economics and political science often consider institutions, so I’m not sure social science necessarily inclines one toward the individual.)
The Brooks Animal Law Digest is a good new resource. (Also, I noticed the version of this article on RP’s website suggests leaving a comment, but a comment field is not available there.) Thanks for putting this all together, Saulius!
Hi Jason, thank you for writing this. I appreciate the refreshing reiteration that we do and must make trade-offs between the interests of different species, as well as your careful philosophical treatment. A few thoughts:
An animal’s capacity for welfare is how good or bad its life can go. An animal’s moral status is the degree to which an animal’s experiences or interests matter morally.
While capacity and moral weight are important parameters, I think there also remains significant empirical uncertainty about actual experience as well. Without reducing this uncertainty, estimates of those two values may not be especially useful.
(1) a holistic approach, in which relevant experts employ their normative and biological expertise to make all-things-considered estimates of the appropriate tradeoffs between different lives, experiences, or interests, and (2) an atomistic approach, in which we identify empirical proxies for morally salient features, then let our best scientific understanding of the degree to which different animals possess those features guide our estimates of comparative moral value. The two approaches are not in principle mutually exclusive.
As you indicate, these are, of course, not mutually exclusive. However, I suspect they overlap so much as to not be worth distinguishing, since any reasonable application would apply both approaches. As you suggest, the weightings of the atomistic features would rely on expert judgement, as would estimates of combination effects, which could occur at the species (or even individual) level. For example, Bracke 2019 is the best study I’ve seen comparing a wide array of chicken housing conditions. In the study, a panel of chicken welfare experts was provided a set of “atomistic” attributes (eg, stocking density, temperature, light exposure) about different housing conditions to inform holistic judgments of the relative welfare of each system. While this is not exactly the same task as assessing capacity for welfare and moral status, it seems analogous and illustrative of the need for a hybrid approach.
So I think there is good reason in general to worry that unwanted considerations unduly sway one’s intuitions about the value of nonhuman animals.
I agree, but this might be mitigated by including these as explanatory variables. For example, the impact of speciesism could at least be examined and potentially controlled for by inclusion of the above-cited speciesism scale or the impact of diet patterns by inclusion of a diet screener.
Personally, I think order is probably the right rank at which to investigate the subject.
This seems very unlikely to be the correct rank, in my opinion. First, taxa above genus or family are generally arbitrary in scope. Second, relevant traits would likely be heterogeneous within such a broad group. For example, within the order of bivalves, there are sessile and motile species, and species with more than a dozen compound eyes or with “eyes” that detect only light and dark.
Thanks for the helpful clarifications and responses, Jason. I don’t have anything to add at this point, but look forward to reading more of your work!
Hi Jamie, I’m glad to see this work out and will look forward to reading it in more depth. Congratulations—I’m sure it was hugely labor intensive! In my quick read, I was confused by this point:
Weaknesses of the health behavior literature, despite decades of research and huge amounts of funding, suggest serious limitations of experimental and observational research in other contexts, such as the farmed animal movement.
I think this is too pessimistic and somewhat short-term thinking. Instead, I would explain the weakness of the current health behavior literature by a few factors:
Foremost, I think this is a symptom of the extraordinary difficulty of empirical research. It’s simply hard to do high-quality research and we are still very much actively discovering what it means to do high-quality research.
Decades just aren’t that long a time to spend on a research subject, especially in light of the first point. Many contemporary research questions have been known and unanswered for millennia. For example, we have been studying how to extend human life, largely without success, since ancient times.
Various cultural factors in academia inhibit the conduct of high-quality studies. As a few examples: funders sometimes simply won’t cut a check big enough to fund a single high-quality study, preferring to fund several smaller, lower-quality ones; some subfields have simply accepted low-quality study designs as a fact of life and made only modest efforts to improve them; a publish-or-perish mentality incentivizes producing many small studies on diverse topics, rather than one high-quality study; and highly powered studies are more likely to return a null result, thus damaging publication prospects.
Of course, none of these are easy to surmount, but I don’t see reason to give up on trying to conduct high-quality studies, especially with few alternatives available. Which brings me to my second question:
This makes other types of evidence, such as social movement case studies, relatively more promising.
To my (limited) understanding, case studies are by and large a type of observational research, since they rely on analyzing the observed outcomes of, for example, a social movement, without intervention. It seems like social movement case studies are then limited generally, like most observational research, to understanding correlations and motivating causal theories about those correlations, rather than measuring causation itself. Furthermore, case studies are usually regarded as low-quality evidence and form the base of the evidence pyramid in epidemiology. As such, I’m not sure how the difficulty of collecting high-quality evidence then implies we should collect more of what is usually regarded as low-quality evidence.
This also seems like a rather broad proclamation about the usefulness of experimental and observational studies—have you considered the merits of regression discontinuity designs, instrumental variables estimation, propensity score matching and prospective cohort studies, for example? All of these seem like designs worth considering for EAA research but don’t seem broadly explored either here or in Sentience Institute’s foundational question “EAA RCTs v intuition/speculation/anecdotes v case studies v external findings”.
To clarify, I suspect we have some agreement on (social movement) case studies: I do think they can provide evidence towards causation—literally that one should update their subjective Bayesian beliefs about causation based on social movement case studies. However, at least to my understanding of the current methods, they cannot provide causal identification, thus vastly limiting the magnitude of that update. (In my mind, to probably <10%.)
What I’m struggling to understand fundamentally is your conception of the quality of evidence. If you find the quality of evidence of the health behavior literature low, how does that compare to the quality of evidence of SI’s social movement case studies? One intuition pump might be that the health behavior literature undoubtedly contains scores of cross-sectional studies, which themselves could be construed as each containing hundreds of case studies, and these cross-sectional studies are still regarded as much weaker evidence than the scores of RCTs in the health behavior literature. So where then must a single case study lie?
For what it’s worth, in reflecting on an update which is fundamentally about how to make causal inferences, it seems like being unfamiliar with common tools for causal inference (eg, instrumental variables) warrants updating towards an uninformed prior. I’m not sure whether these tools will restore your confidence, but I’d be interested to hear.
Thank you for your replies, Jamie, I appreciate the discussion. As a last point of clarification, when you say ~40%, does this, for example, mean that if a priori you were uninformed on momentum v complacency and so put 50/50% credence on either possibility, a series of case studies might potentially update you to 90/10%?
When I’m thinking about the value of social movement case studies compared to RCTs, I’m also thinking about their ability to provide evidence on the questions that I think are most important
I don’t disagree—but my point with this intuition pump is the strength of inference a case study, or even series of case studies, might provide on any one of those questions.
Ah, I see—in that case, it makes a lot of sense for you to pursue these case studies. I appreciate the time you invested to get to a double crux here, thanks!
Thank you taking the time to engage, much appreciated! Forgive my responding quickly and feel free to ask for clarification if I miss anything:
Definitely, there could be different results with different documentaries. But ours showed a much stronger effect than the average of similar interventions we found in a previous meta-analysis, suggesting Good for Us is pretty good. It is probably better than Cowspiracy at changing intentions, with longer studies of excerpts of Cowspiracy also finding no effect.
Agree especially with your sub-point. We also tried to recruit populations more likely to be affected in Study 3. Also, see sources in my previous point.
Maybe, but it doesn’t seem likely since there wasn’t a change in the importance of animal welfare or other measures of attitudes. I would generally expect effects to decay over time rather than get stronger; our meta-analysis (weakly) supports this hypothesis in that longer time points showed smaller effects. The usefulness of a 2-3 month time point would mostly depend on attrition, in my opinion.
I would vote other interventions. Classroom education in colleges and universities seems good as does increasing the availability of plant-based options in food service and restaurants.
Yes, we did, and found no meaningful increases in interest in animal activism, including voting intentions. Full questions are available in the supplementary materials.
Thanks for these Peter! (Note that Peter and I both work at Rethink Priorities.)
Do you think your study is sufficiently well powered to detect very small effect sizes on meat consumption?
No, and this is by design as you point out. We did try to recruit a population that may be more predisposed to change in Study 3 and looked at even more predisposed subgroups.
substantially larger than the effects we usually find for animal interventions even on more moveable things
I think we were informed by the results of our meta-analysis, which generally found effects around this size for meat reduction interventions.
Their null result on effect on meat consumption was not at all tightly bounded: −0.3 oz [−6.12 oz to +5.46 oz]
Obviously, this is ultimately subjective, but this corresponds to plus or minus a burger per week, which seems reasonably precise to me. The standardized CI is [−0.17, 0.15], so bounded below a ‘small effect’. And, as David points out, less stringent CIs would look even better. But to be clear, I don’t have a substantive disagreement here—just a matter of interpretation.
For even more power, we could combine studies 1 & 3 in a meta-analysis (doubling the sample size). Study 3 found a treatment effect of −1.72 oz/week (95% CI: [−8.84, 5.41]), so the meta-analytic estimate would probably be very small but still in the correct direction, with tighter bounds of course.
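For the curious, here's a rough sketch of how that combination could work as a fixed-effect, inverse-variance meta-analysis, using the two point estimates and 95% CIs quoted above. (The standard errors are back-derived from the CIs, which is an approximation; this is illustrative, not the analysis we ran.)

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Back out a standard error from a 95% confidence interval."""
    return (hi - lo) / (2 * z)

# Point estimates (oz/week) and SEs derived from the CIs quoted above.
studies = [(-0.30, se_from_ci(-6.12, 5.46)),   # Study 1
           (-1.72, se_from_ci(-8.84, 5.41))]   # Study 3

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled estimate: {pooled:.2f} oz/week, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

As expected, the pooled estimate lands between the two study estimates and its CI is tighter than either study's alone.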
explained just by the fact that you could find effects on the moveable attitudes
Just to clarify, we measured attitudes in all 3 studies. We found an effect on intentions in Study 2 where there wasn’t blinding and follow-up was immediate. Studies 3 & 4 (likely) didn’t find effects on attitudes.
I’d be curious to estimate what effect size would we be looking at if say 3-5% of people stopped eating meat (an optimistic estimate IMO).
Just roughly taking David Reinstein’s number of 80 oz per week (we could use our control group’s mean for a better estimate) and assuming no other changes, 1% abstention would give a 0.8 oz effect size, and 5% a 4 oz effect size. So definitely under-powered at the low end, but potentially closer to detectable at the high end. (And keeping in mind this is at 12-day follow-up; we should expect that 1% to dwindle further at longer follow-up. With figures this low I would be pessimistic about the overall impact. But keep in mind other successful meat reduction interventions don’t seem to have worked mostly through a few individuals totally abstaining!)
corresponds to what a t-test is assessing
I wouldn’t expect issues in testing the difference in means given our sample sizes. But otherwise I’m not sure what you’re suggesting here.
I agree, pricing in impact seems reasonable. But do you think this is currently happening? If so, by what mechanism? I think the discrepancies between Redwood and ACE salaries are much more likely explained by norms at the respective orgs and funding constraints than by some explicit pricing of impact.
I don’t find the case against bivalve sentience that strong, especially for the number of animals potentially involved and the diversity of the 10k bivalve species. (For example, scallops are motile and have hundreds of image-forming eyes—it’d be surprising to me if pain wasn’t useful to such a lifestyle!)
I think there are two additional sources on corporate animal welfare campaigns worth mentioning here; neither covers all the topics you outline in tractability, but I think they do fill in some of the blanks:
Peter Singer’s Ethics Into Action: Henry Spira and the Animal Rights Movement gives a book-length overview of corporate campaigns from one of their key US proponents.
The study Samara Mendez and I wrote, Impact of Corporate Commitments to Source Cage-Free Eggs on Layer Hen Housing, provides an introduction to the academic literature on animal welfare campaigns and a quantitative, causal estimate of the impact of global cage-free campaigns. (Pre-registration is linked; working paper is available by request.)
I’ve long wanted to see a textbook on advocating for animals raised for food. Given the contents of Chapters 1, 4 & 6, I could see this project being transformed into such a book. Currently the academic literature relevant to the many facets of animal advocacy is quite scattered and would benefit from careful synthesis. Some of this synthesis we’re working on at Rethink—it would be great to see it in book form! And I think such a book would be a very useful introductory text.
Very interesting results, and glad to see more comprehensive research quantifying people’s priors on the promise of a broad array of interventions! Will the anonymized raw data be made available for further analysis? I’d especially be interested to see a clustering analysis of where people fall on these issues.