Meaningfully reducing meat consumption is an unsolved problem: meta-analysis

This post summarizes the main findings of a new meta-analysis from the Humane and Sustainable Food Lab. We analyze the most rigorous randomized controlled trials (RCTs) that aim to reduce consumption of meat and animal products (MAP). We conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing MAP consumption. By contrast, reducing consumption of red and processed meat (RPM) appears to be an easier target. However, if RPM reductions lead to more consumption of other MAP like chicken and fish, this is likely bad for animal welfare and does little to reduce zoonotic outbreak risk or land and water pollution. We also find that many promising approaches await rigorous evaluation.

This post updates a post from a year ago. We first summarize the current paper, and then describe how the project and its findings have evolved.

What is a rigorous RCT?

There is no consensus, either within our field or across fields, about what counts as a valid, informative design, but we operationalize “rigorous RCT” as any study that:

  • Randomly assigns participants to a treatment and control group

  • Measures consumption directly—rather than (or in addition to) attitudes, intentions, or hypothetical choices—at least a single day after treatment begins

  • Has at least 25 subjects in each of the treatment and control groups, or, in the case of cluster-assigned studies (e.g. university classes that either attend a lecture together or do not), at least 10 clusters in total.

Additionally, studies had to aim to reduce overall MAP consumption, rather than (e.g.) encourage people to switch from beef to chicken, and had to be publicly available by December 2023.
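
To make the screening rule concrete, here is a minimal R sketch of how a single coded study record could be checked against these criteria. The field names (randomized, delay_days, n_clusters, and so on) are hypothetical and do not reflect the paper’s actual coding scheme.

```r
# Hypothetical screening helper: returns TRUE if a coded study record
# satisfies the operational "rigorous RCT" criteria listed above.
# All field names are illustrative, not the paper's actual coding scheme.
meets_criteria <- function(study) {
  randomized  <- isTRUE(study$randomized)                       # random assignment to treatment/control
  direct      <- isTRUE(study$measures_consumption) &&
    study$delay_days >= 1                                       # consumption measured >= 1 day after treatment begins
  big_enough  <- if (isTRUE(study$cluster_assigned)) {
    study$n_clusters >= 10                                      # cluster-assigned designs
  } else {
    study$n_treatment >= 25 && study$n_control >= 25            # individually assigned designs
  }
  targets_map <- isTRUE(study$aims_to_reduce_MAP)               # not e.g. a beef-to-chicken swap
  timely      <- study$available_by <= as.Date("2023-12-31")    # publicly available by December 2023
  randomized && direct && big_enough && targets_map && timely
}

meets_criteria(list(randomized = TRUE, measures_consumption = TRUE, delay_days = 7,
                    cluster_assigned = FALSE, n_treatment = 60, n_control = 55,
                    aims_to_reduce_MAP = TRUE, available_by = as.Date("2022-05-01")))
#> [1] TRUE
```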

We found 35 papers, comprising 41 studies and 112 interventions, that met these criteria. 18 of 35 papers have been published since 2020.

The main theoretical approaches:

Broadly speaking, studies used Persuasion, Choice Architecture, Psychology, and a combination of Persuasion and Psychology to try to change eating behavior.

Persuasion studies typically present animal welfare, health, and environmental arguments for reducing MAP consumption. For instance, Jalil et al. (2023) switched out a typical introductory economics lecture for one on the health and environmental reasons to cut back on MAP consumption, and then tracked what students ate at their college’s dining halls. Animal welfare appeals often used materials from advocacy organizations and were typically delivered through videos and pamphlets. Most studies in our dataset are persuasion studies.

Choice architecture studies change aspects of the contexts in which food is selected and consumed to make non-MAP options more appealing or prominent. For example, Andersson and Nelander (2021) randomly varied whether the vegetarian option appeared at the top of a university cafeteria’s billboard menu. Choice architecture approaches are very common in the broader food literature, but only two papers met our inclusion criteria; hypothetical outcomes and/or immediate measurement were common reasons for exclusion.

Psychology studies manipulate the interpersonal, cognitive, or affective factors associated with eating MAP. The most common psychological intervention centers on social norms, seeking to alter the perceived popularity of non-MAP dishes, e.g. two studies by Gregg Sparkman and colleagues. In another study, a university cafeteria put up signs stating that “[i]n a taste test we did at the [name of cafe], 95% of people said that the veggie burger tasted good or very good!” One study told participants that people who eat meat are more likely to endorse social hierarchy and embrace human dominance over nature. Other psychological interventions include response inhibition training, where subjects are trained to avoid responding impulsively to stimuli such as unhealthy food, and implementation intentions, where participants list potential challenges and solutions to changing their own behavior.

Finally, some studies combined persuasive and psychological messages, e.g. putting up a sign about the popularity of veggie burgers alongside a message about their environmental benefits, or pairing reasons to cut back on MAP consumption with an opportunity to pledge to do so.

Results: consistently small effects

We convert all reported results to standardized mean differences (SMDs) and meta-analyze them using the robumeta package in R. An SMD of 1 indicates an average change equal to one standard deviation of the outcome.
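
For readers who want the mechanics, the sketch below shows the textbook conversion from two-arm summary statistics to an SMD (Hedges’ g) and its approximate sampling variance. It is illustrative only; the paper’s actual conversions handle many more reporting formats.

```r
# Standardized mean difference (Hedges' g) from two-arm summary data.
# m1, sd1, n1: treatment mean, SD, sample size; m0, sd0, n0: control.
smd_from_summaries <- function(m1, sd1, n1, m0, sd0, n0) {
  sp <- sqrt(((n1 - 1) * sd1^2 + (n0 - 1) * sd0^2) / (n1 + n0 - 2))  # pooled SD
  d  <- (m1 - m0) / sp                                               # Cohen's d
  J  <- 1 - 3 / (4 * (n1 + n0) - 9)                                  # small-sample correction
  g  <- J * d                                                        # Hedges' g
  v  <- (n1 + n0) / (n1 * n0) + g^2 / (2 * (n1 + n0))                # approximate sampling variance
  c(g = g, var = v)
}
```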

Our overall pooled estimate is SMD = 0.07 (95% CI: [0.02, 0.12]). Table 1 displays effect sizes separated by theoretical approach and by type of persuasion.
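
For those curious, here is a minimal sketch of the kind of robumeta call that produces a pooled estimate like this, assuming a data frame dat with illustrative columns g (effect size), var_g (its sampling variance), and paper_id (the paper each estimate comes from); it is not our exact analysis script.

```r
library(robumeta)

# Intercept-only robust variance estimation model: pools all effect sizes
# while accounting for multiple, correlated estimates within the same paper.
overall <- robu(formula = g ~ 1,
                data = dat,
                studynum = paper_id,    # clustering unit for dependent effect sizes
                var.eff.size = var_g,   # sampling variance of each effect size
                modelweights = "CORR",  # correlated-effects weights
                small = TRUE)           # small-sample corrections
print(overall)
```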

Most of these effect sizes and upper confidence bounds are quite small. The largest effect size, which is associated with choice architecture, comes from too few studies to say anything meaningful about the approach in general.

Table 2 presents results associated with different study characteristics. Note that these meta-regression estimates are not causal estimates of the effect of a study characteristic because characteristics were not randomly assigned.
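
Mechanically, these comparisons come from adding study characteristics as moderators to the model formula. A hedged sketch of such a meta-regression, reusing the illustrative column names from above plus a hypothetical target_rpm indicator:

```r
# Meta-regression on a study characteristic (here, a hypothetical indicator
# for whether the study targeted RPM rather than all MAP). Coefficients
# describe associations across studies, not causal effects of the characteristic.
by_target <- robu(formula = g ~ target_rpm,
                  data = dat,
                  studynum = paper_id,
                  var.eff.size = var_g,
                  modelweights = "CORR",
                  small = TRUE)
print(by_target)
```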

Probably the most striking result here is the comparatively large effect size associated with studies aimed at reducing RPM consumption (SMD = 0.25, 95% CI: [0.11, 0.38]). We speculate that reducing RPM consumption is generally perceived as easier and more socially normative than cutting back on all categories of MAP. (It’s not hard to find experts in newspapers saying things like: “Who needs steak when there’s bacon and fried chicken?”)

Likewise, when we integrate a supplementary dataset of 22 marginal studies (comprising 35 point estimates) that almost met our inclusion criteria, we find a considerably larger pooled effect: SMD = 0.2 (95% CI: [0.09, 0.31]). Unfortunately, this suggests that increased rigor is associated with smaller effect sizes in this literature, and that prior literature reviews that pooled a wider variety of designs and measurement strategies may have produced inflated estimates.

Where do we go from here?

When we talk to EAs, we find that they generally accept the idea that behavioral change, particularly around something as ingrained as meat, is a hard problem. But if you read the food literature in general, you might get a different impression: of consumers who are easily influenced by local cues and whose behaviors are highly malleable. For instance, studies that set the default meal choice to be vegetarian at university events sometimes find large effects. But what happens at the next meal, or the day after? Do people eat more meat to compensate? For the most part, we don’t know, although it is definitely possible to measure delayed effects.

Likewise, we encourage researchers to think clearly about the difference between reducing all MAP consumption and reducing just some particular category of it. RPM is of special concern for its environmental and health consequences, but if you care about animal welfare, a society-wide switch from beef to chicken is probably a disaster.

On a brighter note, we reviewed a lot of clever, innovative designs that did not meet our inclusion criteria, and we’d love to see these ideas implemented with more rigorous evaluation.

For more, see the paper, our supplement, and our code and data repository.

How has this project changed over time?

Our previous post, describing an earlier stage of this project, reported that environmental and health appeals were the most consistently effective at reducing MAP consumption. However, at that time, we were grouping RPM and MAP studies together. Treating them as separate estimands changed our estimates a lot (and pretty much caused the paper to fall into place conceptually).

Second, we’ve analyzed a lot more literature. In the data section of our code and data repository, you’ll see CSVs that record all the studies we included in our main analysis; our RPM analysis; a robustness check of studies that didn’t quite make it; the 150+ prior reviews we consulted; and the 900+ studies we excluded.

Third, Maya Mathur joined the project, and Seth joined Maya’s lab (more on that journey here). Our statistical analyses, and everything else, improved accordingly.

Happy to answer any questions!