Meaningfully reducing meat consumption is an unsolved problem: meta-analysis
This post summarizes the main findings of a new meta-analysis from the Humane and Sustainable Food Lab. We analyze the most rigorous randomized controlled trials (RCTs) that aim to reduce consumption of meat and animal products (MAP). We conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing MAP consumption. By contrast, reducing consumption of red and processed meat (RPM) appears to be an easier target. However, if RPM reductions lead to more consumption of other MAP like chicken and fish, this is likely bad for animal welfare and doesn’t ameliorate zoonotic outbreak risk or land and water pollution. We also find that many promising approaches await rigorous evaluation.
This post updates a post from a year ago. We first summarize the current paper, and then describe how the project and its findings have evolved.
What is a rigorous RCT?
There is no consensus, either within our field or across fields, about what counts as a valid, informative design, but we operationalize “rigorous RCT” as any study that:
Randomly assigns participants to a treatment and control group
Measures consumption directly—rather than (or in addition to) attitudes, intentions, or hypothetical choices—at least a single day after treatment begins
Has at least 25 subjects in both treatment and control, or, in the case of cluster-assigned studies (e.g. university classes that all attend a lecture together or not), at least 10 clusters in total.
Additionally, studies needed to intend to reduce MAP consumption, rather than (e.g.) encouraging people to switch from beef to chicken, and be publicly available by December 2023.
We found 35 papers, comprising 41 studies and 112 interventions, that met these criteria. 18 of 35 papers have been published since 2020.
The main theoretical approaches:
Broadly speaking, studies used Persuasion, Choice Architecture, Psychology, and a combination of Persuasion and Psychology to try to change eating behavior.
Persuasion studies typically provide animal welfare, health, and environmental arguments for reducing MAP consumption. For instance, Jalil et al. (2023) switched out a typical introductory economics lecture for one on the health and environmental reasons to cut back on MAP consumption, and then tracked what students ate at their college’s dining halls. Animal welfare appeals often used materials from advocacy organizations and were often delivered through videos and pamphlets. Most studies in our dataset are persuasion studies.
Choice architecture studies change aspects of the contexts in which food is selected and consumed to make non-MAP options more appealing or prominent. For example, Andersson and Nelander (2021) randomly altered whether the vegetarian option appeared at the top of a university cafeteria’s menu board. Choice architecture approaches are very common in the broader food literature, but only two papers met our inclusion criteria; hypothetical outcomes and/or immediate measurement were common reasons for exclusion.
Psychology studies manipulate the interpersonal, cognitive, or affective factors associated with eating MAP. The most common psychological intervention centers on social norms, seeking to alter the perceived popularity of non-MAP dishes, e.g. two studies by Gregg Sparkman and colleagues. In another study, a university cafeteria put up signs stating that “[i]n a taste test we did at the [name of cafe], 95% of people said that the veggie burger tasted good or very good!” One study told participants that people who eat meat are more likely to endorse social hierarchy and embrace human dominance over nature. Other psychological interventions include response inhibition training, where subjects are trained to avoid responding impulsively to stimuli such as unhealthy food, and implementation intentions, where participants list potential challenges and solutions to changing their own behavior.
Finally, some studies combined persuasive and psychological messages, e.g. putting up a sign about how veggie burgers are popular along with a message about their environmental benefits, or combining reasons to cut back on MAP consumption along with an opportunity to pledge to do so.
Results: consistently small effects
We convert all reported results to a measure of standardized mean differences (SMD) and meta-analyze them using the robumeta package in R. An SMD = 1 indicates an average change equal to one standard deviation.
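For intuition, here is a minimal sketch, in Python rather than the paper’s actual R pipeline and with made-up numbers, of how a standardized mean difference is computed from group summary statistics:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: the difference in group means scaled by the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical numbers: treatment group averages 9.5 meat servings/week
# vs. 10 in control, with SD of 7 servings in both groups (n = 100 each).
d = smd(9.5, 7.0, 100, 10.0, 7.0, 100)
print(round(d, 3))  # prints -0.071: a small negative SMD
```

The point of the standardization is that studies reporting servings, grams, or meals sold can all be pooled on one scale, at the cost of making the number depend on how variable the outcome is in each setting.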
Our overall pooled estimate is SMD = 0.07 (95% CI: [0.02, 0.12]). Table 1 displays effect sizes separated by theoretical approach and by type of persuasion.
Most of these effect sizes and upper confidence bounds are quite small. The largest effect size, which is associated with choice architecture, comes from too few studies to say anything meaningful about the approach in general.
Table 2 presents results associated with different study characteristics. Note that these meta-regression estimates are not causal estimates of the effect of a study characteristic because characteristics were not randomly assigned.
Probably the most striking result here is the comparatively large effect size associated with studies aimed at reducing RPM consumption (SMD = 0.25, 95% CI: [0.11, 0.38]). We speculate that reducing RPM consumption is generally perceived as easier and more socially normative than cutting back on all categories of MAP. (It’s not hard to find experts in newspapers saying things like: “Who needs steak when there’s bacon and fried chicken?”)
Likewise, when we integrate a supplementary dataset of 22 marginal studies, comprising 35 point estimates, that almost met our inclusion criteria, we find a considerably larger pooled effect: SMD = 0.2 (95% CI: [0.09, 0.31]). Unfortunately, this suggests that increased rigor is associated with smaller effect sizes in this literature, and that prior literature reviews which pooled a wider variety of designs and measurement strategies may have produced inflated estimates.
Where do we go from here?
When we talk to EAs, we find that they generally accept the idea that behavioral change, particularly around something as ingrained as meat, is a hard problem. But if you read the food literature in general, you might get a different impression: of consumers who are easily influenced by local cues and whose behaviors are highly malleable. For instance, studies that set the default meal choice to be vegetarian at university events sometimes find large effects. But what happens at the next meal, or the day after? Do people eat more meat to compensate? For the most part, we don’t know, although it is definitely possible to measure delayed effects.
Likewise, we encourage researchers to think clearly about the difference between reducing all MAP consumption and reducing just some particular category of it. RPM is of special concern for its environmental and health consequences, but if you care about animal welfare, a society-wide switch from beef to chicken is probably a disaster.
On a brighter note, we reviewed a lot of clever, innovative designs that did not meet our inclusion criteria, and we’d love to see these ideas implemented with more rigorous evaluation:
Activating moral and/or physical disgust
Watching popular media such as the Simpsons episode “Lisa the Vegetarian” or the movie Babe
For more, see the paper, our supplement, and our code and data repository.
How has this project changed over time?
Our previous post, describing an earlier stage of this project, reported that environmental and health appeals were the most consistently effective at reducing MAP consumption. However, at that time, we were grouping RPM and MAP studies together. Treating them as separate estimands changed our estimates a lot (and pretty much caused the paper to fall into place conceptually).
Second, we’ve analyzed a lot more literature. In the data section of our code and data repository, you’ll see CSVs that record all the studies we included in our main analysis; our RPM analysis; a robustness check of studies that didn’t quite make it; the 150+ prior reviews we consulted; and the 900+ studies we excluded.
Third, Maya Mathur joined the project, and Seth joined Maya’s lab (more on that journey here). Our statistical analyses, and everything else, improved accordingly.
Happy to answer any questions!
Thanks so much for this very helpful post!
I’m a bit confused about your framing of the takeaway. You state that “reducing meat consumption is an unsolved problem” and that “we conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing meat and animal product consumption.” However, the overall pooled effects across the 41 studies show statistical significance w/ a p-value of <1%. Yes, the effect size is small (0.07 SMD) but shouldn’t we conclude from the significance that these interventions do indeed work?
Having a small effect or even a statistically insignificant one isn’t something EAs necessarily care about (e.g. most longtermism interventions don’t have much of an evidence base). It’s whether we can have an expected positive effect that’s sufficiently cheap to achieve. In Ariel’s comment, you point to a study that concludes its interventions are highly cost-effective at ~$14/ton of CO2eq averted. That’s incredible given many offsets cost ~$100/ton or more. So it doesn’t matter if the effect is ‘small’, only that it’s cost-effective.
Can you help EA donors take the necessary next step? It won’t be straightforward and will require additional cost and impact assumptions, but it’ll be super useful if you can estimate the expected cost-effectiveness of different diet-change interventions (in terms of suffering alleviated).
Finally, in addition to separating out red meat from all animal product interventions, I suspect it’ll be just as useful to separate out vegetarian from vegan interventions. It should be much more difficult to achieve persistent effects when you’re asking for a lot more sacrifice. Perhaps we can get additional insights by making this distinction?
Hi Wayne,
Great questions, I’ll try to give them the thoughtful treatment they deserve.
We don’t place much (any?) credence in the statistical significance of the overall result, and I recognize that a lot of work is being done by the word “meaningfully” in “meaningfully reducing.” For us, changes on the order of a few percentage points—especially given relatively small samples and vast heterogeneity of designs and contexts (hence our point about “well-validated”: almost nothing is directly replicated out of sample in our database)—are not the kinds of transformational change that others in this literature have touted. Another way to slice this, if you were looking to evaluate results based on significance, is to look at how many results are, according to their own papers, statistical nulls: 95 out of 112, or about 85%. (On the other hand, many of these studies might be finding small but real effects and just not be sufficiently powered to identify them: if you expect d > 0.4 because you read past optimistic reviews, an effect of d = 0.04 is going to look like a null, even if real changes are happening.) So my basic conclusion is that marginal changes probably are possible, so in that sense, yes, many of these interventions probably “work,” but I wouldn’t call the changes transformative. I think the proliferation of GLP-1 drugs is much more likely to be transformative.
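To make the power point concrete, here is a rough back-of-the-envelope sample-size calculation (a sketch using the standard normal approximation; an exact t-test power analysis would differ slightly) showing why an effect of d = 0.04 is invisible at typical study sizes:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2

# An effect size one might expect from optimistic prior reviews,
# vs. one a tenth that size:
print(round(n_per_group(0.4)))   # 98 per group
print(round(n_per_group(0.04)))  # 9811 per group
```

Since required n scales with 1/d², a study powered for d = 0.4 would need roughly 100 times as many participants to reliably detect d = 0.04, which is far beyond the sample sizes typical of this literature.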
It’s true that cost-effectiveness estimates might still be very good even if the results are small. If there was a way to scale up the Jalil et al. intervention, I’d probably recommend it right away. But I don’t know of any such opportunity. (It requires getting professors to substitute out a normal economics lecture for one focused on meat consumption, and we’d probably want at least a few other schools to do measurement to validate the effect, and my impression from talking to the authors is that measurement was a huge lift). I also think that choice architecture approaches are promising and awaiting a new era of evaluation. My lab is working on some of these; for someone interested in supporting the evaluation side of things, donating to the lab might be a good fit.
This is in the supplement rather than the paper, but one of our depressing results is that rigorous evaluations published by nonprofits, such as The Humane League, Mercy For Animals, and Faunalytics, produce a small backlash on average (see table below). But it’s also my impression that a lot of these groups have changed gears a lot, and are now focusing less on (e.g.) leafletting and direct persuasion efforts and more on corporate campaigns, undercover investigations, and policy work. I don’t know if they have moved this direction specifically because a lot of their prior work was showing null/backlash results, but in general I think this shift is a good idea given the current research landscape.
Pursuant to that, economists working on this sometimes talk about the consumer-citizen gap, where people will support policies that ban practices whose products they’ll happily consume. (People are weird!) For my money, if I were a significant EA donor in this space, I might focus here: message testing ballot initiatives, preparing for lengthy legal battles, etc. But as always with these things, the details matter. If you ban factory farms in California and lead Californians to source more of their meat from (e.g.) Brazil, and thereby cause more of the rainforest to be clearcut—well, that’s not obviously good either.
Almost all interventions in our database targeted meat rather than other animal products (one looked at fish sauce and a couple also measured consumption of eggs and dairy). Also, a lot of studies just say the choice was between a meat dish and a vegetarian dish, and whether that vegetarian dish contained eggs or milk is sometimes omitted. But in general, I’d think of these as “less meat” interventions.
Sorry I can’t offer anything more definitive here about what works and where people should donate. An economist I like says his dad’s first rule of social science research was: “Sometimes it’s this way, and sometimes it’s that way,” and I suppose I hew to that 😃
Thanks for this research! Do you know whether any BOTECs have been done where an intervention can be said to create X vegan-years per dollar? I’ve been considering writing an essay pointing meat eaters to cost-effective charitable offsets for meat consumption. So far, I haven’t found any rigorous estimates online.
(I think farmed animal welfare interventions are likely even more cost-effective and have a higher probability of being net positive. But it seems really difficult to know how to trade off the moral value of chickens taken out of cages / shrimp stunned versus averting some number of years of meat consumption.)
👋 Our pleasure!
To the best of my recollection, the only paper in our dataset that provides a cost-benefit estimation is Jalil et al. (2023).
There’s also a red/processed meat study—Emmons et al. (2005)—that does some cost-effectiveness analyses, but it’s almost 20 years old and its reporting is really sparse: changes to the eating environment “were not reported in detail, precluding more detailed analyses of this intervention.” So I’d stick with Jalil et al. to get a sense of ballpark estimates.
Thanks for sharing! Great work.
Agreed:
I believe the thing that people would be willing to change their behaviour most for is feeling in-group. Eg, when people know that they are expected to do X, and people around them will know if they do not. But that is very hard to implement.
Agreed that it’s hard to implement: much easier to say “vegetarian food is popular at this cafe!” than to convince people that they are expected to eat vegetarian.
See here for a review of the ‘dynamic norms’ part of this literature (studies that tell people that vegetarianism is growing in popularity over time): https://osf.io/preprints/psyarxiv/qfn6y
Thank you so much for this research. Is there a more intuitive way to interpret SMD values? For example, how many standard deviations is an average vegetarian away from the average person in the general population?
Thank you for your kind words!
Putting SMDs into sensible terms is a continual struggle. I don’t think it’ll be easy to put vegetarians and meat eaters on a common scale: if vegetarians are all clustered around zero meat consumption, then the distance between vegetarians and meat eaters is just telling you how much meat the meat-eater group eats, and that changes a lot between populations.
Also, different disciplines have different ideas about what a ‘big’ effect size is. Andrew Gelman writes something I like about this:
But by convention, an SMD of 0.5 is typically just considered a ‘medium’ effect. I tend to agree with Gelman that changing people’s behavior by half a standard deviation on average is huge.
A different approach: here are a few studies, their main findings in plain terms, the SMD that translates to, and whether that’s subjectively considered big or small:

| Study | Main finding | SMD | Subjective size |
|---|---|---|---|
| Jalil et al. (2023) | 5.6% reduction in meat eaten | 0.11 | small (but well-measured, with lasting effects) |
| Camp & Lawrence (2019) | self-reported decrease of 0.28 points on meat items of an FFQ | 0.40 | moderate/large |
| Andersson & Nelander (2021) | 1% decrease in vegetarian meals sold | 0.16 | small (but, again, well measured) |
So, for instance, the absolute change in the third study is a lot smaller than the absolute change in the first but has a bigger SMD because there’s less variation in the dependent variable in that setting.
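The dependence of SMD on outcome variability can be sketched with purely hypothetical numbers:

```python
# Two hypothetical studies with the same absolute effect (0.5 fewer
# meat servings per week) but different outcome variability:
abs_effect = 0.5
high_variance_sd = 5.0  # e.g. noisy self-reported weekly servings
low_variance_sd = 1.0   # e.g. tightly measured cafeteria sales

print(abs_effect / high_variance_sd)  # SMD = 0.1
print(abs_effect / low_variance_sd)   # SMD = 0.5
```

Same absolute change, a five-fold difference in SMD, which is why comparing standardized effects across settings requires some care.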
So anyway this is another hard problem. But in general, nothing here is equating to the kind of radical transformation that animal advocates might hope for.
Do you have a sense of the acceptability rates (i.e. what proportions of the treatment population moderately decreased their meat consumption)? Additionally, how did you account for selection effects (i.e. if a study includes vegetarians, those participants presumably wouldn’t see behaviour change)?
My mental model right now is that some small proportion of Western populations are amenable to meat reductions, with a sharp fall-off after this. Using these techniques on less aware populations might work, but we could assume that most high-income Western populations have already been exposed to these techniques and made up their minds. Averaged over a study, seeing a handful of participants change their minds in moderate ways would show a small effect size, or none at all, depending on the recruited population.
But I know very little about this area, so I assume the above is wrong. I just wanted to know in what ways, and what’s borne out by the data you have.
👋 Great questions!
Most studies in our dataset don’t report these kinds of fine-grained results, but in general my impression from the texts is that the typical study gets a lot of people to change their behavior a little. (In part because if they got people to go vegan I expect they would say that.)
Some studies deliberately exclude vegetarians as part of their recruitment process, but most just draw from the population at large. Somewhere between 2 and 5% of people identify as vegetarians (and many of them eat meat sometimes), so I don’t personally worry too much about this curtailing results. A few studies specifically recruit people who are motivated to change their diets and/or help animals; e.g., Cooney (2016) recruited people who wanted to help Mercy For Animals evaluate its materials.
I think this is a fair mental model, but one of the main open questions of our paper is how to get people to cut back on meat in general vs. just cutting back on a few categories, e.g. red and processed meat. So I guess my mental model is that most people have heard that raising cows is bad for the environment, and those who are cutting back are substituting partly to plant-based substitutes (reps from Impossible Foods noted at a recent meeting that most of their customers also eat meat) and partly to chicken and fish. E.g., the Mayo Clinic’s page on heart-healthy diets suggests “Lean meat, poultry and fish; low-fat or fat-free dairy products; and eggs are some of the best sources of protein...Fish is healthier than high-fat meats”, although it also says that “Eating plant protein instead of animal protein lowers the amounts of fat and cholesterol you take in.”
So I’d say we still have a lot of open questions...
Thanks for sharing the details of this research - it is very valuable towards arriving at an accurate assessment of various interventions.
One question with regard to the methodology of these RCTs is when and for how long did they record the consumption pattern of the participants following the intervention? Specifically, do we have any insights on short-term vs long-term impact of such interventions focused on behavioral change?
Also, I understand that you report the results as SMD. However, it is quite likely that there is a small minority in the treatment group in these interventions that probably contribute to most of the difference that is observed. Do we know anything about the percentage of individuals who are likely to make considerable changes to their dietary patterns based on these interventions?
Hi there,
Delays run the gamut. Jalil et al. (2023) measure three years’ worth of dining choices; Weingarten et al. a few weeks. Other studies measure what’s eaten at a dining hall during treatment and control but with no individual outcomes, and still others use structured recall tasks: 3/7/30 days after treatment, they ask people to report what they ate in a 24-hour period or over a given week. We did a bit of exploratory work on the relationship between length of delay and outcome size and didn’t find anything interesting.
I’m afraid we don’t know that overall. A few studies did moderator analysis where they found that people who scored high on some scale or personality factor tended to reduce their MAP consumption more, but no moderator stood out to us as a solid predictor here. Some studies found that women seem more amenable to messaging interventions, based on the results of Piester et al. 2020 and a few others, but some studies that exclusively targeted women found very little. I think gendered differences are interesting here but we didn’t find anything conclusive.
Executive summary: A meta-analysis of randomized controlled trials finds no well-validated approaches for reducing overall meat and animal product consumption, though reducing specifically red and processed meat consumption shows more promise.
Key points:
Analysis of 35 papers (41 studies, 112 interventions) shows very small overall effects (SMD = 0.07) for reducing meat consumption
Main intervention approaches tested: Persuasion, Choice Architecture, Psychology, and combinations—none showed strong effectiveness
Red/processed meat reduction specifically showed larger effects (SMD = 0.25) than general meat reduction, but may lead to increased chicken/fish consumption
More rigorous studies tend to show smaller effects than less rigorous ones, suggesting previous literature may overestimate intervention effectiveness
Many promising approaches (e.g., extended contact with farm animals, price manipulations, disgust activation) await rigorous evaluation
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.