I'm the principal investigator of the Humane and Sustainable Food Lab at Stanford University. You can give me feedback anonymously.
MMathur 🔸
Thanks, Vasco!
On the mission statement: Ultimately, yes, our goal is to end farmed animal suffering, and while we are in favor of interventions that improve animal welfare (and I personally am a THL donor), our own research focus is on displacing demand that necessitates CAFOs in the first place. I enjoyed reading your net-positive lives post earlier, and it was a significant update for me. I do, however, think that estimating the threshold between a net-positive and net-negative life seems really hard even for humans (and maybe even for oneself!), let alone other species, so I would be very wary of entrenching CAFOs on the assumption that the lives are net-positive. I am also not personally sold on total utilitarianism, so I am not sure that I want a repugnant-conclusion situation with CAFOs, even if we were certain the lives were minimally net-positive.
On the animal-welfare modeling: What you suggest would be the gold-standard approach, but we need to strike a balance between complexity and comprehensibility/credibility for implementation at Stanford and other universities. We are going to start with much simpler arguments about the total number of animals affected. This is already a big jump, since currently, the university occasionally considers welfare standards (e.g., cage-free) but never the number of animals (hence, much more chicken is now served than in the past to reduce GHG emissions). There are also limitations on the granularity of food procurement data the suppliers give us (e.g., fish species are often unclear). We do plan to do our own sensitivity analyses using more rigorous weights.
Have you considered having an open, competitive process to submit talk abstracts, similarly to academic conferences?
Congratulations, Bob!
Thanks for sharing this candid account of your journey, Seth! How fortunate for the HSF Lab that your path landed you with us. Even though the trip was roundabout, it gave you a breadth and depth of skills, and worldliness, that doesn't come rolled up inside a PhD diploma.
Start by doing work you're invested in, that you're proud of, and people may notice.
This is fantastic advice.
Yes, good point
Agreed, though if their model isn't correctly specified to identify the causal effect on meat (which I agree is tough here), then presumably the effects on plant-based sales would also be suspect.
Good point. This does provide a bit more evidence that it's not just due to other January shocks that were happening regularly before Veganuary started.
There's this in the abstract:
"Average weekly unit sales of plant-based products increased significantly (57 %) during the intervention period (incidence rate ratio (IRR) 1·52 (95 % CI 1·51, 1·55)). Plant-based product sales decreased post-intervention but remained 15 % higher than pre-intervention (IRR 1·13 (95 % CI 1·12, 1·14)). There was no significant change in meat sales according to time period. The increase in plant-based product sales was greatest at superstores (58 %), especially those located in below average affluence areas (64 %)."
I think this is pretty bad news, actually.
Here we have an intervention that apparently increases sales of plant-based products by 57% and yet does not decrease sales of meat products at all. Unfortunately, this corroborates a growing body of evidence suggesting that plant-based products often fail to displace meat products, even when they gain their own (orthogonal) market share.
As an aside, even with the effects on plant-based products, it's also hard to attribute causation to Veganuary specifically, since it always occurs during a month that we know is associated with unusual recurring "shocks" (e.g., the end of holiday dinner parties; the beginning of New Year's resolutions).
Thanks for putting together this excellent list, Martin and Emma. This is just in time for my lab's upcoming brainstorming session on future research priorities.
No worries. Effect-size conversions are very confusing. Thanks for doing this important project and for the exchange!
Hi Seth,
Thanks so much for the thoughtful and interesting response, and I'm honored to hear that the 2021 papers helped lead into this. Cumulative science at work!
I fully agree. Our study was at best comparing a measure with presumably less social desirability bias to one with presumably more, and lacked any gold-standard benchmark. In any case, it was also only one particular intervention and setting. I think your proposed exercise of coding attitude and intention measures for each study would be very valuable. A while back, we had tossed around some similar ideas in my lab. I'd be happy to chat offline about how we could try to help support you in this project, if that would be helpful.
Makes sense.
For binary outcomes, yes, I think your analog to delta is reasonable. Often these proportion-involving estimates are not normal across studies, but that's easy enough to deal with using robust meta-analysis or log-transforms, etc. I guess you approximated the variance of this estimate with the delta method or similar, which makes sense. For continuous outcomes, this actually was the case I was referring to (a binary treatment X and continuous outcome Y), since that is the setting where the d-to-r conversion I cited holds. Below is an MWE in R, and please do let me know if I've misinterpreted what you were proposing. I hope not to give the impression of harping on a very minor point; again, I found your analysis very thoughtful and rigorous throughout, and I'm just indulging a personal interest in effect-size conversions.
Thanks again, Seth!
Maya
library(dplyr)
# for reproducibility
set.seed(451)
# sample size
N = 10^5
# population parameter
delta = .3
# assume same SD conditional on X=0 and X=1 so that Glass = Cohen
sd.within = .5
# E[Y | X=0] and E[Y | X=1]
m0 = .5
m1 = m0 + delta*sd.within
# generate data
d = data.frame( X = c( rep(0, N/2), rep(1, N/2) ) )
d$Y = NA_real_
d$Y[ d$X == 0 ] = rnorm( n = sum(d$X == 0), mean = m0, sd = sd.within )
d$Y[ d$X == 1 ] = rnorm( n = sum(d$X == 1), mean = m1, sd = sd.within )
# sanity check
d %>% group_by(X) %>%
  summarise( mean(Y) )
hist(d$Y)
# check the conversion
r = cor(d$Y, d$X)
(2*r) / sqrt(1 - r^2); delta
# proportion of variance explained
r^2
# percent reduction in E[Y] itself
# note this is not 1-1 with delta; change m0 and this will change
100*( abs(m1 - m0) / m0 )
Seth and Benny, many thanks for this extremely interesting and thought-provoking piece. This is a major contribution to the field. It is especially helpful to have the quantitative meta-analyses and meta-regressions; the typically low within-study power in this literature can obscure the picture in some other reviews that just count significant studies. It's also heartening to see how far this literature has come in the past few years in terms of measuring objective outcomes.
A few thoughts and questions:
1.) The meta-regression on self-reported vs. objectively measured outcomes is very interesting and, as you say, a little counter-intuitive. In a previous set of RCTs (Mathur 2021 in the forest plot), we found suggestive evidence of strong social desirability bias in the context of an online-administered documentary intervention. There, we only considered self-reported outcomes, but compared two types of outcomes: (1) stated intentions measured immediately (high potential for social desirability bias); vs. (2) reported consumption measured after 2 weeks (lower potential for social desirability bias). In light of your results, it could be that ours primarily reflected effects decaying over time, or genuine differences between intentions and behavior, more than pure social desirability bias. Methodologically, I think your findings point to the importance of head-to-head comparisons of self-reported vs. objective outcomes in studies that are capable of measuring both. If these findings continue to suggest little difference between these modes of outcome measurement, that would be great news for interpreting the existing literature using self-report measures and for doing future studies on the cheap, using self-report.
2.) Was there a systematic database search in addition to the thorough snowballing and manual searches? I kind of doubt that you would have found many additional studies this way, but this seems likely to come up in peer review if the paper is described as a systematic review.
3.) Very minor point: I think the argument about Glass delta = 0.3 corresponding to a 10% reduction in MAP consumption is not quite right. For a binary treatment X and continuous outcome Y, the relationship between Cohen's d (not quite the same as Glass, as you say) and Pearson's r is given by d = 2r / sqrt(1-r^2), such that d = 0.3 corresponds to r^2 (proportion of variance explained) = 0.02. Even so, the 2% of variation explained does not necessarily mean a 2% reduction in Y itself. Since Glass standardizes by only the control group SD, the same relationship will hold under equal SDs between the treatment and control group, and otherwise I do not think there will be a 1-1 relationship between delta and r.
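For concreteness, here is a quick numeric check of that conversion (a sketch in R at the population level, assuming equal group sizes and equal within-group SDs):

```r
# invert d = 2r / sqrt(1 - r^2) to get r = d / sqrt(d^2 + 4)
d = 0.3
r = d / sqrt(d^2 + 4)
# proportion of variance explained: about 0.022, i.e., roughly 2%
r^2
# recovers d = 0.3
2 * r / sqrt(1 - r^2)
```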
Again, congratulations on this very well-conducted analysis, and best of luck with the journal submissions. I am very glad you are pursuing that.
I want to debate patient philanthropy and the assumptions under which it makes sense!
Wow, what an exciting opportunity and academic lab! Congratulations and best of luck with your work.
Agreed! We partner with the Menus of Change University Research Collaborative (a 74-university consortium), so we plan to scale up our studies at Stanford in a multisite replication design.
Thanks, Fai!
Congratulations!
Great podcast! I enjoyed your episode on Swiss legislative efforts. Given how great your title and logo are, consider selling swag? ;)
I agree, although in many food-service settings other than grocery stores (e.g., restaurants and dining halls), there often are not multiple welfare options within a given animal product. Although overall societal demand could ultimately change the welfare options in those settings, those effects are longer-term and hard to estimate. However, we do have an upcoming project using grocery store data (which I forgot to write about...), and we'll explore whether we have the right data to look at welfare options. Thanks for suggesting!
Yes, but interpreting the latter part (decreasing total welfare by reducing the population) as bad hinges on total utilitarianism, and would otherwise be interpreted as neutral. If one puts low credence on total utilitarianism, then the possible downside of entrenching CAFOs with net-negative lives (which, as you allude, is bad on pretty much any ethical view) could ultimately outweigh the possibility that having lots of minimally net-positive lives is good (which is true primarily on total utilitarianism). I acknowledge that there is a lot of meta-ethical uncertainty here. The unfalsifiability of it all is so annoying.