I'm the principal investigator of the Humane and Sustainable Food Lab at Stanford University. You can give me feedback anonymously.
MMathur
Seth and Benny, many thanks for this extremely interesting and thought-provoking piece. This is a major contribution to the field. It is especially helpful to have the quantitative meta-analyses and meta-regressions; the typically low within-study power in this literature can obscure the picture in some other reviews that just count significant studies. It's also heartening to see how far this literature has come in the past few years in terms of measuring objective outcomes.
A few thoughts and questions:
1.) The meta-regression on self-reported vs. objectively measured outcomes is very interesting and, as you say, a little counter-intuitive. In a previous set of RCTs (Mathur 2021 in the forest plot), we found suggestive evidence of strong social desirability bias in the context of an online-administered documentary intervention. There, we only considered self-reported outcomes, but compared two types of outcomes: (1) stated intentions measured immediately (high potential for social desirability bias); vs. (2) reported consumption measured after 2 weeks (lower potential for social desirability bias). In light of your results, it could be that ours primarily reflected effects decaying over time, or genuine differences between intentions and behavior, more than pure social desirability bias. Methodologically, I think your findings point to the importance of head-to-head comparisons of self-reported vs. objective outcomes in studies that are capable of measuring both. If these findings continue to suggest little difference between these modes of outcome measurement, that would be great news for interpreting the existing literature using self-report measures and for doing future studies on the cheap, using self-report.
2.) Was there a systematic database search in addition to the thorough snowballing and manual searches? I kind of doubt that you would have found many additional studies this way, but this seems likely to come up in peer review if the paper is described as a systematic review.
3.) Very minor point: I think the argument about Glass delta = 0.3 corresponding to a 10% reduction in MAP consumption is not quite right. For a binary treatment X and continuous outcome Y, the relationship between Cohen's d (not quite the same as Glass, as you say) and Pearson's r is given by d = 2r / sqrt(1 - r^2), such that d = 0.3 corresponds to r^2 (proportion of variance explained) = 0.02. Even so, the 2% of variation explained does not necessarily mean a 2% reduction in Y itself. Since Glass standardizes by only the control-group SD, the same relationship will hold under equal SDs between the treatment and control groups, and otherwise I do not think there will be a 1-1 relationship between delta and r.
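To make the arithmetic concrete, a quick Python check of the conversion above (inverting d = 2r / sqrt(1 - r^2), which holds for equal group sizes) shows that d = 0.3 corresponds to roughly 2% of variance explained:

```python
# Checking the arithmetic: d = 2r / sqrt(1 - r^2)  <=>  r = d / sqrt(d^2 + 4)
import math

d = 0.3
r = d / math.sqrt(d**2 + 4)   # invert the d-to-r conversion
print(round(r**2, 3))         # proportion of variance explained: 0.022
```

So about 2% of the variance in Y, which, as noted, is not the same quantity as a 2% (let alone 10%) reduction in Y itself.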
Again, congratulations on this very well-conducted analysis, and best of luck with the journal submissions. I am very glad you are pursuing that.
Wow, what an exciting opportunity and academic lab! Congratulations and best of luck with your work.
Agreed! We partner with the Menus of Change University Research Collaborative (a 74-university consortium), so we plan to scale up our studies at Stanford in a multisite replication design.
Thanks, Fai!
Congratulations!
Great podcast! I enjoyed your episode on Swiss legislative efforts. Given how great your title and logo are, consider selling swag? ;)
Very cool! Where can we read more about your work with retail interventions?
We'll post big updates here on the EA Forum, and news and publications will go on our website. Soon I hope to have more personnel bandwidth for social media, but this is a wish-list item for now.
Thanks, Emre! I'm super excited, too.
Introducing Stanford's new Humane & Sustainable Food Lab
Excellent post! Regarding fellowships and scholarships within academia, I would also suggest offering pre-PhD fellowships similar to NSF, NDSEG, or Hertz, which support a student's full grad school tuition. The stipulation would be that the student's dissertation would need to be related to animal welfare topics, which is similar to how NIH training grants in the USA are already structured. A similar model could work for postdoctoral fellowships.
This would have the following benefits:
Winning a competitive fellowship pre-PhD looks great on a student's CV and can help them get into grad school and find an excellent advisor.
In many academic departments in the US, it can be hard for even well-funded faculty members to take on students to work on animal welfare topics, because their existing funding is earmarked for other topics.
Related to the above, most funding for animal welfare research is unfortunately tied to specific projects, making it hard for faculty to find funding for training students on these topics.
Regarding encouraging faculty to work on animal welfare topics, establishing less restricted funding sources (i.e., earmarked for animal welfare research, but not tied to a specific project) for faculty with strong track records of working in this area would improve substantially on the current model and incentives.
Hi Seth,
Thanks so much for the thoughtful and interesting response, and I'm honored to hear that the 2021 papers helped lead into this. Cumulative science at work!
I fully agree. Our study was at best comparing a measure with presumably less social desirability bias to one with presumably more, and lacked any gold-standard benchmark. In any case, it was also only one particular intervention and setting. I think your proposed exercise of coding attitude and intention measures for each study would be very valuable. A while back, we had tossed around some similar ideas in my lab. I'd be happy to chat offline about how we could try to help support you in this project, if that would be helpful.
Makes sense.
For binary outcomes, yes, I think your analog to delta is reasonable. Often these proportion-involving estimates are not normal across studies, but that's easy enough to deal with using robust meta-analysis or log transforms, etc. I guess you approximated the variance of this estimate with the delta method or similar, which makes sense. For continuous outcomes, this actually was the case I was referring to (a binary treatment X and continuous outcome Y), since that is the setting where the d-to-r conversion I cited holds. Below is an MWE in R, and please do let me know if I've misinterpreted what you were proposing. I hope not to give the impression of harping on a very minor point; again, I found your analysis very thoughtful and rigorous throughout, and I'm just indulging a personal interest in effect-size conversions.
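(The R MWE referenced above is not reproduced here. As a rough stand-in for readers, a Python sketch of the same check might look like the following; it assumes a binary treatment with equal group sizes and equal SDs, the case where Glass's delta coincides with Cohen's d in expectation and the d-to-r conversion holds.)

```python
# Illustrative simulation (not the original R MWE): with equal group sizes
# and equal SDs, Glass's delta recovers the true effect and satisfies
# d = 2r / sqrt(1 - r^2), where r is the point-biserial correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                            # per group
x = np.repeat([0, 1], n)               # binary treatment indicator
y = 0.3 * x + rng.normal(size=2 * n)   # true standardized effect = 0.3

# Glass's delta: standardize the mean difference by the control-group SD only
delta = (y[x == 1].mean() - y[x == 0].mean()) / y[x == 0].std(ddof=1)

# Point-biserial correlation between treatment and outcome
r = np.corrcoef(x, y)[0, 1]

# Conversion for equal group sizes
d_from_r = 2 * r / np.sqrt(1 - r**2)

print(delta, r**2, d_from_r)   # delta and d_from_r agree; r^2 is about 0.02
```

Under unequal SDs between groups, delta and the pooled-SD d diverge, and the one-to-one mapping to r breaks down, as discussed above.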
Thanks again, Seth!
Maya