Do you think your study is sufficiently well powered to detect very small effect sizes on meat consumption? It seems plausible that effects on meat consumption would be very small in expectation, and many people would not reduce meat no matter what, so you may need to detect a small shift within a small subpopulation.
It looks like you have 80% power to detect an effect size of d = 0.24, which is actually substantially larger than the effects we usually find for animal interventions, even on more moveable things like attitudes, signing a petition, or agreeing that “factory farms aren’t great”. Their null result on meat consumption was not at all tightly bounded: −0.3 oz [−6.12 oz to +5.46 oz].
So I think the different results here seem possibly explained just by the fact that you could find effects on the moveable attitudes but were underpowered to detect differences in meat consumption.
I’d be curious to estimate what effect size we would be looking at if, say, 3–5% of people stopped eating meat (an optimistic estimate IMO).
This is perhaps further confounded by a large amount of probable noise: how good are people at estimating oz of meat eaten over different time periods, and is oz of meat eaten distributed in a way that corresponds to what a t-test is assessing?
Thanks for these Peter! (Note that Peter and I both work at Rethink Priorities.)
Do you think your study is sufficiently well powered to detect very small effect sizes on meat consumption?
No, and this is by design as you point out. We did try to recruit a population that may be more predisposed to change in Study 3 and looked at even more predisposed subgroups.
substantially larger than the effects we usually find for animal interventions even on more moveable things
I think we were informed by the results of our meta-analysis, which generally found effects around this size for meat reduction interventions.
Their null result on meat consumption was not at all tightly bounded: −0.3 oz [−6.12 oz to +5.46 oz]
Obviously, this is ultimately subjective, but this corresponds to plus or minus a burger per week, which seems reasonably precise to me. The standardized CI is [−0.17, 0.15], so bounded below a ‘small effect’. And, as David points out, less stringent CIs would look even better. But to be clear, I don’t have a substantive disagreement here—just a matter of interpretation.
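For what it’s worth, the raw-to-standardized conversion is just division by the pooled SD. A minimal sketch, where the SD is back-derived from the raw and standardized CIs quoted above (≈36 oz/week — my assumption, not a reported value):

```python
# Sketch: converting the raw 95% CI on weekly meat consumption (oz) to a
# standardized (Cohen's d) scale. The pooled SD is not stated above; it is
# back-derived here from the reported CIs (-6.12 oz <-> -0.17 implies
# SD ~ 36 oz/week), so treat it as an assumption.
assumed_sd = 36.0  # oz/week, back-derived, not a study value

raw_ci = (-6.12, 5.46)  # reported raw 95% CI, oz/week
std_ci = tuple(round(x / assumed_sd, 2) for x in raw_ci)
print(std_ci)  # matches the standardized CI quoted above, (-0.17, 0.15)
```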
For even more power, we could combine Studies 1 & 3 in a meta-analysis (doubling the sample size). Study 3 found a treatment effect of −1.72 oz/week (95% CI: [−8.84, 5.41]), so the meta-analytic estimate would probably be very small but still in the correct direction, with tighter bounds of course.
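As a rough illustration of what that pooling might look like, here is a minimal fixed-effect (inverse-variance) sketch using the two raw estimates above, with standard errors back-derived from the 95% CIs assuming normality. This is not the authors’ analysis, just a back-of-the-envelope version of it:

```python
# Fixed-effect (inverse-variance) meta-analysis of the two raw treatment
# effects quoted above. SEs are back-derived from the 95% CIs (width / 3.92),
# an approximation that assumes normality.
def pooled_estimate(effects_and_cis):
    weights, weighted = [], []
    for est, (lo, hi) in effects_and_cis:
        se = (hi - lo) / 3.92   # 95% CI width -> standard error
        w = 1.0 / se ** 2       # inverse-variance weight
        weights.append(w)
        weighted.append(w * est)
    est = sum(weighted) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)

studies = [
    (-0.30, (-6.12, 5.46)),  # Study 1 treatment effect, oz/week
    (-1.72, (-8.84, 5.41)),  # Study 3 treatment effect, oz/week
]
est, ci = pooled_estimate(studies)
# pooled estimate comes out around -0.9 oz/week, with a tighter CI than
# either study alone -- small, negative, and still spanning zero
print(est, ci)
```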
explained just by the fact that you could find effects on the moveable attitudes
Just to clarify, we measured attitudes in all 3 studies. We found an effect on intentions in Study 2 where there wasn’t blinding and follow-up was immediate. Studies 3 & 4 (likely) didn’t find effects on attitudes.
I’d be curious to estimate what effect size we would be looking at if, say, 3–5% of people stopped eating meat (an optimistic estimate IMO).
Just roughly taking David Reinstein’s number of 80 oz per week (we could use our control group’s mean for a better estimate) and assuming no other changes, 1% abstention would give a 0.8 oz effect and 5% would give 4 oz. So definitely underpowered for the low end, but potentially closer to detectable at the high end. (And keep in mind this is at 12-day follow-up; we should expect that 1% to dwindle further at longer follow-up. With figures this low I would be pessimistic about the overall impact. But keep in mind that other successful meat reduction interventions don’t seem to have worked mostly through a few individuals totally abstaining!)
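A quick sketch of that arithmetic (the 80 oz/week baseline is David Reinstein’s figure; the ~36 oz/week SD used to standardize is my assumption, back-derived from the reported CIs, not a study value):

```python
# Rough arithmetic for the "X% of people stop eating meat entirely" scenario.
# Baseline consumption and SD are assumptions, not study values.
baseline_oz = 80.0  # oz/week, David Reinstein's figure
assumed_sd = 36.0   # oz/week, back-derived from the reported CIs
mde_d = 0.24        # effect size detectable at 80% power, per the comment

for abstain_rate in (0.01, 0.03, 0.05):
    raw = abstain_rate * baseline_oz  # mean shift in oz/week
    d = raw / assumed_sd              # standardized effect size
    print(f"{abstain_rate:.0%} abstention -> {raw:.1f} oz/week, "
          f"d = {d:.3f}, detectable at 80% power: {d >= mde_d}")
```

Under these assumptions even the optimistic 5% scenario only gives d ≈ 0.11, under half the minimum detectable effect, which is consistent with “closer, but still underpowered.”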
corresponds to what a t-test is assessing
I wouldn’t expect issues in testing the difference in means given our sample sizes. But otherwise I’m not sure what you’re suggesting here.