Thanks for writing this up.
The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter (http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/) and social desirability bias.
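To see why the filter inflates estimates, here is a toy simulation (illustrative only; the effect size, noise level, and significance cutoff are made-up numbers, not taken from the study): among noisy studies of a small true effect, the ones that happen to clear the significance threshold report a much larger average effect than the truth.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # hypothetical small true effect, arbitrary units
NOISE_SD = 1.0      # standard error of each study's estimate
N_STUDIES = 100_000
# Rough normal-approximation significance rule: the estimate must
# exceed 1.96 standard errors in magnitude (two-sided test vs. zero).
THRESHOLD = 1.96 * NOISE_SD

# Each simulated study reports the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]
# The "significance filter": only studies clearing the threshold get reported.
significant = [e for e in estimates if abs(e) > THRESHOLD]

print(f"true effect:                 {TRUE_EFFECT}")
print(f"mean of all estimates:       {statistics.mean(estimates):.3f}")
print(f"mean of significant results: {statistics.mean(significant):.3f}")
```

With these numbers the mean of the significant results comes out several times larger than the true effect, which is the sense in which published significant estimates are biased upward.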
For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like “This study shows that $1 donations to newspaper ads save 3.1 chickens on average”.
I continue to question whether these studies are worthwhile. Even if the study had found no significant differences between the treatments and the control, it’s not as if we would stop spreading pro-animal messages. And it was not powered to detect the treatment differences you’re interested in, so it seems unlikely to have been action-guiding from the start. And of course there’s no way to know how much of the effect is explained by social desirability bias.