Super interesting, thanks for sharing! I have some possibly dumb questions about the finding that these programs don’t change behaviors:
How confident are you that you’re not seeing changes in behavior because of a lack of effect, vs small sample size / people lying in the data / other issues?
When you say these studies found no effect on “behaviors”, how were these measured generally speaking? (Self report, or something else?)
Great questions!
Our summary doesn’t cover this much, but the paper discusses measurement error a lot, because it’s a serious problem. Essentially everything in this dataset is self-reported.
The few exceptions are typically either involvement outcomes or somewhat bizarre lab measures. For instance, in one study, male subjects watch a video on sexual harassment and then teach a female confederate how to golf, while a researcher watches through a one-way mirror and codes how often the subject touches the confederate and how sexually harassing those touches are.
Perpetration and victimization were typically measured with the Sexual Experiences Survey, and ideas-based outcomes with the Illinois Rape Myth Acceptance Scale.
It’s very plausible that people are misreporting what they do and what happens to them, and they might be doing so in ways that relate to treatment status. In other words, violence prevention programs might change how subjects understand and describe their lives. There is a dramatic illustration of this in Sexual Citizens where a young man realizes, in a research setting, that he once committed an assault. If this happens en masse, then treated subjects would be systematically more likely to report violence, and any true reductions would be harder to detect because of counteracting differences in reporting.
On the other hand, the median study in this literature lectures subjects on why rape myths are bad, and then asks them how much they endorse rape myths. There’s reason to think that people aren’t being entirely honest when they reply.
The magnitude of these biases (downward bias for behavioral measures, upward bias for ideas-based measures) is, unfortunately, very hard to quantify in this literature because nothing (to my knowledge) compares self-reported outcomes against objective measures. But speaking for myself, I think the ideas-based outcomes are pretty inflated, and the behavioral outcomes are probably noisy rather than biased, so I believe our overall null on behavioral outcomes.
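The distinction between "noisy" and "biased" self-reports matters for interpreting a null. A toy simulation can sketch it (all numbers here are invented for illustration, not drawn from the actual dataset): classical random noise in reported outcomes leaves the estimated treatment effect centered on the truth, just with wider error bars, whereas misreporting that is correlated with treatment status (e.g. treated subjects becoming more likely to recognize and report violence) can mask a real reduction entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large sample so sampling noise is negligible

treat = rng.integers(0, 2, n)                            # random assignment
true_outcome = 1.0 - 0.2 * treat + rng.normal(0, 1, n)   # true effect: -0.2

def effect(y):
    # Difference in means; under random assignment this equals the
    # OLS treatment coefficient.
    return y[treat == 1].mean() - y[treat == 0].mean()

# 1) Classical noise in self-reports: the estimate stays centered on -0.2;
#    in realistic (small) samples it would just be noisier.
noisy = true_outcome + rng.normal(0, 2, n)

# 2) Treatment-correlated misreporting: treated subjects report 0.2 more
#    violence than they otherwise would, exactly offsetting the true
#    reduction and producing a spurious null.
misreported = true_outcome + 0.2 * treat

print(round(effect(true_outcome), 2))
print(round(effect(noisy), 2))
print(round(effect(misreported), 2))
```

Under these made-up parameters, the first two estimates land near the true -0.2 while the third lands near zero, which is the scenario where a "null on behaviors" would be misleading.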
But yes, we need some serious work on the measurement front, IMO.
By the way, I ended up buying a copy of Sexual Citizens because of this comment. I found it super interesting (if sad :( ), thanks for the rec!
Glad you liked it! I also got a lot out of Jia Tolentino’s “We Come From Old Virginia” in her book Trick Mirror.
Nice. Thanks for an incredibly prompt and thorough response!
That makes sense to me. It is kind of interesting to me that the zeitgeist programs you mention are so different in terms of intervention size (if I’m understanding correctly, Safe Dates involved a 10 session curriculum, but the Men’s Program involved a single session with a short video?), but neither seem effective at behavior reduction!
Wow… oof…
Thank you for this summary + for conducting this research!
My pleasure and thank you! (I was able to mostly cut and paste my response from something else I was working on, FWIW)