Why did you choose 1986 as a starting point? Attitudes about sexual violence seem to have changed a lot since then, so I wonder if the potential staleness of the older studies outweighs the value of having more studies for the analysis. [Finding no meaningful differences based on study age would render this question moot.]
👋 our search extends to 1985, but the first paper was from 1986. We started our search by replicating and extending a previous review, which says “The start date of 1985 was chosen to capture the 25-year period prior to the initial intended end date of 2010. The review was later extended through May 2012 to capture the most recent evaluation studies at that time.” I’m not too worried about missing stuff from before that, though, because the first legit evaluation we could find was from 1986. There’s actually a side story to tell here about how the people doing this work back then were not getting supported by their fields or their departments, but persisted anyway.
But I think your concern is, why include studies from that far back at all vs. just the “modern era” however we define that (post MeToo? post Dear Colleague Letter?). That’s a fair question, but your intuition about mootness is right, there’s essentially zero relationship between effect size and time.
Here’s a figure that plots average effect size over time from our 4-exploratory-analyses.html script:
And the overall slope is really tiny:
```r
dat |> sum_lm(d, year)
##              Estimate Std. Error  t value Pr(>|t|)
## (Intercept) -5.67062    4.85829 -1.16720  0.24370
## year         0.00297    0.00242  1.22631  0.22067
```
Yes, that was the question, and this is a helpful response.
I have no opinion on what the right cutoff would be if the slope were meaningfully non-zero, as there is no clear way to define the “modern” era. Perhaps I would have sliced the data with various cutoffs (e.g., 1985, 1990, 1995, …) and given partial credence to each resulting analysis?
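For what it’s worth, the cutoff-slicing idea is simple to sketch. Here’s a hypothetical illustration in Python with made-up (year, d) values and plain unweighted means rather than a proper meta-analytic model, just to show the shape of the analysis; the real data and models are in the open-sourced materials:

```python
# Restrict the dataset to studies published at or after each candidate
# "modern era" start year and see whether the average effect size shifts.
# The study values below are entirely hypothetical.
from statistics import mean

# (publication year, effect size d) pairs -- made up for illustration
studies = [
    (1986, 0.10), (1991, 0.25), (1996, 0.15), (2001, 0.20),
    (2006, 0.30), (2011, 0.18), (2016, 0.22), (2021, 0.28),
]

for cutoff in (1985, 1990, 1995, 2000, 2005, 2010, 2015):
    subset = [d for year, d in studies if year >= cutoff]
    print(f"cutoff {cutoff}: k = {len(subset)}, mean d = {mean(subset):.3f}")
```

If the slope were truly near zero, the mean d for each slice would wobble only with sampling noise, which is one way to spread partial credence across cutoffs instead of committing to a single start year.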
Yeah, I was curious about this too, and we try to get at something theoretically similar by pulling out all the “zeitgeist” studies in an attempt to define the dominant approaches of a given era. Like, in the mid-2010s, everyone was thinking about bystander stuff. But if memory serves, once I saw the above graph, I basically just dropped this whole line of inquiry because we were seeing essentially no relationship between effect size and publication date. Having said that, behavioral outcomes get more common over time (see graph in original post), and that is probably also having a depressing effect on the relationship. There could be some interesting further analyses here; we try to facilitate them by open-sourcing our materials.
By the way, apologies for saying above that your “intuition is moot,” I meant “your intuition about mootness is correct” 😃 (I just changed it)