I really appreciate you looking into this topic. I think you want much, much bigger error bars on these, however. Interventions like this are known to have massive selection effects and difficulty with determining causality, and giving point estimates sweeps under the rug the thing I'm most interested in when asking whether these interventions work.
For example, ACE had a similar problem when it was starting out. For one of its charities, it relied on survey data to look for an effect and gave effectiveness estimates based on that, but the whole interesting question was basically "whether we should believe at all the type of conclusion they drew from the surveys". In the end, of course, the answer was no.
I didn't read the whole post, but the reasoning in the summary and early sections seemed centered on point estimates and taking data at face value. The type of analysis that would convince me to change my actions here would be a reliability analysis, one that seeks out any place within this domain with extremely clear support for a real effect. By default this basically doesn't exist for social interventions, in my experience, so the conclusions are unfortunately driven more by the vagaries of the input data than by the underlying reality.
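To make that concrete, here is a minimal sketch of the kind of uncertainty propagation I have in mind, with entirely made-up numbers (the effect-size prior, the prevalence, and the cost figure are placeholders, not anything taken from the post): instead of multiplying point estimates together, you push whole distributions through the same calculation and look at the percentiles.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs, for illustration only. Wide distributions stand in
# for the "bigger error bars" on each quantity.
relative_reduction = rng.beta(2, 8, n)                 # effect size: centred near 0.2 but wide
baseline_prevalence = rng.normal(0.30, 0.05, n)        # survey-estimated prevalence, with noise
cost_per_person = rng.lognormal(np.log(15), 0.5, n)    # USD per person reached, highly uncertain

# Cases averted per dollar = (prevalence * relative reduction) / cost
cases_averted_per_dollar = baseline_prevalence * relative_reduction / cost_per_person

for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(cases_averted_per_dollar, q):.5f} cases averted per $")
```

The point being that the 5th–95th percentile range is often wide enough to change the decision, which a single point estimate hides.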
Hi cflexman,
I think these are valuable comments, and you are absolutely correct. Limited time meant that I (1) was very short-hand in how I aggregated effect sizes and results from academic studies, and (2) used simplistic point estimates. Ideally, I would have done a meta-analysis-style method with a risk-of-bias assessment and so on. My main limitation is a frustrating one: time.
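For what it's worth, here is a minimal sketch of the kind of random-effects pooling (DerSimonian-Laird) I would have liked to do, using made-up effect sizes and variances rather than the actual studies from the post:

```python
import numpy as np

# Hypothetical study results, for illustration only: log risk ratios for
# past-year violence and their within-study variances. Made-up numbers,
# not the studies reviewed in the post.
y = np.array([-0.35, -0.10, -0.55, 0.05])   # log risk ratio per study
v = np.array([0.04, 0.02, 0.09, 0.03])      # within-study variance per study

# DerSimonian-Laird random-effects pooling.
w = 1 / v
y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)              # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)      # between-study variance

w_re = 1 / (v + tau2)                        # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

print(f"pooled log RR: {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
print(f"tau^2 (between-study heterogeneity): {tau2:.3f}")
```

The tau^2 term captures between-study heterogeneity and widens the confidence interval accordingly, which is exactly the step my simple point-estimate aggregation skipped.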
I did try to caveat this by making all my shorthands and uncertainties explicit, but I don't think I quite succeeded.
One area I would push back on is the comments regarding social interventions and survey data: the method in most or all of these studies is a survey asking women whether they have experienced violence in the last year. To me, this seems pretty robust, and as long as the surveys are conducted to a high standard with a low risk of bias (most of the studies have dedicated sections explaining how they tried to achieve this, with varying degrees of success), I think this is credible and internally valid data.