Thanks for your comment. I’ve been aware that this perspective is prevalent, but I haven’t actually seen examples where replication of the same study has been attempted; I’ve only seen replications that introduce other major factors one would expect to influence the results. The link you sent criticises priming in a broad way, pointing to heuristics like the effect being too large to be believable, which seems a pretty subjective judgment.
The link specifically criticises Danny Kahneman for using small priming studies to make large generalisations, and Kahneman makes a fairly good rebuttal in his response. The one thing he concedes is the small size of the studies he relied on, which is not the case for the priming research cited in this post, which involved a series of studies with several hundred participants each.
I appreciate that I might be wrong to have confidence in these studies, given the widely held opinion that priming studies are unreliable, but I have yet to see studies that attempted, and failed, to replicate these specific ones.
The link I sent also discusses an article that meta-analyzed replications of studies using scarcity priming. The meta-analysis includes a failed replication of a key study from the Mani et al (2013) article you discuss in your post.
The Mani article itself has the hallmarks of questionable research practices. It’s true that each experiment has about 100 participants, but split across 4 conditions, that is the bare minimum by the standard of the time (n = 20-30 per group). The main results also have p-values between .01 and .05, which is an indicator of p-hacking: if the claimed effects were real, studies of this size would mostly produce p-values well below .01, so a cluster just under .05 is suspicious. And yes, the abnormally large effect sizes are relevant. An effect as large as the one claimed by Mani et al (d = .88-.94) should be glaringly obvious. That’s close to the effect size for the association between height and weight (r = .44 → d = .98).
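For anyone who wants to check that conversion: this is just the standard formula for turning a correlation r into Cohen’s d (it assumes two equal-sized groups, which is a simplification here):

$$d = \frac{2r}{\sqrt{1 - r^2}} = \frac{2 \times 0.44}{\sqrt{1 - 0.44^2}} \approx \frac{0.88}{0.898} \approx 0.98$$

To put d ≈ .9 in concrete terms: it implies the average participant in the poverty-primed condition scores below roughly 80% of the control condition, which is exactly the kind of gap you would expect to notice without statistics.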
And more generally, at this point the default view should be that priming studies are not credible. One shouldn’t wait for a direct failed replication of any particular study; there’s enough indirect evidence that the whole approach is beset by bad practices.
Thanks for providing that link, Nathan; that does seem to significantly undermine the Mani et al study. While I agree with you that priming studies should at this point be seen as not credible by default, it helped a lot, in terms of convincing me personally, to see a study specifically designed to replicate Mani et al.
Do you find any of the evidence for the conclusions in the post credible? Or are you aware of more credible studies that would support the argument the post makes? You seem to know your stuff, so I don’t want to waste your time, but I would value your input on whether the post’s position is tenable at all given the available evidence.
I don’t want to be posting nonsense, so depending on the evidence available I’ll either rewrite it with more reliable evidence or take it down.
I’m familiar with psychology. But the causes and consequences of poverty are beyond my expertise.
In general, I think the case for alleviating poverty doesn’t need to depend on what it does to people’s cognitive abilities. Alleviating poverty is good because poverty sucks. People in poverty have worse medical care, are less safe, have less access to quality food, etc. If someone isn’t moved by these things, then saying it also lowers IQ is kind of missing the point.
Another theme in your post is that those in poverty aren’t to blame, since the poverty caused their bad decisions. I think a stronger case can be made by pointing to the fact that people don’t choose where they’re born. (And this fact doesn’t depend on any dubious psychology studies.) For someone in Malawi making $5/day, it will be hard to think about saving for retirement.