Thanks Gavin.
I’d be interested in seeing data on the distribution of causes of retraction and how it’s changed over time. I know RetractionWatch likes to say that scientists tend to underestimate the proportion of retractions that are down to fraud. I do think some (many?) retractions are due to serious technical errors with no implication of deliberate fraud or misconduct. I suspect RetractionWatch has data on this.
I’m not claiming that it’s inevitably true that more retractions indicate better community epistemics, but I do think it’s a big part of the story in this case. A paper retraction requires someone to notice that the paper is worthy of retraction, bring that to the editors and, very often, put a lot of pressure on the editors (who are usually extremely reluctant) to retract the paper. That requires people to be on the lookout for things that might need to be retracted and willing to put in the time and effort to get them retracted.
In the past this was very rare, and only extremely flagrant fraud or misconduct (or unusually honest scientists retracting their own work) led to retractions. Now, partly as a side consequence of the replication crisis but also of more general (and incomplete) changes in norms, we have a lot more people who spend a lot of time actively searching papers for data manipulation and other retraction-worthy problems.
This is just the science version of the common claim that a recorded increase (or decrease) in the rate of a particular crime, or a particular mental disorder, or some such, is mainly due to changes in how closely we’re looking for it, rather than changes in the underlying rate.