As readers of this blog have no doubt sensed by now, the number of retractions per year seems to be on the rise. We feel that intuitively as we uncover more and more of them, but there are also data to suggest this is true.
As if to demonstrate that, we’ve been trying to find time to write this post for more than a week, ever since the author of the study we’ll discuss sent us his paper. Writing about all the retractions we learned about in the meantime, however, kept us too busy.
But given how sharp Retraction Watch readers are, you will be quick to note that more retractions doesn’t necessarily mean a higher rate. After all, there were about 527,000 papers published in 2000, and 852,000 published in 2009, so a constant rate of retractions would still mean a higher number. Here’s what Grant Steen, who published a paper on retractions and fraud last month in the Journal of Medical Ethics, found when he ran those numbers:
The rate of increase in retractions is greater than the rate of increase in publications, although the two are correlated.
That squares with what you’ll find by looking at Neil Saunders’ lovely plot of publications vs. retractions since 1977. There were approximately eight times as many retractions per 100,000 papers published in 2009 as there were in 2000. Important to note: Saunders’ data set shows the same number of papers published each year as Steen’s, but more retractions. When we find the reason for that discrepancy, we’ll post an update.
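To make the count-vs-rate distinction concrete, here’s a minimal sketch. Only the publication totals (527,000 and 852,000) come from the post; the retraction count of 40 is a hypothetical placeholder chosen purely to illustrate the arithmetic:

```python
def rate_per_100k(retractions: int, papers: int) -> float:
    """Retraction rate expressed per 100,000 published papers."""
    return retractions / papers * 100_000

# Publication totals quoted in the post.
papers_2000 = 527_000
papers_2009 = 852_000

# Hypothetical retraction count for 2000, used only to show that
# even a *constant* rate implies a growing *number* of retractions
# as the literature expands.
constant_rate = rate_per_100k(40, papers_2000)            # ~7.6 per 100k
implied_2009 = constant_rate * papers_2009 / 100_000      # ~64.7 retractions

print(round(constant_rate, 2))
print(round(implied_2009, 1))
```

So a constant rate alone would push the annual count from 40 to roughly 65; the striking part of Steen’s finding is that the rate itself rose on top of that.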
There’s no arguing that there is an increase, however. One possible reason for it is that journals are, as Steen puts it, “reaching farther back in time to retract published papers.” The average length of time between publication and retraction is growing. In fact, it took nine years to retract one particular paper in 2009.
Steen queries the data to see if research fraud is increasing, which he hints is the case:
It is particularly striking that the number of papers retracted for fraud increased more than sevenfold in the 6 years between 2004 and 2009.
The paper differentiates fraud — data fabrication or falsification — from error, which includes plagiarism, self-plagiarism, duplication, and scientific mistakes. Here’s how those reasons break down:
Error is more common than fraud; 73.5% of papers were retracted for error (or an undisclosed reason) whereas 26.6% of papers were retracted for fraud (table 1). The single most common reason for retraction was a scientific mistake, identified in 234 papers (31.5%). Fabrication, which includes data plagiarism, was more common than text plagiarism. Multiple reasons for retraction were cited for 67 papers (9.0%), but 134 papers (18.1%) were retracted for ambiguous reasons.
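As a quick sanity check on those figures: the quote doesn’t state the total sample size, but each count/percentage pair implies one, and the three pairs should agree. A back-of-the-envelope sketch:

```python
# Back out the implied total sample from each (count, share) pair
# quoted above; all three should point at roughly the same total.
pairs = [
    (234, 0.315),  # scientific mistakes: 234 papers, 31.5%
    (67,  0.090),  # multiple reasons: 67 papers, 9.0%
    (134, 0.181),  # ambiguous reasons: 134 papers, 18.1%
]

implied_totals = [count / share for count, share in pairs]
print([round(t) for t in implied_totals])  # → [743, 744, 740]
```

The pairs are mutually consistent, each implying a data set of roughly 740-odd retracted papers (the small spread is just rounding in the published percentages).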
As Steen notes:
Very recently, a downturn in mistakes occurred at the same time as an upturn in fraud (figure 2), which may mean that some fraud in the past was excused as a scientific mistake whereas fraud is now more likely to be admitted as such. We note that some excuses given for a mistake seem implausible and no excuse at all is given for other retractions; it seems likely that at least some retractions for mistakes or for unstated reasons actually represent fraudulent papers.
It’s also possible, writes Steen, that journals “are making a far more aggressive effort to self-police now than in the recent past.” Or it may be that “repeat offenders” such as Jan Hendrik Schön and Scott Reuben — who together make up a staggering 14% of the retractions from 2000 to 2009 — are skewing the data. (The story of another such repeat offender, Naoki Mori, seems to be unfolding as we speak. The count is up to ten.)
Still, Steen concludes that
Levels of misconduct appear to be higher than in the past.
But it’s somewhat difficult to know for sure, as Steen acknowledges, because
8% of retractions were for unstated reasons and up to 18% of retractions were for ambiguous reasons.
That lack of transparency resonated with us, as did another data point that Richard van Noorden, over at Nature‘s The Great Beyond blog, picked up on:
Nearly a third of retracted papers remain at journal websites and do not have their withdrawn status flagged…
Specifically, 31.8% of papers were neither watermarked nor marked as retracted at their abstract pages. We’ve called for various forms of transparency before, including press-releasing retractions and writing clearer notices. But marking retracted papers to begin with is also necessary, and we’re disappointed to see journals doing this badly.
One other finding jumped out at us:

American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries.
It’s important to note that the data set Steen used for both papers includes only English-language retractions.
In our post, we noted that Steen’s November piece found 13 retractions for journal office error. We thought that was a bit low, given that we had already found five in four months of Retraction Watch, here, here, and here. In the new paper, he found 27, which still seems low but is probably closer to recent figures.
Regardless of the actual number, we wholeheartedly agree that those “journal office error” retractions demonstrate that
…retraction is a very blunt instrument used for offences both gravely serious and trivial.
And we obviously also agree that
Given that retraction of a paper is the harshest possible punishment for a scientist, we must work to assure that it is applied fairly.