Retraction Watch readers will no doubt be familiar with the fact that retraction rates are rising, but one of the unanswered questions has been whether that increase is due to more misconduct, greater awareness, or some combination of the two.
In a new paper in PLOS Medicine, Daniele Fanelli, who has studied misconduct and related issues, tries to sift through the evidence. Noting that the number of corrections has stayed constant since 1980, Fanelli writes that:
If the recent growth of retractions were being driven by an increasing propensity of researchers to “cut corners,” we would expect minor infractions, and therefore the frequency of published errata, to increase just as fast as, if not faster than, that of retractions.
Fanelli also finds that the proportion of journals retracting papers has increased, while the number of retractions in each of those journals has remained the same. Taken with the fact that the number of cases of misconduct found by the Office of Research Integrity (ORI) has not decreased, Fanelli concludes:
Data from the [Web of Science] database and the ORI offer strong evidence that researchers and journal editors have become more aware of and more proactive about scientific misconduct, and provide no evidence that recorded cases of fraud are increasing, at least amongst US federally funded research. The recent rise in retractions, therefore, is most plausibly the effect of growing scientific integrity, rather than growing scientific misconduct.
Hence the “(mostly) a good sign” in the title.
Fanelli singles out retraction notices for discussion:
An unjustified stigma currently surrounds retractions, and the opaqueness of many retraction notices betrays improper feelings of embarrassment. Nearly 60% of retraction notices linked to misconduct only mention error, loss of data or replication failure, and less than one-third point to a specific ethical problem. Editors writing these notices often use ambiguous euphemisms in place of technical definitions of misconduct, perhaps to prevent legal actions (see www.retractionwatch.com). Although retraction notices are becoming more transparent, many journals still lack clear policies for misconduct and retraction, and existing policies are applied inconsistently [19,20,21]. It is worth pointing out that journals with a high impact factor are more likely to have clear policies for scientific misconduct [22,23]. This datum offers a simple, and largely overlooked, explanation for the correlation observed between journal impact factor and retraction frequency, which instead is usually attributed to higher scrutiny and higher prevalence of fraudulent papers in top journals [1,7].
We asked Ferric Fang, who has of course studied retractions and is more of a proponent of the growing misconduct hypothesis, for his take:
The increasing number of retracted articles often raises the question of whether this reflects more misconduct or greater scrutiny. Fanelli’s recent paper is an interesting attempt to answer this question. However, I have some concerns about the study’s methodology and conclusions. One line of evidence is that records marked as “correction” in the Web of Science have been stable since the 1970s. However, corrections are frequently of a very minor nature, and it is conceivable that a significant increase in misconduct would be masked by a large pool of corrections that are trivial in nature or the result of honest error. I have also learned, on the basis of studies that I performed with Arturo Casadevall and Grant Steen, to be careful about relying on journal retraction and correction notices.
A second line of evidence advanced by Fanelli is that the ORI caseload has been stable. I have puzzled over this fact as well. However, it is important to recognize that the ORI was only formed in 1992, and the rise in the rate of retractions had already begun prior to that time (Fang et al. PNAS 109:17028, 2012). Furthermore, Fanelli examined the ORI caseload from 1994-2011, but John Dahlberg reports that the ORI caseload has sharply risen in 2012-2013.
Given all that, Fang reaches a somewhat different conclusion from Fanelli’s:
I agree with Fanelli that retractions are an imperfect reflection of research misconduct and that attempts to extrapolate from retractions to the scientific enterprise at large must be made cautiously. Nevertheless, the detailed story behind each retraction has revealed useful insights into why scientists may be tempted to engage in misconduct and how journals and institutions may inadvertently encourage or enable this behavior. The work of Retraction Watch has been particularly important in this regard. I also agree that growing efforts by journals and scientists to correct the scientific record should be strongly encouraged.
My own view is that changes in both author and journal behavior have contributed to the rise in retractions (Steen et al. PLoS One 8:e68397, 2013). Fanelli makes a case for an increasing propensity by journals to retract invalid papers, with which I can surely agree. However, this does not exclude the possibility that misconduct is also increasing. In fact, the nearly ten-fold rise in the percentage of articles retracted for fraud or suspected fraud since 1980 suggests that either journals failed to detect and retract 90% of fraudulent work prior to 1980, or that such work is more common today. I think the latter is more likely. With studies like that of Martinson et al. showing that a third or more of scientists admit to questionable research practices (Nature 435:737, 2005), I am not sure that I can share Fanelli’s sanguine view that the increase in retractions is a good sign.