We often hear — with data to back the statement — that top-tier journals, ranked by impact factor, retract more papers than lower-tier journals. For example, when Murat Cokol and colleagues compared journals’ retraction numbers in EMBO Reports in 2007, as Nature noted in its coverage of that study (h/t Richard van Noorden):
Journals with high impact factors retract more papers, and low-impact journals are more likely not to retract them, the study finds. It also suggests that high- and low-impact journals differ little in detecting flawed articles before they are published.
One thing you notice when you look at Cokol et al.'s plots is that although their models seem to take retractions “per capita” (in other words, per study published) into account, they don't report those figures.
Enter a paper published this week in Infection and Immunity (IAI) by Ferric Fang and Arturo Casadevall, “Retracted Science and the Retraction Index.” Fang, the editor of IAI, takes scientific integrity and retractions very seriously. He's made his thinking on these issues clear every time we've asked, and was part of the review of the Naoki Mori case that led to a 10-year ban on Mori publishing in American Society of Microbiology journals (including IAI).
For their IAI paper, Fang and Casadevall searched PubMed for retractions in 17 journals whose impact factors ranged from 2 to 53.5 (the New England Journal of Medicine was at the top). For those of you unfamiliar, here's how the impact factor, derived by Thomson Scientific's Web of Knowledge, is calculated:
The journal Impact Factor is the average number of times articles from the journal published in the past two years have been cited in the [Journal Citation Reports] year.
The Impact Factor is calculated by dividing the number of citations in the JCR year by the total number of articles published in the two previous years. An Impact Factor of 1.0 means that, on average, the articles published one or two years ago have been cited one time. An Impact Factor of 2.5 means that, on average, the articles published one or two years ago have been cited two and a half times. Citing articles may be from the same journal; most citing articles are from different journals.
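To make that arithmetic concrete, here is a minimal sketch in Python. The citation and article counts below are invented for illustration; they are not real journal figures.

```python
# Impact factor arithmetic, per the definition quoted above.
# All numbers are hypothetical, purely for illustration.
citations_in_jcr_year = 1500   # citations in 2010 to articles from 2008-2009
articles_two_years_ago = 300   # citable articles published in 2008
articles_one_year_ago = 300    # citable articles published in 2009

impact_factor = citations_in_jcr_year / (articles_two_years_ago + articles_one_year_ago)
print(impact_factor)  # 2.5 -> each article was cited 2.5 times on average
```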
Fang and Casadevall’s “retraction index” was a simple calculation: They took the number of retractions in the journal from 2001 to 2010, multiplied by 1000, and divided by the number of published articles with abstracts.
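In other words, the index is retractions per thousand published articles. A quick sketch of that calculation, with made-up counts standing in for any real journal's figures:

```python
# Retraction index as defined by Fang and Casadevall:
# retractions (2001-2010) x 1000 / articles with abstracts.
# The counts here are hypothetical placeholders.
retractions_2001_2010 = 10
articles_with_abstracts = 25000

retraction_index = retractions_2001_2010 * 1000 / articles_with_abstracts
print(retraction_index)  # 0.4 retractions per 1,000 articles
```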
They then plotted the retraction index against the impact factor. Here's that plot:
As you can see, that plot
revealed a surprisingly robust correlation between the journal retraction index and its impact factor (p < 0.0001 by Spearman rank correlation). Although correlation does not imply causality, this preliminary investigation suggests that the probability that an article published in a higher impact journal will be retracted is higher than that of an article published in a lower impact journal.
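For readers who want to try this kind of analysis on their own journal lists, here is a sketch of a Spearman rank correlation using scipy. The impact factors and retraction indices below are placeholders, not the paper's actual data.

```python
# Spearman rank correlation between impact factor and retraction index.
# The data points below are invented placeholders for illustration.
from scipy.stats import spearmanr

impact_factors = [2.0, 4.5, 9.8, 15.2, 31.4, 53.5]
retraction_indices = [0.1, 0.4, 0.3, 1.1, 2.6, 3.1]

rho, p_value = spearmanr(impact_factors, retraction_indices)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```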
The findings are also consistent with what Cokol et al. found. Fang and Casadevall offer a number of potential explanations for what they learned. Not surprisingly, given how outspoken Fang has been about misconduct and retractions, the authors pull no punches. For example, there is pressure to make results fit into a clean narrative:
In contradistinction to the crisp, orderly results of a typical manuscript in a high impact journal, the reality of everyday science is often a messy affair littered with non-reproducible experiments, outlier data points, unexplained results and observations that fail to fit into a neat story. In such situations, desperate authors may be enticed to take short cuts, withhold data from the review process, over-interpret results, manipulate images, and engage in behavior ranging from questionable practices to outright fraud (36).
So if journals are eager to trumpet their high impact factor, shouldn’t they also be willing, in the name of transparency, to let the world know how frequently papers are retracted? Maybe alongside every “New Impact Factor: 7.8” on a journal’s site should be “Retraction Index: 2.3.”
Fang and Casadevall’s paper — which includes commentary on how explicit retraction notices should be — should be required reading for anyone interested in scientific integrity.
It also mentions a blog we should probably check out:
Last year, the journalists Ivan Oransky and Adam Marcus launched a blog called “Retraction Watch,” which is devoted to the examination of retracted articles “as a window into the scientific process” (63); sadly, they seem to have no trouble finding material.