The title of this post is a question that we’ve been asking ourselves since we started Retraction Watch in August, and that others have asked us since. And we’ve gotten different answers depending on where we look:
- In our first post, we cited a study that found 328 retractions in Medline in the decade from 1995 to 2004.
- A study by Elizabeth Wager and Peter Williams, which we cited in an early post, found 529 between 1988 and 2008.
- A 2009 analysis by Thomson Scientific, at the request of Times Higher Education, found 95 in 2008.
So the real number is a) probably somewhere between 30 and 95 per year and b) increasing — which isn’t as precise as we’d like, but is hardly the fault of the various people who’ve tried valiantly to count.
Well, we may be a step closer to precision, sort of. A study published today in the Journal of Medical Ethics found 788 retractions from 2000 to 2010, which means the number is something in the 70s per year. We say “something in the 70s” since it’s not clear from the paper whether 2000-2010 was 10 years or closer to 11. [See update at end of post.]
The fact that the number is probably higher than 30 is not surprising to us, since it squares with our highly un-representative sample of 42 we’ve reported on over the past three-plus months. Annualized, that would be about 150, although it’s impossible to say whether we hit a particularly high rate as we started blogging.
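The back-of-the-envelope arithmetic behind “something in the 70s” and the annualized figure can be sketched as follows (treating “three-plus months” as roughly 3.5 months is our assumption):

```python
# 788 retractions over the study period, per year:
retractions = 788
per_year_10 = retractions / 10   # if 2000-2010 means 10 years: 78.8
per_year_11 = retractions / 11   # if it spans 11 calendar years: ~71.6
# Either way, "something in the 70s" per year.
print(round(per_year_10, 1), round(per_year_11, 1))

# Our own (unrepresentative) sample, annualized:
sample = 42
months = 3.5                      # assumed length of "three-plus months"
annualized = sample * 12 / months # ~144, i.e. roughly 150 per year
print(round(annualized))
```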
Nature’s Richard van Noorden — who alerted us to the study — took apart one particular claim in the paper in a blog post that’s worth a read. van Noorden was puzzled by this line: “American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries.” As van Noorden notes, the paper’s author, Grant Steen, had based that claim on the fact that
retractions by US authors have a high fraud-to-error ratio (a third of US retractions were due to fraud rather than some sort of mistake).
But this does not mean that any US scientist is more likely to engage in data fraud than a researcher from another country. Indeed, a check on PubMed publications versus retractions for frauds suggests that s/he may be less likely to do so (though the statistical significance of this finding has not yet been tested).
Read van Noorden’s post for that analysis.
Our own “expression of concern” — we don’t mean that in an official way, and we genuinely appreciate any effort to quantify retractions — was over this line: “13 papers were retracted because of an error at a journal office.” We find that hard to believe, since we’ve already unearthed five such cases in less than four months — read about them here, here, and here. So 13 over the preceding 10 years seems too few. But like van Noorden, we’ll have to reserve judgment until we’ve done a more rigorous analysis.
Will we have to retract that concern?
Update, 10:50 a.m. Eastern, 11/16/10: Steen responded to a few questions we sent him by email:
I found 788 retractions listed on PubMed for the calendar years 2000 to 2009 (10 years in total). Not surprisingly, few papers published in 2009 were retracted in this sample, but a large number of papers from prior years were retracted in 2009.
The rate of retraction definitely increased year-by-year over the period I evaluated. In your blog you quote James Parry, acting head of the UK Research Integrity Office (UKRIO), as saying that the increasing rate of retraction “might reflect a real increase in misconduct or, more likely, an increase in detection compared with 20 years ago.” This is probably true, as plagiarism-detection software has had a fairly major impact at some journals. But there’s another factor to consider; editors may be reaching farther back in time to retract, compared to years past, in an effort to purge the literature of false science.