The title of this post is a question that we’ve been asking ourselves since we started Retraction Watch in August, and that others have asked us since. And we’ve gotten different answers depending on where we look:
- In our first post, we cited a study that found 328 retractions in Medline in the decade from 1995 to 2004.
- A study by Elizabeth Wager and Peter Williams, which we cited in an early post, found 529 between 1988 and 2008.
- A 2009 analysis by Thomson Scientific, at the request of Times Higher Education, found 95 in 2008.
So the real number is a) probably somewhere between 30 and 95 per year and b) increasing — which isn’t as precise as we’d like, but is hardly the fault of the various people who’ve tried valiantly to count.
Well, we may be a step closer to precision, sort of. A study published today in the Journal of Medical Ethics found 788 retractions from 2000 to 2010, which means the annual number is something in the 70s. We say “something in the 70s” since it’s not clear from the paper whether 2000-2010 covers 10 years or closer to 11. [See update at end of post.]
The fact that the number is probably higher than 30 is not surprising to us, since it squares with our highly unrepresentative sample of 42 retractions we’ve reported on over the past three-plus months. Annualized, that would be about 150, although it’s impossible to say whether we happened to hit a particularly high rate just as we started blogging.
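For the curious, the annualization arithmetic behind all of these figures is simple enough to spell out. A minimal sketch (the 3.5-month window is our rough stand-in for “three-plus months”):

```python
# Annualizing the retraction counts cited above. All raw numbers come
# from the studies quoted in this post; the per-year arithmetic is ours.

medline_9504 = 328 / 10    # 1995-2004 Medline study: ~33 per year
wager_8808   = 529 / 21    # Wager & Williams, 1988-2008: ~25 per year
thomson_2008 = 95          # 2008 alone, per Thomson Scientific

steen_10yr = 788 / 10      # if "2000 to 2010" means 10 years: ~79 per year
steen_11yr = 788 / 11      # if it means 11 years: ~72 per year

rw_sample = 42 * 12 / 3.5  # our 42 cases over ~3.5 months: ~144 per year

print(medline_9504, wager_8808, thomson_2008)
print(steen_10yr, steen_11yr, rw_sample)
```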
Nature’s Richard van Noorden — who alerted us to the study — took apart one particular claim in the paper in a blog post that’s worth a read. Van Noorden was puzzled by this line: “American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries.” As van Noorden notes, the paper’s author, Grant Steen, had based that claim on the fact that
retractions by US authors have a high fraud-to-error ratio (a third of US retractions were due to fraud rather than some sort of mistake).
But this does not mean that any US scientist is more likely to engage in data fraud than a researcher from another country. Indeed, a check on PubMed publications versus retractions for frauds suggests that s/he may be less likely to do so (though the statistical significance of this finding has not yet been tested).
Read van Noorden’s post for that analysis.
Our own “expression of concern” — we don’t mean that in an official way, and we genuinely appreciate any effort to quantify retractions — was over this line: “13 papers were retracted because of an error at a journal office.” We find that hard to believe, since we’ve already unearthed five such cases in less than four months — read about them here, here, and here. Five in under four months annualizes to roughly 15 a year, so 13 over the preceding 10 years seems too few. But like van Noorden, we’ll have to reserve judgment until we’ve done a more rigorous analysis.
Will we have to retract that concern?
Update, 10:50 a.m. Eastern, 11/16/10: Steen responded to a few questions we sent him by email:
I found 788 retractions listed on PubMed for the calendar years 2000 to 2009 (10 years in total). Not surprisingly, few papers published in 2009 were retracted in this sample, but a large number of papers from prior years were retracted in 2009.
The rate of retraction definitely increased year-by-year over the period I evaluated. In your blog you quote James Parry, acting head of the UK Research Integrity Office (UKRIO), as saying that the increasing rate of retraction “might reflect a real increase in misconduct or, more likely, an increase in detection compared with 20 years ago.” This is probably true, as plagiarism-detection software has had a fairly major impact at some journals. But there’s another factor to consider: editors may be reaching farther back in time to retract, compared to years past, in an effort to purge the literature of false science.
Medline isn’t the world and neither is WoS! There are thousands of scientific journals not covered by Medline – all with the potential for retractions. WoS at least covers all research areas, but still only about 12,000 journals. I would say WoS is a lower bound.
You can search PubMed using the query “Retraction of Publication[Publication Type]”. This currently returns 1621 results. I count 1091 from 2000-2009; here’s a graph.
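For anyone who wants to reproduce that count, here is a minimal sketch using Biopython’s Entrez wrapper around the NCBI E-utilities. The year-by-year loop is our guess at how the linked graph was produced, and the email address is a placeholder NCBI asks you to fill in:

```python
from Bio import Entrez  # Biopython's wrapper around the NCBI E-utilities

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a real contact

# Count retraction notices indexed in PubMed for each year, 2000-2009,
# using the query from the comment above plus a publication-date tag.
for year in range(2000, 2010):
    term = f"Retraction of Publication[Publication Type] AND {year}[dp]"
    handle = Entrez.esearch(db="pubmed", term=term)
    record = Entrez.read(handle)
    handle.close()
    print(year, record["Count"])
```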
NSaunders – this (a PubMed query on retractions in the English language) is what I understand the original paper did; it’s just that there were only 788 retractions published from 2000 to the end of 2009 at the time Steen queried the database (22 January 2010). Obviously a lot more now…
Update: NSaunders – you are actually searching for retraction notices, whereas Steen searched for retracted articles using a different query (Retracted Publication[Publication Type]). Hence the difference; I just worked this out. There are still only 1090-odd retraction notices from 2000 to 2009, but the number of articles published in that time and subsequently retracted has gone up to 1105.
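To make that distinction concrete, here is a sketch of the two queries side by side, again via Biopython; the date-range syntax and the small helper function are our additions:

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder contact address for NCBI


def pubmed_count(term):
    """Return the number of PubMed records matching a query string."""
    handle = Entrez.esearch(db="pubmed", term=term)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])


# Retraction *notices* versus the *retracted articles* themselves,
# both restricted to publication dates in 2000-2009.
notices = pubmed_count("Retraction of Publication[Publication Type] AND 2000:2009[dp]")
articles = pubmed_count("Retracted Publication[Publication Type] AND 2000:2009[dp]")
print("retraction notices, 2000-2009:", notices)
print("retracted articles, 2000-2009:", articles)
```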
You can view this as an increase in fraud or as an increase in reporting of problems with studies. Which is it?
That is really hard to estimate. My impression is that both fraud and reports on fraud are on the rise. I also think there is a growing awareness that people tend to take so-called shortcuts because of pressure, ambition, and craving for recognition. Searching PubMed for retract* with “limits” set to the Title field will yield several instances of retractions originating from the labs where the original paper came from (apart from the bulk of the Mori, Boldt, and Bulfone-Paus retractions, which clutter the first pages, too).
Perhaps many PIs realize that the time has come to go through published manuscripts and check the original data before others find irregularities. Or perhaps increasing public awareness of fraud has simply reminded them that checking is better than trusting. After all, it is the PIs’ responsibility to ensure the integrity of published data.
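For readers who want to rerun the title-field search mentioned two paragraphs up, PubMed’s old “limits” interface corresponds to the [ti] field tag. A minimal sketch (the retmax value is arbitrary):

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact address

# PubMed's truncation operator (*) plus the [ti] tag reproduces a search
# for retract* limited to the Title field.
handle = Entrez.esearch(db="pubmed", term="retract*[ti]", retmax=20)
record = Entrez.read(handle)
handle.close()
print(record["Count"])   # total number of matching records
print(record["IdList"])  # first 20 PMIDs, for manual inspection
```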