Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record

A new study out today in the Proceedings of the National Academy of Sciences (PNAS) finds that two-thirds of retractions are due to some form of misconduct — a figure higher than previously thought, because earlier estimates were skewed by the same unhelpful retraction notices that cause us to beat our heads against the wall here at Retraction Watch.

The study, which covers 2,047 retractions of biomedical and life-science research articles indexed in PubMed from 1973 through May 3, 2012, brings together three retraction researchers whose names may be familiar to Retraction Watch readers: Ferric Fang, Grant Steen, and Arturo Casadevall. Fang and Casadevall have published together before, including on their Retraction Index, but this is the first paper by the trio.

The paper is — as we’ve come to expect from these three — an extremely careful analysis, the most comprehensive we’ve seen to date. Other studies have offered clues to these trends, but by covering so many years of data and by drawing on secondary sources for the reasons behind retractions, this paper becomes a very important contribution to our understanding of what drives retraction.

The study is convincing evidence that we’re onto something when we say that unhelpful retraction notices distort the scientific record. We’re thrilled that the authors’ analysis of opaque retraction notices relies heavily on Retraction Watch posts, as indicated in Table S1, “Articles in which Cause of Retraction was Ascertained from Secondary Sources.” This is exactly what we’ve been hoping scholars would start doing with our individual posts — and we welcome more of these kinds of analyses.

When the authors reviewed the secondary sources available to them — news stories and Office of Research Integrity reports, in addition to Retraction Watch and others — they ended up reclassifying the cause of retraction in 158 cases. That led them to conclude that:

…only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).

Compare that with Grant Steen’s findings from ten years’ worth of retractions (about a third as many as in the current paper), published early last year:

Error is more common than fraud; 73.5% of papers were retracted for error (or an undisclosed reason) whereas 26.6% of papers were retracted for fraud (table 1). The single most common reason for retraction was a scientific mistake, identified in 234 papers (31.5%). Fabrication, which includes data plagiarism, was more common than text plagiarism. Multiple reasons for retraction were cited for 67 papers (9.0%), but 134 papers (18.1%) were retracted for ambiguous reasons.

It’s now clear that the reason misconduct seemed to play a smaller role in retractions in previous studies is that so many notices said nothing about why a paper was retracted. If scientific journals are as interested in correcting the literature as they’d like us to think they are, and want us to believe they’re transparent, the ones that fail to include that information need to take a lesson from those that do.

Yes, we’re looking at you, Journal of Biological Chemistry, as are the authors:

Policies regarding retraction announcements vary widely among journals, and some, such as the Journal of Biological Chemistry, routinely decline to provide any explanation for retraction. These factors have contributed to the systematic underestimation of the role of misconduct and the overestimation of the role of error in retractions (3, 4), and speak to the need for uniform standards regarding retraction notices (5).

Those standards exist, of course — here are COPE’s — but some journals don’t seem to think they’re worth following.

The fact that just one in five retractions is due to honest error suggests that researchers who say retractions should be reserved for fraud are simply reflecting common practice. There’s been an interesting debate recently about when a retraction is appropriate, and the findings may inform that, too.

The question, of course, is: how common is scientific misconduct? The simple but unsatisfying answer is that we don’t know, certainly not based on this study, because it looks only at retractions. Some of the best data we have come from a 2009 paper in PLoS ONE by Daniele Fanelli, a systematic review that pools the findings of surveys of researchers in a meta-analysis. He concludes:

A pooled weighted average of 1.97% (N = 7, 95% CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once – a serious form of misconduct by any standard – and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.

Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.

In other words, about 2% of scientists admit to having committed misconduct themselves, but almost three-quarters say their colleagues have been involved in “questionable research practices.” And even those may be low figures.
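
For readers unfamiliar with the jargon, a “pooled weighted average” combines the estimates from the N surveys into one figure, typically weighting each survey by the inverse of its variance so that more precise surveys count for more. A generic sketch (Fanelli’s exact meta-analytic model may differ):

    \hat{p} = \frac{\sum_{i=1}^{N} w_i \, p_i}{\sum_{i=1}^{N} w_i},
    \qquad w_i = \frac{1}{\mathrm{Var}(p_i)}

Here p_i is the proportion of respondents admitting misconduct in survey i and Var(p_i) is its sampling variance; pooling the N = 7 self-report surveys this way is what produces a single headline figure like the 1.97% above.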

As the authors of the new PNAS study point out, all we can say for sure, based on their findings, is that misconduct plays more of a role in retractions than we thought it did. But we think they make a good argument for why retractions may be the canary in the coal mine when it comes to fraud, writing that:

…only a fraction of fraudulent articles are retracted; (ii) there are other more common sources of unreliability in the literature (41–44); (iii) misconduct risks damaging the credibility of science; and (iv) fraud may be a sign of underlying counter-productive incentives that influence scientists (45, 46). A better understanding of retracted publications can inform efforts to reduce misconduct and error in science.

The paper is part of a growing oeuvre on retractions by the authors, two of whom have testified at the National Academy of Sciences:

We have previously argued that increased retractions and ethical breaches may result, at least in part, from the incentive system of science, which is based on a winner-takes-all economics that confers disproportionate rewards to winners in the form of grants, jobs, and prizes at a time of research funding scarcity (32, 46, 47).

The authors also found that the reasons for retraction seemed to vary by geography:

Most articles retracted for fraud have originated in countries with longstanding research traditions (e.g., United States, Germany, Japan) and are particularly problematic for high-impact journals. In contrast, plagiarism and duplicate publication often arise from countries that lack a longstanding research tradition, and such infractions often are associated with lower-impact journals (Fig. 3 and Table 1).

Those findings, as the authors make clear, are based on raw data, not a statistical analysis. That’s because to do the latter, and show that a given reason for retraction is actually more common in a given country or region, you’d need the total number of papers published in that country or region, and that goes beyond what’s available in PubMed. Fang tells Retraction Watch:

Our analysis of geographical data was performed with a simple purpose in mind.  We were interested to see whether the geographical distribution of retractions differs depending on the cause (since the raw data showing countries of origin for papers retracted for fraud, plagiarism or duplicate publication have the same denominators, the three categories can be compared with each other).  This leads us to suggest that the dynamic of retractions for each of these causes is different in space (as well as in time), and should therefore be considered as separate events that are likely to have different underlying causes.  However it would not be appropriate to compare individual countries with each other, e.g. to say that plagiarism is more common in country X than in country Y, because that would require correction for the number of publications from each country.
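
To make the denominator point concrete, here is a toy calculation with entirely invented numbers (nothing below comes from the paper), showing how raw counts and per-paper rates can rank two countries in opposite ways:

    # Toy illustration with invented numbers (not from the PNAS paper):
    # raw retraction counts vs. per-publication rates can rank countries differently.
    retractions = {"Country X": 40, "Country Y": 10}            # hypothetical plagiarism retractions
    publications = {"Country X": 400_000, "Country Y": 20_000}  # hypothetical total papers

    for country, count in retractions.items():
        rate = count / publications[country]
        print(f"{country}: {count} retractions, {rate * 1e5:.0f} per 100,000 papers")

    # Country X: 40 retractions, 10 per 100,000 papers
    # Country Y: 10 retractions, 50 per 100,000 papers

Country X has four times as many retractions in absolute terms, yet Country Y’s rate is five times higher, which is exactly why the authors stop short of cross-country comparisons without publication totals.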

The data do agree in general terms with those in another recent paper by medical writers in Australia. That paper, by Serina Stretton, Karen Woolley, and colleagues, could reliably conclude that, among retractions for misconduct, first authors from lower-income countries have more retractions for plagiarism; for similar denominator reasons, however, it could not determine whether such authors have more retractions for plagiarism as a proportion of all the papers they publish.

What will be interesting to watch is what happens if the authors, or anyone else, repeat this kind of analysis in a year, or in five years. Will journals pay attention and write more informative notices? If so, will the share of retractions attributed to misconduct keep growing? Retractions are increasing so quickly that those issued in a single year can represent as much as a quarter of all papers ever withdrawn, which means the trends the authors identify could become even stronger.

Some of this may echo interviews that Ivan did about the study over the past week. We’ll update this post with links to those stories as they appear.

25 thoughts on “Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record”

  1. ‘you’d need the total number of papers published in that country or region, and that would go beyond what’s available in PubMed’ – I think this actually is available in PubMed, by searching the [affl] tag for various countries and years.

    1. Agreed, you can search by country of affiliation, but does PubMed have as good coverage of non-English-language journals as it does of those in English?
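
      For anyone who wants to try it, here is a minimal sketch of such a count query, assuming Biopython’s Entrez module; the contact email and example terms are placeholders, and [ad] is the commonly documented affiliation field tag (the [affl] variant mentioned above may behave differently):

          # Minimal sketch: count PubMed records by affiliation country and year.
          # Assumes Biopython is installed; the email and query terms are placeholders.
          from Bio import Entrez

          Entrez.email = "you@example.org"  # NCBI asks for a contact address

          def pubmed_count(country, year):
              """Number of PubMed records matching an affiliation country and a year."""
              term = f"{country}[ad] AND {year}[dp]"  # [ad] = affiliation, [dp] = publication date
              handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
              record = Entrez.read(handle)
              handle.close()
              return int(record["Count"])

          for country in ("United States", "China", "Japan"):
              print(country, pubmed_count(country, 2011))

      Whether such counts make reliable denominators still depends on the coverage question raised above.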

  2. I did a similar research study when I was Associate Director in the Office of Research Integrity (ORI), working with Dr. Mary Scheetz of ORI and Dr. Sheldon Kotzin of the NIH National Library of Medicine (MEDLINE). We submitted an abstract for Drummond Rennie’s 2005 Peer Review Congress (but it was not accepted for presentation):

    “We noticed a high correlation between the retraction of publications involving United States Public Health Service (USPHS) support as listed in MEDLINE® and the existence of a related research misconduct case involving one of the authors of the retracted publications. We describe an analysis of that data, and related data involving other cases known to ORI staff or made public as involving allegations of misconduct. Approximately 64% to 72% of such PHS-related retracted publications were found to be associated with allegations and/or findings of research misconduct known to ORI, over the years that such retractions have been indexed in MEDLINE. We speculate whether a significant fraction of the other 28% of the retracted papers may also have involved PHS-related research misconduct, rather than being retracted because of errors or other scientific judgment reasons. We furthermore encourage editors to examine seriously requests for retraction, to ensure that those involving scientific misconduct will identify the person/author responsible and exonerate the other authors. . . .

    “Over the past three decades there have been 572 retracted publications listed in MEDLINE, an average of about 20 publications retracted per year, about 30% of which involved PHS support. Of the 175 retracted publications cited in MEDLINE with PHS support, 114 (64%) were known by ORI staff to have involved cases in which research misconduct was alleged (some cases led to inquiries but no investigation, but others led to investigations, most of which found research misconduct for the authors/papers cited). When the 43 other retracted publications known to ORI staff to have been involved in such allegations or investigations related to PHS-appropriated funds are added, then of all of the 218 such retracted publications, 156 (72%) were known by ORI staff to have been related to research misconduct cases.”

  3. With regard to the finding that in the USA misconduct is associated with higher-impact papers than in other countries: I would guess this is directly related to the fact that to get tenure at a good US university, or to get an NIH grant, you need publications in high-impact journals, while in Asian countries a series of papers in lesser-impact journals may suffice.

  4. At present there is a fundamental flaw in the conclusions regarding retractions, which stems from the false assumption that ALL papers that deserve to be retracted are actually retracted. In other words, the conclusions are currently based on incomplete data that do not represent (i.e., mask) the real degree of misconduct and its distribution (countries, universities, fields, causes, etc.).

    In reality, the editors/publishers/institutions resist (often fiercely) any calls for retraction, even when the misconduct is well documented. As I have pointed out on RW, “The higher the position of the Faculty member and the more serious the misconduct is, the more contraventions of their own Framework they will commit to cover it up”.

    Evidence 1:
    Elsevier retracts papers (in English) of Chinese authors for “substantial duplication” of publications in Chinese (!) http://www.retractionwatch.com/2012/09/20/slew-of-retractions-appears-in-neuroscience-letters/

    but at the same time refuses to do so in an identical case of substantial duplication of publications in English, when the authors are based in Canada (University of Toronto) and Spain (Universitat Pompeu Fabra). (See my comments at the above RW post.)

    Evidence 2:
    There is NO retraction in a case of what I can only call Mega-Duplication (100% duplication), where the whole paper has been reproduced 1:1 (please note that it appears twice in English, i.e. there is not even a translation into another language to provide some kind of justification!), when the author is based in the USA (Johns Hopkins University Bloomberg School of Public Health).
    According to PubMed
    http://www.ncbi.nlm.nih.gov/pubmed/19771949,19276329?report=docsum

    one and the same paper appears twice:
    1. What we mean by social determinants of health.
    Navarro V., Int J Health Serv. 2009;39(3):423-41. PMID:19771949
    2. What we mean by social determinants of health.
    Navarro V., Glob Health Promot. 2009 Mar;16(1):5-16. PMID:19276329

    Conclusion:
    At present the conclusions regarding retractions do not represent the real degree of misconduct and its distribution around the world. A Transparency Index, which takes into account whether the editors/publishers/institutions Did-the-Right-Thing when evidence of misconduct was presented to them (and not the mere existence of Frameworks/Guidelines to deal with misconduct), has the potential to provide a more realistic picture of misconduct and its distribution around the world.

    1. YKBOA, the two entries you refer to explicitly note

      “Republished in Int J Health Serv. 2009;39(3):423-41.” and “Republished from Glob Health Promot. 2009 Mar;16(1):5-16.”, respectively.

      It is apparently explicitly mentioned in Int J Health Serv that this is a republication; otherwise PubMed could not have added this comment. Quite different from the Neuroscience Letters situation, where the authors did *not* inform the journal that this was largely a republication of earlier work, and likely did not ask for permission either.

      It also is not a republication of scientific data, but of an inaugural address; that is, it is not even a peer-reviewed paper meant to convey new results.

      1. Why should you get another publication because you give a lecture?
        Such things should not even be entered into PubMed.
        The word “republished” should appear in the title, not in a sidebar.

      2. Marco Berns:
        Again, it is clearly mentioned, so there simply is no misconduct. The journals knew of and approved the duplication. It’s the same with the occasional translation of prior papers. Journals do publish such at times, and as long as it is clearly mentioned, I don’t see the problem.

      3. YKBOA, why should I give you an explanation to your “evidence 1”? I’m not your slave, responding to your every whim.

        And no, Marco Berns did not give “the right answer”, because the supposed double standard you claim is not present in the second case you mentioned. The Chinese authors did not get the journals’ approval to duplicate; they did not even ask. That is misconduct. Asking for and getting approval to duplicate is not misconduct. Simple. It may be valid to ask why the journals allowed the duplication, but that is an editorial decision that is theirs to take, and it is not grounds for you to accuse the author of misconduct. Why don’t you ask the journals why they allowed it? Or are you not interested in actually doing something, other than anonymously moaning about perceived injustices?

  5. More retractions for misconduct than for error does make sense (if you didn’t already say this; I skimmed this entry the other day) because there are other avenues for errors: errata and the like.

    1. Yeah, in recent years in Web of Science there have been about 0.75% correction notices (errata and other corrections) and 0.02% retraction notices. Most of the errata are trivial, so I’ve never worked out how many are substantive corrections.

      1. 20 years in science and 10 direct bosses. One has been caught fabricating and manipulating data, another is under suspicion. I asked a colleague in how many labs she had seen people making things up: 2 out of 7. I think that something like this (not so far out of line with what Daniele Fanelli found) is closer to reality than the minuscule percentages that the establishment, e.g. Nature, tries to reassure itself with. Of course we have noses so we can rest spectacles on them. Always going on about how it is only a small minority, and knowingly using “cleverology” to obscure the problem.

        Recently I read a piece in Nature, where the organ has changed its tune: now it is OK to report scientific misconduct anonymously. The article suggested using “telephone hotlines”. I have never seen such things. Where are they?

        http://www.nature.com/naturejobs/science/articles/10.1038/nj7396-137a

        “If the concerns persist, the next step is to decide how to lodge the complaint — either anonymously using telephone hotlines found in many institutes, or in person”

  6. Retractions are not isolated. As we can see, if somebody does “play with their data” once, they will do it again, or have done it before.
    My suggestion: once we have a retraction or multiple “corrections” from the same group, go back and look at all their publications. I am sure we will be shocked by what we find.

  7. Congratulations to Fang et al. for completing such a major research project. As will become evident, we have direct insight into the effort required. We were interested in this paper (and not just because we were mentioned in the article above) for three reasons:

    1. The results of this paper (with Steen as co-author) overturn the conclusions of a previous and relatively recent paper (with Steen as first and only author; JME 2011). We are pleased that in this latest paper, the authors have changed Steen’s definition of misconduct and decided to include plagiarism as misconduct. Steen’s earlier paper classified plagiarism as error, leading to the completely different conclusion that ERROR was twice as high as misconduct.

    2. The results of this paper align much more closely with the original research we presented on this topic at the Peer Review Congress (2009) and published in CMRO (2011). Even back in 2008, our temporal analysis showed that misconduct was overtaking error (i.e., “The results from this study also indicate that misconduct retractions are increasing over time”). The latest analysis by Fang et al. confirms our results – indeed, Fang et al. note that “Perhaps most significantly, we find that most retracted articles result from misconduct…”; the trend we picked up in 2008 has now been confirmed. Surprisingly, our paper, which highlighted this important finding, was not cited by Fang et al., even though it was published in a well-respected, peer-reviewed, MEDLINE-listed journal, was one of the top 10 most downloaded papers from CMRO that year, was cited in Retraction Watch, and was even commented on by Steen. A number of other conclusions that we made in that paper have also been confirmed in the paper by Fang et al., and were highlighted by Retraction Watch:

    “Retractions for fraud or other reasons occurred much more often with papers having one author, articles from countries considered to be low or middle income — Iran, China, India, etc. — and papers with an author who’d had at least one other retraction. All of which matches our brief experience covering this subject and, as far as the first part, makes perfect sense: after all, it’s easier to get away with fraud when there’s no one to rat you out.”

    3. The major effort made by Fang et al. (and we do recognise the time required for such research!) contributes more evidence to help us identify and prioritise resources to address the different types of misconduct. For example, evidence-based strategies designed to reduce plagiarism in China are likely to be much different from the evidence-based strategies designed to reduce data falsification and fabrication in the US. In our latest research (cited in the article above; Stretton et al., CMRO 2012; also presented internationally in 2011), we provide more evidence on plagiarism misconduct.

    We hope our new paper and indeed our previous paper are of interest to Fang et al. and all those interested in evidence on misconduct retractions.

    Dr Serina Stretton and Professor Karen Woolley

  8. I’m a journal editor and have always been wary of accusing authors of misconduct for fear of litigation. I have rejected manuscripts that I thought were dodgy but would not communicate this reason for rejection to the authors. While I can understand that not disclosing reasons for retraction when misconduct is suspected is frustrating for readers of Retraction Watch, don’t you think that editors and publishers should be mindful of the risk of litigation, especially where a case has not been proven?

    1. I don’t find it surprising at all. But is there nothing you can do with suspicions? For example, contacting the institution’s officer for scientific conduct? It is unlikely that would actually result in anything, but there must be some system that could be put in place that could register such concerns.

      1. Wouldn’t phrases like “suspected image manipulation” or “alleged plagiarism” be safe from litigation, especially if there is clear evidence such as figure splicing or documented identical text?

  9. In reply to Marco, October 21, 2012 at 7:12 am

    RE: Evidence 1
    FYI, for over a year now I have been asking the journals, the institutions, the publisher, and COPE, and so far they ALL are very reluctant to do the right thing. As a result of this reluctance, these authors have committed more misconduct since.
    From your answer I can see that you did not even bother to look into all the details of this case, yet you are very selective when making judgements. Whether you realise it or not, you only encourage fraudsters.
    Answering you is a great waste of time.
    Farewell.

    1. I have absolutely the same experience with the editor of Gaceta Sanitaria, Elsevier, and COPE.
      I wonder: are you and I “exceptions”, or the rule of how Elsevier and COPE treat obvious and straightforward misconduct?

      The Transparency Index, which shows whether the editors/publishers/institutions Do-the-Right-Thing (when evidence of misconduct is presented to them), has the potential to safeguard the implementation of One-Rule-for-All. If academic publishing embraces the rule of law, then the TI is the tool to ensure it.

      P.S. May I feature my case, which involves the University of Toronto, Elsevier, and COPE, on your website?

      1. I don’t think you can blame COPE here; they can only advise, not enforce.
        Probably not Elsevier either, because some journals they publish apparently deal quite well with retractions, at least comparatively.
        That just leaves the Editor/Editorial Board etc. of the particular journal, who may be hesitant to get involved, not only for legal reasons but also simply for lack of time.
        (The above is only meant as an explanation; of course I agree with you that lack of action when misconduct is suspected can never be justified.)

  10. Would it be possible for RW to identify and link to the file that contains the full listing of the >2,000 retractions that served as the basis set for this study? This is generally done in informatics-type reports, but I did not see such a file in the Supplementary Information associated with the PNAS article. Making it available here would ease and promote other comparative and novel studies of the same data set, rather than others regenerating distinct data sets or each interested party making the request of the study authors. A thought to consider, for a new RW article or an update of this one. Cheers. PNP

  11. I would like to know how one can take legal action against a journal whose editor is biased and refuses to retract a fake, plagiarized paper.
