When it comes to retracting papers by the world’s most prolific scientific fraudsters, journals have room for improvement

Journals have retracted all but 19 of the 313 tainted papers linked to three of the most notorious fraudsters in science, with only stragglers left in the literature. But editors and publishers have been less diligent when it comes to delivering optimal retraction notices for the affected articles.

That’s the verdict of a new analysis in the journal Anaesthesia, which found that 15% of retraction notices for the affected papers fail to fully meet standards from the Committee on Publication Ethics (COPE). Many lacked appropriate language and the requisite watermarks stating that the articles had been retracted, and some papers have vanished from the literature entirely.

The article was written by U. M. McHugh, of University Hospital in Galway, Ireland, and Steven Yentis, a consultant anaesthetist at Chelsea & Westminster Hospital in London. Yentis was editor of Anaesthesia during the three scandals and had a first-hand view of two of the investigations. He also is the editor who unleashed anesthetist and self-trained statistician John Carlisle on the Fujii papers to see how likely the Japanese researcher’s data were to be valid (answer: not very likely).

McHugh and Yentis looked at the fates of the 192 papers tabbed for retraction during institutional investigations into the work of Yoshitaka Fujii — the most prolific science fraudster to date; the 98 tainted articles from Joachim Boldt (number 2 on our leaderboard); and 23 such papers by Scott Reuben, the pain specialist in Massachusetts whose misconduct case helped spark this website. (The findings of the various investigations are linked here.)

All of Reuben’s papers have been retracted by the 10 journals that published his bogus findings. But for Boldt and Fujii, the record isn’t quite so perfect. McHugh and Yentis found that 35 of the combined 290 articles by the two anesthesiologists had yet to be retracted when they began their analysis, despite multiple investigations and significant public attention over nearly a decade. (An analysis in 2013 found that nine, or 10%, of the 88 papers by Boldt then slated for retraction hadn’t yet been retracted.)

The project also involved emailing editors late last year about the status of the papers; as a result, Yentis said, journals retracted another 16 articles by Boldt and Fujii by this May. That reduced the gap to 19 — or 6% of the total.

According to the delinquent journals, reasons for failing to retract included:

  • still “looking into” the issue: six journals (seven papers)
  • decided not to retract because the article was a review: one journal (one paper)
  • not intending to act: one journal (one paper)
  • awaiting a specific recommendation to retract from the institutions: one journal (six papers)
  • article not listed as definitely fraudulent in the institutional investigation (although not confirmed as genuine), so no action taken: one journal (one paper)

Yentis told us by email that he was surprised that so many Fujii papers have yet to be retracted.

Also, the survival curves of the retractions showed a much slower rate of retraction for Fujii than for Reuben and Boldt; given the scale of Fujii’s wrongdoing (and the press coverage that followed), I would have expected much more prompt action on the part of the journals. Perhaps it reflected the broad range of journals and publishers, or the time since the original paper had been published.

I think what it’s thrown up is the gap between the various parties’ roles once fabricated papers have been identified as such: there may be several journals affected but it’s not clear currently who has the responsibility for informing them all. I would argue this must be the responsibility of the bodies conducting the investigations.

Regarding the inadequate notices, Yentis said giving COPE sharper teeth might help:  

There’s clearly another gap, between COPE’s issuing of standards and the adherence to them by journals and publishers. COPE’s only power is that of persuasion, although I agree that COPE could be more vigorous in doing so – though I’d imagine there are resource constraints preventing this, plus perhaps the wish to avoid being seen as a regulator.

Yentis said that as editor-in-chief of Anaesthesia, he and his colleagues performed “internal audits” using COPE guidelines as a benchmark:

It’d be nice to see a table displayed on each journal’s website showing which standards they do or don’t meet, though I guess the ones that don’t do much would be the least likely to post such a table.

Although Fujii’s place in the pantheon of fraudsters seems secure for the moment, Yentis said the smart money is on a worthy successor or two yet to be detected:

All I can say to editors and publishers is that it’s *really* tedious to deal with, but the responsibility to pursue wrongdoing goes with the position, and cannot be shirked, whatever the scale. COPE’s flowcharts and guidelines are so, so useful, and provide a template for how to deal with it.


Comments

  1. COPE should begin maintaining a list of compliant and non-compliant journals so scientists can choose to publish in journals that take such standards seriously.

  2. Thanks, Steve and Adam, for the post.

    Steve, you say in your Anaesthesia article “We think that the weakest aspect of investigations into research misconduct is that there is no coordinated mechanism to inform journals not involved in the index investigation.” You extend that to suggest that harmonizing attempts to maintain integrity of the published record might be the responsibility of “the bodies conducting the investigations, that is, the institutions.”

    There’s lots going on right now to address this.

    The recent RePAIR Consensus Guidelines and the CLUE recommendations are worth revisiting, and suggest what that “harmonizing” might need. Connecting the people responsible for research integrity at universities and at publishers seems like an essential step, too. COPE is working with six universities and research institutions on a pilot to understand what COPE might provide as a package of support and advice for research institutions. (And big thanks to the people from Caltech, Ohio State University, Ottawa Hospital Research Institute and University of Ottawa, Queensland University of Technology, and University of Hong Kong.) In May this year the Russell Group Integrity Forum, with COPE, held a research integrity workshop for Russell Group universities, editors and publishers about key research integrity and publishing ethics issues they face and share. UKRIO invited people from the Russell Group Integrity Forum and COPE to share outcomes at its annual meeting. Coming up in September, Ohio State University is holding a summit prior to the ARIO annual conference, titled “Seeking Solutions in Research Integrity: A View from All Perspectives,” at which COPE people are speaking.

    So the issue of communication between institutions and journals is one many of us are exploring in great depth. How we get from that to a coordinated mechanism to inform journals of the outcomes from investigations feels like what needs to come next.
