Caveat scriptor: How a journal editor unraveled the mystery of the overlapping bad data

John Loadsman

Caveat scriptor—writer beware.

That’s the moral of a recent editorial in the Saudi Journal of Anaesthesia, prompted by the retraction in that journal of a 2014 paper with bum data.

The editorial was written by John Loadsman, an anesthesiologist in Sydney, Australia, and editor of the journal Anaesthesia and Intensive Care, who played a role in the retraction. Here’s how.

According to Loadsman, he was considering an article for his journal — a meta-analysis of previously published findings. On inspection, he noticed that some of the studies cited in the meta-analysis were potentially problematic, including

the fact that the data in two of the papers included in the meta-analysis were near identical in many respects. The outcome was twofold – the meta-analysis, the result of a lot of very hard work, had to be rejected for publication because at least some of the papers on which it was based were shown to be unreliable; and then the editors of both journals which published the papers with overlapping data had to be notified.  

Loadsman notified the Saudi journal, which has pulled the offending paper, titled “Evaluation of gabapentin and dexamethasone alone or in combination for pain control after adenotonsillectomy in children”:

The original article titled “Evaluation of gabapentin and dexamethasone alone or in combination for pain control after adenotonsillectomy in children ” published in Saudi Journal of Anaesthesia, on pages 317-322, Volume 8, Issue 3, 2014,[1] is being retracted as the author of this article has published a similar study in ‘Anesthesia Essays and Researches, on pages 167-70, Volume 5, Issue 2, in 2011 with identical data showed in the tables.[2] Plagiarism, fabrication, unethical or redundant publication violates the editorial policy of Saudi Journal of Anaesthesia, which follows best practice guidelines given by the International Committee of Medical Journal Editors (ICMJE) and Committee on Publication Ethics (COPE) mentioned on the Information for Authors and as codified in the signed statements made by the authors regarding the copyright of their work.

This article has been retracted on request of the Editor of the journal.

The doppelganger article remains in Anesthesia Essays and Researches, a Saudi title from Wolters Kluwer. We asked the editor of the journal why it has not retracted the paper; he said someone would get back to us.

Caveat scriptor

Meanwhile, Loadsman writes, the stakes of misconduct couldn’t be higher — for scientists as well as the public:

Caveat scriptor. This applies first to the writers of meta-analyses. These reviews often involve considerable effort, effort that might well be wasted if the review is unpublishable or, worse, potentially retracted or corrected after publication as a result of the inclusion of unrecognized fraudulent data from other sources. Writers of literature reviews and meta-analyses must take great care to critically evaluate the veracity of the data they incorporate.

But, more importantly, authors who have already published or who are intending to submit manuscripts containing fraudulent or falsified data, words that are not their own or have been inappropriately reused, or some other problem that falls into the broad classification of misconduct, need to understand that they run a very real and ever-increasing risk of being caught. The consequences in such cases can be severe – reputations, finances, personal liberty, and even, it now appears, lives have been lost.

‘We are waiting for Godot’

Loadsman has been busy lately. He wrote another editorial, this one in the UK journal Anaesthesia, lamenting how long journals take to retract papers they know need to be removed — often longer, even, than it took to publish them in the first place. (The editorial was in response to an article the journal published in August about the status of papers by fraud titans Yoshitaka Fujii and Joachim Boldt that had been slated for retraction. As we reported, journals had yet to officially remove 19 of the 313 combined tainted papers.)

Aside from calling for more and better education about the issue, Loadsman’s essay — which is worth a read — is short on solutions. But one he does offer, and which we have advocated in the past, is to make retractions part of the quality assessment for journals:

Perhaps editors might take a more proactive and timely approach to retracting papers if that activity was incorporated into journal performance indicators. Retracted papers are often cited as evidence, which editors of the cited journals cannot do much about, but when this occurs it would be relatively easy for the citing journal to have some penalty subtracted from their Impact Factor.

Perhaps. Still, Loadsman notes:

Prevention is, as usual, better than cure, and just as the conduct and publication of problematic research is driven by perverse incentives (institutional and individual reputations, grants, tenure, salaries and impact factors to name but a few), the investigation of alleged misconduct and the public correction of corrupt science where necessary are, unfortunately, hindered by perverse disincentives of the very same nature. Unless we can eliminate or manage these incentives at both ends of the process – corruption and correction – we are waiting for Godot.


4 thoughts on “Caveat scriptor: How a journal editor unraveled the mystery of the overlapping bad data”

  1. I am a journal editor for a high-quality journal, and I see fraudulent papers at least once a week, usually more often. Plagiarism is rampant among our submissions. I wish there were more I could do besides rejecting these papers, as I suspect the authors immediately submit them elsewhere.

  2. This does raise the question: if an author preparing a meta-analysis observes suspicious activity in papers they are reviewing (self-plagiarism, for example), what steps should they take?

    1. This is an excellent question for which I bet there is little, if any, general guidance available. Taking a cue from the steps John Loadsman took: if, while gathering studies for a meta-analysis, a researcher has serious suspicions of duplicate data (published as covert duplicates in separate papers or conference presentations), that researcher’s first duty is to seek clarification from the authors of the papers concerned. If there is no prompt response, or the response is less than adequate, that correspondence should perhaps be forwarded to the editors of the journals involved. Given that these issues take far too long to resolve, the researcher may wish to include the earlier data set in the meta-analysis (assuming no other issues exist with the paper), and then identify the other suspected duplicate in the resulting publication, in a footnote or similar mechanism, with a short explanation of why it was not included.
