Authors claim clinical trial data came from one center. It came from three.

A BMJ journal has retracted a 2017 paper that misrepresented the clinical trial it reported.

The Acupuncture in Medicine paper reported the results of a clinical trial on the effects of acupuncture and Chinese herbal medicine on stroke outcomes, which it said had been gathered at a single center. However, in November, the journal’s editors discovered that the authors had conducted the trial at three centers and had already published the data in Scientific Reports in 2016. The authors say the duplication and misrepresentation of the data stemmed from “confusion and misunderstanding.”

Editor-in-chief David Carr told Retraction Watch that an editor not involved in the peer review process had found the Scientific Reports paper, which had been published before the authors submitted their manuscript to Acupuncture in Medicine. Carr questioned the authors about the data, and in early December:

The authors confirmed the data were from the same study.

According to the retraction notice for “Effect of acupuncture and Chinese herbal medicine on subacute stroke outcomes: a single center randomized controlled trial,” the third author Lifang Chen took responsibility for the “mistake.” Chen, who works at The Third Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, told us that a combination of miscommunication and carelessness on her part was to blame for the situation.

Chen explained that in October 2015, before the authors had submitted the Scientific Reports article, she had asked the second author Crystal Lynn Keeler to write a paper focused on data from a single center. Chen said she “thought this single center trial deserved to be published” on its own, but did not realize that would be considered “redundant publication”:

… we thought the data of these two articles are different, and the single center trial was conducted independently. This condition is allowed in China.

Keeler, who received her doctorate from Zhejiang Chinese Medical University, explained that she drafted the paper from the materials she received but:

There was confusion and misunderstanding due to my being in the United States, due to language barriers, and due to electronic complications.  … I thought they reduced the study sample because of the huge costs, since I was only given the one-center 120 participant data to analyze and write-up. … I did not know they split it off from the 3-center trial data.

Carr told us that the journal had also raised concerns about the study during peer review because the trial registration mentioned a three-center design; however, the authors told the reviewers that the study had been changed from multi- to single-center, and that none of the content had been published or accepted elsewhere. Carr said that “the author[s] were taken at their word” but:

In retrospect, this was clearly a misrepresentation, as it transpired that patients were indeed recruited from the other two centres, with the full three-centre dataset having been published in Sci Reports just a few days prior to their submission to us.

Keeler explained that she had responded to the reviewer comments because a firewall issue in China prevented the other authors from doing so:

The pop-ups in the reviewer comment platform do not work in China.  The only way for researchers to respond to platform-based reviewer comments in a pop-up is to have someone in another country respond. … Incorrectly, I assumed that my responses to the reviewers were valid.  

Keeler also noted:

If I had been in China communicating on a daily basis with my colleagues, I think I would have caught the error  … This was a very unfortunate series of events. I would never have submitted this paper if I had known.

Keeler was a co-author on the 2016 Scientific Reports paper; she explained that she proofread many of the group’s papers:

My group ended up listing me as author on a couple of those papers … [but] I do not remember every paper I proofread

Chen said the retraction is ultimately her responsibility because she did not read the manuscript carefully enough before submitting it and did not catch the false statement:

It’s my fault …  I should take full responsibility.


3 thoughts on “Authors claim clinical trial data came from one center. It came from three.”

  1. I know I’m repeating myself, but the impression a retraction gives to the general readership is that the results reported there are wrong or at least unreliable. With all other kinds of misconduct, a reader can still rely on what he has seen. If a result he bases an argument on is retracted, it does not help him in the least if the same result is published in another place he is not aware of. All-purpose punishment is a misuse of the instrument of retraction.

    1. Retraction is appropriate in situations where the same data – unbeknownst to reviewers, editors, or readers – are used to reach similar conclusions in separate papers.

      Even if the analysis is ‘honest’ in both manuscripts, the duplicate publication of the data gives a misleading impression of replication. Given that replication is both important and infrequent, a republication that misrepresents itself as a replication warrants retraction.

      Both editors and reviewers might have reached very different conclusions about the impact and novelty of a result – and consequently its value as a new publication – had they known it was a reanalysis of already-published data rather than a new study.
