Researchers in China have received an expression of concern for a recent paper on COVID-19 pneumonia after editors were alerted to suspicious similarities between the tables in the article and those in a 2018 study by members of the same group.
In case you missed that: The pandemic started long after 2018.
The article, “Lung ultrasound score in establishing the timing of intubation in COVID-19 interstitial pneumonia: A preliminary retrospective observational study,” appeared in early September in PLOS ONE. Led by Xiao Lu, the authors were affiliated with the Department of Emergency Medicine at Zhejiang University School of Medicine, in Hangzhou.
The authors appear to have been submitting problematic data, as well as overlapping text, to several journals, according to one editor who spotted unlikely patterns in their results and raised alarms. But the researchers say the flaws stemmed from poor recordkeeping rather than misconduct.
According to the expression of concern, which is dated Nov. 30, 2020:
Following the publication of this article [1], concerns were raised regarding the similarity between results reported in Table 1 and Table 2 in this article, and results in an article previously published in the Journal of Intensive Care Medicine [2]. Specifically,
The BMI and SOFA scores reported in Table 1 of [1] are identical to the BMI and SOFA scores reported in Table 1 of [2] despite describing different study populations.
The reported P value for the gender division parameters in Tables 1 of both articles [1, 2] is identical.
The Respiratory rate and the PaCO2, mmHg scores reported in Table 2 of [1] are identical to the T1 (Initial EICU presentation 2 hours) Respiratory rate and the PaCO2, mmHg scores reported in Table 2 of [2] despite describing different study populations.
The Pulse rate reported in Table 2 of [1] is more similar to the Pulse rate reported in Table 2 of [2] than would be expected from independent studies.
The corresponding author agrees there are similarities between the data reported in these articles [1, 2] and indicated they are checking the underlying data.
PLOS ONE is currently reassessing the article and following up on the above issues in accordance with COPE guidance and journal policies. Meanwhile, the PLOS ONE Editors issue this Expression of Concern.
The JICM paper, on which Lu is also first author, is titled “Bedside ultrasound assessment of lung reaeration in patients with blunt thoracic injury receiving high-flow nasal cannula oxygen therapy: a retrospective study.”
In response to an email request for comment, Lu told us:
we do find there are some similarities between the data reported in the paper and my another article 2018 Journal Of Intensive Care Medicine article. We will check all the data and find the reasons. As the information in our paper, the patients were conducted in one makeshift ICU in Wuhan,and our medicial team was support to built this ICU and treat the patients there. We do use the LUS [lung ultrasound] to check the patients everyday and it was also proved to be a useful tool. However the LUS of the patients were not recorded in the original documen [sic], and the original documens of the patients were just left in that makeshift ICU and had not been preservation by us. We just record patients information in the word files as we finished our job in Wuhan. We could check the info recoeded [sic] in our word files. May be the problem was in the file.
We do not deny the questionable points of these data, espesilly [sic] the patient characteristics at baseline; but we didn’t deliberately fake it or copy the data as we do the study before. I will be responsible for this as the first and Corresponding author.
John Loadsman, the chief editor of Anaesthesia and Intensive Care, told us the problems don’t end there:
The first author (corresponding on the PLoS One paper) submitted a third paper to my journal reporting a study of 98 patients (49 per group). It was about using two forms of non-invasive ventilatory support to prevent extubation failure, so quite a different study to the previous publications. Nevertheless, and as with the PLoS One paper, many of the means, standard deviations and p-values in both the demographic and results tables were identical to those in the other two papers. I noticed the problem with the data after our astute SAGE Peer Review Manager brought potential text similarity issues to my attention. I understand investigations by the other two journals are ongoing.
Loadsman noted that all three articles acknowledge the help of an editing services outfit called LetPub, which has offices in the United States, Europe, South America and Asia.
Clark Holdsworth, the research communications manager at LetPub’s parent company, Accdon, said problems with the articles his firm works on are “always disappointing to see,” but added that LetPub doesn’t get involved in the production of tables:
This manuscript came to us for our standard language editing service and would have been edited for spelling, punctuation, and grammar. We are only able to provide editorial support, so we do not produce figures or tables. Essentially, these projects go through the same thorough copy editing performed by most journals prior to publication.
Regarding figure editing, the editor responsible for the manuscript receives them as a PDF for reference only. In the event of a typo contained within the figures, the editor would simply specify the correction in a comment, for the author to revise themselves in whichever software they used for generating the figure. Regarding table editing, tables are available to the editor in the Word document and the editor revises these for spelling, punctuation, and grammar, unless the author specifically requests for us to exclude them from the scope of editing, in which case the editor still receives them for reference. As you might imagine, our revisions to tables are found in the title, legend, and column/row headings.
By our count, the expression of concern marks the fourth such notice for COVID-19 articles. Another 44 have been retracted or temporarily retracted. Did we miss any? Let us know.