Can you spot the signs of retraction? Just count the errors, says a new study

Clinical studies that eventually get retracted are originally published with significantly more errors than non-retracted trials from the same journal, according to a new study in BMJ.

The authors actually called the errors “discrepancies” — for example, mathematical mistakes such as incorrect percentages of patients in a subgroup, contradictory results, or statistical errors.

The study doesn’t predict which papers will eventually be retracted, since such discrepancies occur frequently (including one in the paper itself), but the authors suggest a preponderance could serve as an “early and accessible signal of unreliability.”

According to the authors, all based at Imperial College London, you see a lot more of these in papers that are eventually retracted:

Of 479 discrepancies found in the 100 trial reports, 348 were in the 50 retracted reports and 131 in the 50 unretracted reports. On average, individual retracted reports had a greater number of discrepancies than unretracted reports (median 4 (interquartile range 2-8.75) v 0 (0-5); P<0.001). Papers with a discrepancy were significantly more likely to be retracted than those without a discrepancy (odds ratio 5.7 (95% confidence interval 2.2 to 14.5); P<0.001).
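
For the statistically curious, an odds ratio like the one quoted above comes from a 2×2 table of retraction status against the presence of at least one discrepancy. Here is a minimal sketch in Python of how such a figure is derived. The counts are hypothetical (the excerpt reports only the resulting odds ratio, not the underlying table), and the Wald interval is one standard way to compute a 95% confidence interval, not necessarily the method the authors used.

```python
import math

# Hypothetical 2x2 table -- illustrative counts only; the paper reports
# the odds ratio (5.7, 95% CI 2.2 to 14.5) but not the table itself.
#                       >=1 discrepancy   no discrepancy
# retracted reports            a                 b
# unretracted reports          c                 d
a, b = 41, 9
c, d = 22, 28

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.1f} (95% CI {low:.1f} to {high:.1f})")
```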

In the study, three scientists reviewed 50 retracted clinical trials and 50 unretracted trials as controls, each control being the previous clinical trial published in the same journal as the retracted report. The trials were retracted for various reasons — nearly half for misconduct, and the rest for errors, plagiarism, and duplication. In six studies, no reason was provided.

The scientists — blinded to which study was retracted — reviewed the reports and counted the discrepancies.

They placed the discrepancies into categories — for instance, an “impossible percentage”, such as 31.2% of 200 patients having an infarct: one patient out of 200 is 0.5%, so any genuine percentage of 200 patients must be a multiple of 0.5%, which 31.2% is not. Another category was “impossible summary statistics,” such as stating the median stay in the ICU was 13 days when the data ranged from 14 to 444 days (a median cannot fall outside the range of the data). There were also basic “arithmetical errors,” like saying three patient subgroups of five, five and six patients added up to 15 in total (they sum to 16).
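
The paper applies these checks by hand, but they are mechanical enough to express in code. Here is a minimal sketch in Python of what each category of check might look like; the function names, tolerance, and structure are our own illustration, not the authors' procedure.

```python
def impossible_percentage(pct, n, tol=0.05):
    """True if pct cannot arise from any whole number out of n patients.

    One patient out of n is 100/n percent, so every genuine percentage
    must be a multiple of that step (0.5% when n = 200).
    """
    step = 100 / n
    return abs(pct - round(pct / step) * step) > tol

def impossible_median(median, low, high):
    """True if a reported median falls outside the reported data range."""
    return not (low <= median <= high)

def arithmetic_error(subgroup_counts, reported_total):
    """True if subgroup counts do not sum to the reported total."""
    return sum(subgroup_counts) != reported_total

# The three worked examples from the paragraph above:
print(impossible_percentage(31.2, 200))   # True: 31.2% of 200 would be 62.4 patients
print(impossible_median(13, 14, 444))     # True: a median cannot be below the minimum
print(arithmetic_error([5, 5, 6], 15))    # True: 5 + 5 + 6 = 16, not 15
```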

First author Graham Cole told us he was surprised to see how many papers — both retracted and unretracted — contained such discrepancies:

Our main finding was that discrepancies are more common in retracted trial reports.  However, we were surprised to find that almost half of unretracted clinical trial reports also contained discrepancies.  When discrepancies are found, authors with nothing to hide should have no problem providing raw data to reassure the community that their findings can be trusted.  Our experience of approaching authors, journals and even organisations about discrepancies in published work has been that people do not correct them and even tell us that they don’t mean anything.

The authors learned how easy it is to make mistakes the hard way — when a reader found a discrepancy in this very paper, Cole added:

While our paper suggests that it is difficult to avoid discrepancies, we relearned this lesson a few hours after publication when a reader found a discrepancy in our paper and wrote a rapid response, asking for our raw data (which we had recommended as the course of action when a discrepancy is found). The good news is that the BMJ has a very effective rapid response system. Moreover we had already uploaded web appendices with the raw data.  Therefore it took us only a few hours to respond publicly, apologising for the error, asking the BMJ for a correction and directing readers to the raw data.

Cole noted that he and his co-authors requested a correction on Tuesday for “a table legend where the order of bars was reversed in the diagram but not in the legend,” and that the corrections editor is processing it. In the meantime, they have made the raw data available under “data supplement” in the Article tools menu.

Multiple seemingly small mistakes could prove important, the authors conclude in the abstract:

Discrepancies in published trial reports should no longer be assumed to be unimportant. Scientists, blinded to retraction status and with no specialist skill in the field, identify significantly more discrepancies in retracted than unretracted reports of clinical trials. Discrepancies could be an early and accessible signal of unreliability in clinical trial reports.

However, it’s important to note that this paper doesn’t try to predict what will be retracted — it simply looks back at what differed between retracted and non-retracted papers. So its predictive value in spotting future retractions may be limited.

It’s also not a huge surprise that a retracted paper would contain more discrepancies, since discrepancies alone can be grounds for retraction. Moreover, discrepancies are not a direct measure of the quality of the science — a paper with fewer discrepancies does not necessarily contain sounder science, and vice versa.

Hat tip: Cardiobrief


4 thoughts on “Can you spot the signs of retraction? Just count the errors, says a new study”

  1. Maybe discrepancies are not a measure of the quality of the science, but don’t they make you wonder—at least a little—whether such sloppiness reflects similar sloppiness in the laboratory or at the bedside?

  2. I keep asking the same question: what about the co-authors? Errors and mistakes happen (they happen to me all too often), so why not to first authors as well? But shouldn't somebody who claims to have written an article, and accepts credit for it, at least have read it carefully? Here too, Cole had FOUR co-authors, and a reversed order of bars in a diagram is not easy to miss. What right did they have to put their names on it?

  3. As guarantor I am responsible for any discrepancies in the paper. I quite understand Axel's feeling, but I can confirm that all of us authors earned our places on this article.

    I understand that it might seem surprising that all 5 of us missed the mismatch in the direction of ordering of the bars, but it only goes to show that one often does not see what one is not looking for.

    The BMJ has already kindly issued a correction (http://www.bmj.com/content/351/bmj.h5134). In our study, we only counted discrepancies that were left uncorrected.

    Our article recommended that when discrepancies occur, the authors should be asked to provide the raw data first (as this should be easy to do immediately), and an explanation when they can (since this may require conferring amongst people not in the same place). This shows the advantage of uploading the raw data with the manuscript as an online appendix, as we did. The data not only become immediately available but also become protected from the all-too-common sequelae of research integrity investigations which include sudden fires, unseasonable downpours and accidental shredding.
