Courage and correction: how editors handle – and mishandle – errors in their journals

Jasmine Jamshidi-Naeini

Last year, our group noticed an improper analysis of a purported cluster randomized controlled trial (cRCT) in eClinicalMedicine, a Lancet journal, and requested deidentified raw data from the authors so we could conduct an analysis appropriate to the study design.

Things were off to a good start. The authors shared their data immediately – which is commendable and, in our experience, rare. We reanalyzed the data using valid statistical procedures, which overturned the published conclusions. We subsequently submitted a manuscript describing our findings to the journal where the original paper was published.

That’s when things stopped going well.

First, the editorial team rejected our manuscript “in light of [the journal’s] pipeline,” with no indication that its scientific merits had been evaluated. We wrote to the editor-in-chief, quoting COPE standards and the publisher’s policy on maintaining the integrity of the scientific record. After an additional exchange, the editorial team asked us to share our analyses with the authors and seek their reply.

We didn’t see why that was a good idea. Publicly correcting unequivocal errors – which are not matters of differing scientific opinion – should not be conditional on the authors’ reply. We therefore reiterated our request that our manuscript be sent for peer review without further delay.

Finally, the editorial team decided to share our findings “with the statistician who reviewed the [original] paper, and also with the original authors to seek both of their responses.” We received both responses; each downplayed the error and made claims we disagree with. The journal recommended that we condense our full manuscript into an 800-word letter to be published with the authors’ reply, which would not allow us to fully communicate our methods and reanalysis.

Detailed correspondence can inoculate authors and readers against future errors, and journals should allow it. We have therefore decided to post our fully laid-out arguments as a preprint – not yet published – along with the word-limited letter that we will submit to the journal.

Colby Vorland

This was not the first time members of our group had had a frustrating experience trying to correct the scientific record. For more than a decade, we have reported countless errors in the statistical analyses of cRCTs, also called group-randomized trials, in obesity and nutrition journals.

The underlying logic is simple: in cRCTs, groups (e.g., classes, clinics, hospitals, cages of animals), not individuals, are assigned to treatments. Because members of the same group may interact or share conditions, their responses to the assigned treatment may be correlated. In statistical terms, the model residuals are no longer independent, violating an assumption of ordinary least squares and many other methods, so this non-independence must be accounted for during analysis. Yet it is too often ignored, and results are analyzed as if participants had been randomized individually, yielding misleading conclusions about the effectiveness of the intervention (i.e., confidence intervals and p-values that are too small).
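
To make the problem concrete, here is a minimal simulation sketch of our own – it is not from any of the papers discussed, and it assumes numpy, pandas, and statsmodels are available. It generates a null cRCT with ten clusters, then analyzes the same data two ways: a naive ordinary least squares model that ignores clustering, and a mixed model with a random intercept per cluster.

    # A minimal sketch (ours, not from the article): simulate a cRCT with a
    # true treatment effect of zero, then analyze it two ways.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_clusters, n_per_cluster = 10, 20

    # Whole clusters, not individuals, are randomized: 5 treated, 5 control.
    cluster = np.repeat(np.arange(n_clusters), n_per_cluster)
    treat = np.repeat(rng.permutation([0] * 5 + [1] * 5), n_per_cluster)

    # Members of a cluster share a cluster-level effect, so their residuals
    # are correlated -- exactly the non-independence described above.
    cluster_effect = rng.normal(0, 1, n_clusters)[cluster]
    y = cluster_effect + rng.normal(0, 1, n_clusters * n_per_cluster)
    df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

    # Wrong: OLS treats all 200 observations as independent.
    naive = smf.ols("y ~ treat", data=df).fit()
    # Better: a mixed model with a random intercept for each cluster.
    mixed = smf.mixedlm("y ~ treat", data=df, groups=df["cluster"]).fit()

    print(f"naive OLS p-value:   {naive.pvalues['treat']:.3f}")
    print(f"mixed-model p-value: {mixed.pvalues['treat']:.3f}")

Run over many simulated datasets, the naive model rejects the (true) null far more often than the nominal 5%, while the mixed model stays close to it; that inflation is precisely the misleading narrowing of confidence intervals and p-values described above.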

When we identify such errors, we often report them to journal editors – and have wildly varying experiences.

Editors sometimes lack the training, the courage, or both to properly handle situations in which readers raise serious scientific questions about the validity of content published in their journals. Take our experience with eClinicalMedicine: the editorial team seemed unprepared to handle the situation, and we needed to coach them onto a more appropriate path.

There was also a lack of willingness even to follow the journal’s own policy, which instructs editors to publish a correction in a timely manner when there are factual errors, or an expression of concern when serious scientific questions have been raised and “external investigations” are ongoing. Neither happened in this case. Instead, our reanalysis was reviewed by the original authors and by the statistician who had missed the errors in the initial review.

This lack of training and courage will likely cause a substantial delay before readers learn that the original analysis and conclusions are unsupported, and there is still no promise that the corrected analysis will be published.

Lilian Golzarri-Arroyo

We find that our experience in this case is more the norm than the exception. Correcting the scientific literature is a slow and frustrating process, often even when editors seem to take errors seriously and appear determined to take corrective action.

In another example of a misanalyzed cRCT, this one in PLOS ONE, we attempted to reanalyze the publicly available data correctly. In doing so, we found that we could not reproduce the published results even when using the same statistical procedures the authors had used.

We communicated the discrepancies and the invalidity of the analyses to the editorial team and the original authors. The authors acknowledged the discrepancies in their published results, and the editorial team seemed willing to act. We were told that the journal “takes seriously the concerns [we] have raised about the correctness and scientific validity of the published article” and “abides by guidelines set forth by the Committee on Publication Ethics (COPE),” and that steps would be taken to correct the published record where appropriate.

Yet almost a year has passed since our initial communication with the editorial team, and no public acknowledgement of the errors is yet available to readers.

Xiaoxin Yu

Editors have a critical role in maintaining the rigor and integrity of the scientific record. Yet in some cases they act passively when concerns are raised about content published in their journals, or they act as though they are members of a ‘defense’ team protecting the journal and the authors.

We have encountered passive editors particularly when we try to obtain from authors the raw data underlying the published results of a potentially erroneous paper and include the editorial team in the correspondence. In two recent cases, despite publisher policies mandating that data be made available for non-commercial purposes, the authors did not share their data after our repeated requests.

In one case, we never heard from the editor even though they were included in our protracted communications with the authors; in the other, the editor commented only that they could not mandate availability of the data, despite the journal’s requirement that data be made available as a condition of publication. More courage is needed.

Part of the problem is that editors often seem to have a difficult time distinguishing between unequivocal errors – errors that are just plain wrong – and matters of scientific debate that do not necessarily merit correction. Our group has termed this the ‘second demarcation problem.’

Errors need to be fixed. They should not require a protracted negotiation with the original authors over whether or how to correct them; editors have tools, such as retractions and public notes, to alert readers that results are unreliable. Nor should fixing errors require the traditional back-and-forth letter exchange of scientific discussion, unless the response is to acknowledge the error and its correction. Yet such exchanges typify our experience.

Under the banner of the demarcation problem is a subtler distinction: that between an error in methods and an error in conclusions. Some may argue that if the results generated using incorrect methods are not much different from those generated using legitimate methods, correcting the errors is neither necessary nor important.

This is not true. Incorrect methods may yield correct or incorrect results, but a post-hoc appraisal of how the results of an incorrect approach compare with those of a correct approach in a particular sample does not justify the original use of the incorrect methods. Errors in methods (e.g., in statistical analysis or design implementation) always merit correction; otherwise, astute readers who pick up on the error will correctly discern that the original results cannot be relied upon, because the methods are invalid.

Appropriate methods can also yield either correct or incorrect conclusions when compared against the true state of nature that one is trying to discern through scientific inquiry, because inherent in statistical practice is the possibility that false conclusions will sometimes occur even when the most rigorous methods are followed. But if appropriate methods are used, their error rates can be specified, and conclusions will, in the long run, approximate the truth ever more closely.
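
As a toy illustration of a specifiable error rate (again our own sketch, not from the papers discussed; it assumes numpy and scipy), the simulation below applies a perfectly valid two-sample t-test to thousands of datasets in which the null hypothesis is true by construction. The method is correct, yet roughly 5% of its conclusions are wrong – exactly as designed:

    # A valid test still errs, but at a rate specified in advance
    # (alpha = 0.05), which is what makes the method trustworthy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_sims = 0.05, 10_000

    false_positives = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, 30)  # both groups come from the same
        b = rng.normal(0, 1, 30)  # distribution, so the null is true
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1  # a "significant" result here is an error

    # Prints roughly 0.05: correct methods come with knowable error rates.
    print(f"false-positive rate: {false_positives / n_sims:.3f}")

The misanalyzed cRCTs described above fail this standard: by ignoring clustering, their actual false-positive rate silently exceeds the nominal one, so the stated error rate no longer means anything.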

David Allison

We should note that some editors are indeed professional, timely, and decisive in handling errors. After we raised concerns, the editorial team of Diabetology & Metabolic Syndrome, a Springer Nature title, wrote to us that “the journal data policy enforces access to full data upon reasonable request and your request seems rather reasonable. Springer has a Research Integrity Group and I should get their advice on what we should do if the authors continue to refuse to provide access to the full datasets, to make sure we’ll be following the best practices in research integrity.”

Our correspondence with this editorial team is ongoing, so while the errors have not yet been resolved, we salute the team for its commitment to the journal’s policies on research integrity.

Improving and standardizing editorial practice when an apparent error is pointed out may help minimize the stigma associated with correcting errors. Normalizing error correction would defuse the perceived animosity between the group whose paper is critiqued and the group offering the critique, making the situation less fraught. Authors should not be penalized for honest mistakes that arise from otherwise rigorous work.

It’s important to note that, most of the time, the necessary mechanisms – data-availability enforcement policies, instructions for editors on handling unequivocal errors – already exist. Editorial teams, however, often seem unaware of these instructions or lack adequate guidance on how to implement them. All of this warrants more investment in training editors and in creating practical tools to facilitate editorial adjudication in post-publication error correction.

Those reporting errors should be rewarded with some form of public acknowledgement, to incentivize the self-correcting ideal of science. Making mistakes is part of being human, and, as we have noted elsewhere, we should build mechanisms whereby, as much as possible, “gatekeeper functions create circumstances in which people have no choice but to do the right thing.”

Jasmine Jamshidi-Naeini and Colby J. Vorland are postdoctoral fellows at Indiana University School of Public Health in Bloomington, where Andrew W. Brown is an assistant professor, Roger Zoh is an associate professor, and David B. Allison is dean. Xiaoxin Yu is a doctoral student at the school, and Lilian Golzarri-Arroyo is a biostatistician at the school’s Biostatistics Consulting Center.


5 thoughts on “Courage and correction: how editors handle – and mishandle – errors in their journals”

  1. This is an issue for ordinary multicenter RCTs as well. Potential site effects are often not addressed in the analyses even though the methods to do so have existed for decades.

  2. This important piece made me think of an editorial a quarter of a century ago by Stephen Lock, a former editor of the BMJ, in which he pointed out that most editors of scientific journals are “amateur editors.” https://www.bmj.com/content/310/6994/1547 They are usually distinguished scientists or clinicians, but they have little or no training as editors. Nobody would think of making an editor a cardiologist overnight, but the opposite is routine.

    Lock argued that it was time to professionalise the editing of scientific journals, but it hasn’t happened. We know that publishers of scientific journals make substantial profits, but most do not choose to invest in training editors. Indeed, they often don’t even pay the editors.

    Competing interest: RS was the editor of the BMJ when Lock’s editorial was published. He was also chief executive of the BMJ Publishing Group, which published other journals. The Group did pay the editors and provided some training. RS was on the panel that investigated the Pearce affair, about which Lock was writing his editorial.

    1. And whatever training some editors do receive does not appear to be sufficient:

      Wong VS, Callaham ML. Medical journal editors lacked familiarity with scientific publication issues despite training and regular exposure. Journal of Clinical Epidemiology 2012;65(3):247–252. doi:10.1016/j.jclinepi.2011.08.003

    2. Hi Richard, in my experience most of the large publishers (e.g., Springer Nature, Wiley, etc.) have committees that appear largely to drive the process of assessing publication integrity concerns. A number of editors have told us that they had made their decision about a publication integrity issue, but it had to be approved by the publisher’s research integrity committee. The committee often seems to control not only the final decision (including overruling the editors’ decision on occasion) but also the specific wording of any notice. Like many other aspects of the process of assessing publication integrity, these committees are shrouded in secrecy, lack any transparency, and appear to function extremely slowly. Finding contact details for publisher research integrity staff can be very difficult, and even when you find them, emails are often simply not acknowledged or responded to.

      If our experience is the norm, more training of editors won’t help, because it is the publishers who ultimately control the process. I’d like to see a lot more transparency, consistency, and speed. At the moment, the process is the opposite: slow, opaque, and inconsistent, and it often does little to protect the integrity of the scientific literature.

  3. This is an extremely well-written piece, for which I can only congratulate the authors. It also “rings close to home”: I have had similar experiences with writing letters to the editor. When authors are confronted with incorrect methods or interpretations in their papers, they sometimes twist the facts very badly just to defend their case. This has happened to us multiple times. We once submitted a letter to the editor, trying to correct the scientific record, and in their response the authors twisted the facts so badly that they misquoted no fewer than 4 of their cited references (i.e. stating that the references contained something they did not), just to maintain their case. The unfortunate thing is that journals mostly do not allow responses to responses to letters to the editor, which means the discussion stops there. Thus, the original authors know they can get away with such twisted responses.

    I can also empathise with the authors’ distinction between errors that are just plain wrong and matters of scientific debate. I once read a political commentary that said “It is as if one side is arguing 2 + 2 = 4 and the other side is saying that 2 + 2 = 5, and we have to act as though both arguments are equally valid”.

    In addition to the notion that in clinical research (RCTs) there can be errors in methods versus errors in conclusions, I would add that there can be errors in the underlying assumptions at the outset of the trial. That may happen when the matters under investigation simply do not behave biologically or clinically as the authors assumed. This is what we were dealing with.
