Tortuous and torturous: Why publishing a critical letter to the editor is so difficult

Often, when confronted with allegations of errors in papers they have published, journal editors encourage researchers to submit letters to the editor. Based on what we hear from such letter writers, however, the journals don’t make publication an easy process. Here’s one such story from a group at Indiana University: Luis M. Mestre, Stephanie L. Dickinson, Lilian Golzarri-Arroyo, and David B. Allison.

In late 2018, in the course of reviewing papers on obesity, one of us (DA) noticed a November 2018 article in the BMC journal Biomedical Engineering Online titled “Randomized controlled trial testing weight loss and abdominal obesity outcomes of moxibustion.” The objective of the study was to determine the effect of moxibustion therapy on weight loss, waist circumference, and waist-to-hip ratio in young Asian females living in Taiwan.

Some of the tabulated data in the paper seemed odd, so DA sent it to members of our research team asking for their input. The research team agreed, finding some irregularities in the data that seemed inconsistent with a randomized experimental design. After that, the task of carefully and thoroughly checking the published summary statistics and text in the paper was delegated to another of us (LM), and all of his work was rechecked by professional statisticians and the rest of the research team.

The apparent inconsistencies and anomalies identified in the paper (i.e., large baseline differences, variance heterogeneity, and lack of details in the explanation of the study design) led to concerns about the extent to which the study report represented an accurate description of a properly conducted randomized controlled trial (RCT) and, therefore, whether the conclusions were reliable. Given the importance of reliable conclusions in the scientific literature on obesity treatment, as well as simply the integrity of the scientific literature overall, we decided to write a letter to the editor of the journal seeking either clarification or correction.
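To illustrate the kind of check involved (the numbers below are hypothetical, not taken from the paper in question): under proper randomization, baseline differences between groups should be consistent with chance, so one can ask how improbable a reported baseline difference, or a ratio of baseline variances, would be if randomization had been carried out as described. A minimal sketch in Python, assuming only reported group sizes, means, and standard deviations:

```python
from scipy import stats

# Hypothetical reported baseline summary statistics (NOT from the paper):
# group sizes, means, and standard deviations for, say, body weight (kg).
n1, mean1, sd1 = 30, 58.2, 4.1   # control group
n2, mean2, sd2 = 30, 63.9, 8.7   # treatment group

# Under proper randomization, the baseline difference in means should be
# consistent with chance. A two-sample t-test from summary statistics:
t_stat, p_diff = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                            equal_var=False)
print(f"baseline difference: t = {t_stat:.2f}, p = {p_diff:.4f}")

# Variance heterogeneity: an F-test on the ratio of sample variances
# (simple but assumes normality; a more robust test such as Levene's is
# preferable when raw data are available).
f_ratio = max(sd1, sd2) ** 2 / min(sd1, sd2) ** 2
df_num = (n2 if sd2 > sd1 else n1) - 1   # df for group with larger variance
df_den = (n1 if sd2 > sd1 else n2) - 1
p_var = 2 * stats.f.sf(f_ratio, df_num, df_den)   # two-sided
print(f"variance ratio: F = {f_ratio:.2f}, p = {p_var:.4f}")

# Very small baseline p values are not proof of misconduct, but they do
# raise questions about whether randomization was performed as described.
```

This is only a back-of-the-envelope screen; the actual review of the paper involved a broader check of the reported statistics and study description.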

It was there that we met our first potential hurdle. When we began the process of submitting the letter to the editor in early February 2019, we learned that the journal requires a fee of about 1,000 euros for all publications. Fortunately, when we sent the journal our team’s recent work on the dangers that correction fees pose to the scientific process, the journal granted us a fee waiver. We submitted the letter in mid-February, and the journal replied in early March to say that it was “potentially acceptable” after revisions.

Meanwhile, we approached the corresponding author via e-mail in late March 2019 — something we think is often (but not always) a useful and appropriate part of the correction process in science. We stated our concerns and requested the de-identified data so we could replicate the authors’ analyses. At first, we received no reply. We sent a follow-up email in mid-April 2019, at which point the corresponding author asked us for more details about our concerns, which we provided. However, they never sent the requested data. In all, we contacted the authors three times.

We completed the revisions and re-submitted the letter in early June 2019. Then we ran into another hurdle, or at least a delay. In late June, the journal replied that the editor was happy to publish our letter, but that they had also contacted the authors to invite them to provide a reply that would be published alongside it. After having difficulty reaching the authors, the journal emailed in July 2019 to say it had received a response from them regarding the concerns we raised and that it would keep us informed of the next steps.

Then we waited.

Finally, after six months of limited correspondence, our team followed up in mid-January 2020, at which point the journal indicated that it had decided to retract the paper. The journal said that an independent biostatistician had confirmed that the data and results were unreliable — and had also found further problems with the paper. The journal then retracted the paper on January 24. But here, again, there was a hurdle: the journal had decided not to publish our letter.

The journal said that retracting the paper and then publishing the letter would be redundant. We objected, saying that publishing the letter was important because it would remind readers of the potential errors and inconsistencies that can occur if randomization is not carried out properly. (Failing to publish the letter would also deprive the team of authorship of a paper, which, whether we like it or not, is how credit is apportioned in science.) After a phone conference, the journal ultimately agreed. 

The letter was officially accepted on February 6, 2020, and published on February 18, 2020. That meant it took 60 weeks — more than a year — from the time we identified the errors in November 2018 until the letter was published.

Reflecting on this experience as well as our overall experience in detecting and correcting errors in the scientific literature, several take-aways come to mind. 

For those who choose to offer critiques and try to correct the scientific literature, as others have also pointed out, the process is often fraught, slow, and frustrating. Authors and editors alike sometimes respond graciously, professionally, promptly, and with integrity. Unfortunately, such positive responses are not, in our experience, the norm. More often, critiques are answered slowly, if at all, and when they are answered, it is with some combination of dismissal, obfuscation, sophistry, and ad hominem attacks. Thus, as others have noted, anyone entering this arena needs to be forearmed with patience and prepared for a long haul.

Karl Popper introduced the now-famous “demarcation problem”: the task of distinguishing science from pseudoscience (or, as it is sometimes phrased, non-science). We believe editors now struggle with a closely related but somewhat different demarcation problem: the demarcation between reasonable differences of opinion about optimal study techniques and unequivocal errors.

Some things are (or should be) clear. If an investigator inadvertently makes a simple arithmetic error, then once the error is detected and noted, there should be no dispute. Errors can also be absolutely unequivocal even when they are far from simple and easy to detect or discern.

Perhaps one of the best-known examples is the famous Monty Hall problem, about which there is, to our knowledge, no dispute among probabilists and mathematicians, but which has stymied even professional mathematicians. The game show “Let’s Make a Deal,” hosted by Monty Hall, often featured a game with three doors, only one of which concealed a prize. The contestant would try to guess which door hid the prize. After the contestant’s first choice, Monty Hall would open one of the other doors, revealing no prize behind it, and ask the contestant whether they wanted to stay with their original door or switch to the remaining unopened door.

Most people think it does not matter, because there seems to be a 50:50 chance of the prize being behind either of the two unopened doors. However, mathematicians have proven that, perhaps counterintuitively, the contestant has a 2 in 3 chance of winning the prize if they switch doors.
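The 2-in-3 claim is easy to verify for oneself. As a minimal illustration (purely for exposition, and not part of the original correspondence), here is a short Python simulation of both strategies:

```python
import random

def play(switch: bool, n_trials: int = 100_000) -> float:
    """Simulate the Monty Hall game and return the empirical win rate."""
    wins = 0
    for _ in range(n_trials):
        prize = random.randrange(3)    # door hiding the prize
        choice = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the contestant's pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / n_trials

print("stay:  ", play(switch=False))   # converges to about 1/3
print("switch:", play(switch=True))    # converges to about 2/3
```

Staying wins only when the first guess was already correct (probability 1/3); switching wins in the remaining 2/3 of cases, which the simulation confirms.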

In statistics, there are also some things that are unequivocally and detectably incorrect. For example, as pointed out by others, when an investigator reports a test statistic, its associated degrees of freedom, and its P value, there is a necessary correspondence among those three quantities. If the quantities reported do not satisfy this necessary correspondence, then something is explicitly wrong, plain and simple.
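As a small illustration of that correspondence (the numbers here are made up, not taken from any particular paper): given a reported t statistic and its degrees of freedom, the two-sided P value is fully determined, so a reader can recompute it and compare it with the reported value.

```python
from scipy import stats

# Hypothetical reported values (not from any particular paper):
reported_t, reported_df, reported_p = 2.10, 38, 0.04

# The two-sided P value is determined by the t statistic and its
# degrees of freedom, so we can recompute it and compare.
recomputed_p = 2 * stats.t.sf(abs(reported_t), reported_df)
print(f"recomputed p = {recomputed_p:.4f}")   # about 0.042 for these values

# Allow for rounding in the reported value; a large discrepancy means
# at least one of the three reported quantities must be wrong.
consistent = abs(recomputed_p - reported_p) < 0.005
print("consistent with reported p:", consistent)
```

The same logic applies to F, chi-squared, and other test statistics; tools such as statcheck automate this kind of consistency check at scale.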

But what about the choice of analysis? We often find ourselves wrangling with editors and authors when we assert that certain analytic procedures are just plain unequivocally wrong for the context in which they are used. Such cases are far from simple: they cannot be confirmed by looking values up in a table or by easy arithmetic demonstrations. Understanding these errors requires specialized knowledge, sometimes arcane jargon, and a fundamental understanding of statistical principles. Clearly, not all editors can be expected to be experts on all such points.

We perceive that editors are therefore often stuck in an endless loop in which the person offering the critique repeatedly says that what has been published is just plain wrong and merits correction, while the authors repeatedly assert that it is a matter of opinion which method is most appropriate. Editors seem ill-equipped, unprepared, and in many cases simply unwilling to take the decisive step of determining whether something is just plain wrong, and often default to delaying, publishing nothing, or stating that they will publish a letter and the response and let the readers decide.

Unfortunately, this is not helpful and does not correct the record, because many (though not all) readers are also likely ill-equipped to make these determinations on specialized topics outside their primary area of expertise. We see this as a major problem for the scientific community in general and believe that establishing infrastructure editors can rely on to handle such situations will be essential. This might take the form of bodies of highly professionalized consultants that editors can draw on, including but not limited to experts in statistics, as well as quasi-authoritative bodies like the Committee on Publication Ethics.

Regardless of the infrastructure established, perhaps the first and most important step is for the scientific community and editors to recognize that there is a distinction between differences of opinion about optimal procedures and simply incorrect procedures, and to be prepared to take appropriate action.

As we have noted elsewhere, editors seem to be key linchpins in maintaining the rigor and integrity of the scientific peer-reviewed literature. We have had the pleasure of seeing many editors take thoughtful, professional, measured, timely, and decisive steps to adjudicate errors or apparent errors pointed out in their journals. 

Yet this is often not the case. In some cases, editors seem to act as though they are members of the ‘defense’ team, seeking primarily to protect the journal and the authors of the original paper from any embarrassment or any need to correct something. In other cases, such as the one described here, the editors acted rationally and consistently, but seemed to approach the matter, at every step in the process, as though it were a new experience, and seemed to need and seek coaching from us as much as or more than they provided guidance.

We believe that such well-meaning editorial teams, who are unfortunately neither trained to handle these matters well nor buttressed by an organized infrastructure, should be supported with new training programs and new infrastructure. This may be a key place to provide resources that will be helpful to all.

Building on ideas from other domains, one might envision the original authors of the paper being critiqued as the “blue team” and the team offering the critique as the “red team.” Many have pointed out that there is an inherent problem in science: investigators are rewarded for exciting, interesting, and statistically significant findings, which can lead to undesirable behaviors to produce those findings and to a reluctance to admit mistakes when they are pointed out. This is undoubtedly true.

Yet similar criticisms about “gaming” of the system can be made about any human activity involving rewards. No one suggests we stop rewarding the runners who finish a race fastest. Instead, we disincentivize failure to comply with fair rules in other ways while simultaneously continuing to incentivize pursuit of the fundamental goal. Moreover, we incentivize referees and other parties to help maintain the integrity of the system.

So too in science, we believe that we should continue to reward those who conduct important and exciting research successfully, but we also believe we should begin rewarding those who devote their time, acumen, and energy to helping to maintain the integrity of the system. 

When the editors first accepted our letter to the editor for publication and then decided not to publish it because they were going to retract the original paper, it denied our team, whose effort on this project was led by a graduate student, the credit for their work. We objected strongly and had to wrangle further with the managing editor of the journal to have this decision reversed. In other situations, too, we have had to negotiate with journal editors, sometimes successfully, sometimes not, sometimes easily, sometimes not, to receive what we perceive to be reasonable credit for our work. Certainly, some critics may choose not to be publicly acknowledged, and that may be apt in some situations. Nevertheless, if we do not reward efforts to maintain the integrity of the scientific literature, done professionally and civilly, then such efforts are unlikely to flourish.

Finally, while we do not agree with disincentivizing pursuit and attainment of exciting and important findings, we do believe that the blue teams should be “de-disincentivized” from admitting and correcting errors. As others have pointed out, we need to destigmatize honest error correction to the greatest extent possible. This may result in more authors responding to requests, responding rapidly, and responding in a way that is honest and forthcoming.

That would be good for science.


3 thoughts on “Tortuous and torturous: Why publishing a critical letter to the editor is so difficult”

  1. Great article. Another example of this is when we tried to publish a commentary (a criticism) on a published paper that had obvious flaws. The journal said that all such commentaries have to be submitted within six months of the original article to be considered. Beyond the obvious flaws in this argument, it felt like the editors just made this up as a policy — the journal home page stated that it welcomed such commentaries, but nowhere did it mention a time limit.
    I believe PeerJ (www.PeerJ.com) have come the closest to dealing with this problem. Their articles are “live” in terms of anyone is able to place a comment on any paragraph of a published PeerJ article — this commentary gets its own DOI. It is then available for the authors — or others — to respond to. Irrespective of whether the authors respond or not, it is part of the record for other readers.

    1. Their articles are “live” in terms of anyone is able to place a comment on any paragraph of a published PeerJ article — this commentary gets its own DOI.

      It’s OK; as originally designed, the system was intended to be mathematically uncountable.
