When editors confuse direct criticism with being impolite, science loses

Jasmine Jamshidi-Naeini

In January 2022, motivated by our experience with eClinicalMedicine, we wrote about the mishandling of published errors by journal editors. We had noticed that the methods used for the analysis of a cluster randomized trial published in the journal were invalid. Using a valid approach, we reanalyzed the raw data, which the original authors shared with us. The trial’s results were overturned.
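The post does not detail the specific statistical flaw, but a common way cluster randomized trials are misanalyzed is by treating individuals as independent when they are randomized in groups. The sketch below is a hypothetical illustration of that general point (not the authors’ actual reanalysis): the “design effect” quantifies how much ignoring clustering understates uncertainty, which can flip an apparently significant result.

```python
# Hypothetical illustration only -- not the reanalysis described in the post.
# In a cluster randomized trial, outcomes within a cluster are correlated
# (measured by the intracluster correlation coefficient, ICC). Analyzing
# individuals as if independent inflates apparent precision.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Illustrative numbers: 20 participants per cluster, ICC = 0.05
deff = design_effect(20, 0.05)
naive_se = 2.0                        # standard error computed ignoring clusters
corrected_se = naive_se * deff ** 0.5 # SE after accounting for clustering

print(round(deff, 2))          # 1.95
print(round(corrected_se, 2))  # 2.79
```

Even a modest ICC nearly doubles the variance here, so a treatment effect that looks significant under the naive analysis may not survive a valid, cluster-aware one.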

As Retraction Watch readers may recall, we subsequently submitted a manuscript describing why the original methods were invalid, what a valid analysis should be, and our results after conducting a valid analysis. After an initial desk rejection “in light of [the journal’s] pipeline” and further exchanges of correspondence, the journal shared our findings with the statistician involved in the original review and the original authors and sought their responses. 

After receiving the responses, both of which we thought contained factually incorrect statements, the editorial team eventually suggested that we summarize our full manuscript as a 1000-word letter for submission to the journal. We did not agree that a letter would allow us to fully communicate our methods and reanalysis. Thus, to meet the journal’s word limit while fully laying out our arguments, we posted our additional points as a preprint and cited the preprint in a letter we submitted to the journal.

It was then that we met another roadblock to correcting the literature.

Colby Vorland

We received the first revision request from the editorial team with suggested tracked changes to the wording of our letter. We were asked to remove any wording referring to the original analysis as “invalid” or “incorrect” and any wording referring to our reanalysis as being “valid,” “correct,” or “legitimate.” Those edits downplayed our reanalysis and the conclusions drawn (which overturned the results of the original analysis), referring instead to “a different interpretation (with a different analysis).”

The editorial team had altered our fundamental message about the incorrectness of the original analysis and conclusions, and the correctness of our reanalysis approach. The editorial team also removed our citation of a link to supplemental information, where we had provided depth and clarity for interested readers, including our statistical code for the reanalysis.

For two rounds, we tried to meet the editorial team’s expectations by partly accepting their suggested revisions and providing our reasoning as to why other revisions would alter our intended message and thus were unacceptable to us. Our responses were dismissed, and the editorial team requested the exact same revisions each time. It sounded to us like an ultimatum: either publish with the revisions the editorial team wanted or do not publish the corrected results at all. 

While we did not agree with their edits, in the end, we were forced to accept them. The letter was finally published on October 6, 2022. The original paper stands uncorrected.

Andrew Brown

This was not the first time we were asked to revise phrases like “the analysis was incorrect” or “the results were overturned by using a valid analysis” to phrasing like “an alternative analysis showed something different.” Calling a valid analysis merely an “alternative” analysis does not make clear to readers that the original analysis was patently incorrect by any reasonable standard of statistical knowledge.

When we identify unequivocal errors in the published literature, we often report them to journal editors. Some journal editors see what we believe to be crisp, clear statements about whether an analysis is correct as somehow impolite, unfair, or, as a reviewer of one of our manuscripts described, “unnecessarily pejorative.”

To be sure, there are cases of impolite or worse behavior in science. Michael Lauer, the NIH Deputy Director for Extramural Research, recently shared some true stories about NIH staff and members of review committees facing “inappropriate and uncivil conduct” by applicants. Lauer’s examples involve using an aggressive tone, being condescending, or conducting abusive correspondence. All of these are impolite, uncivil, and unfair behaviors.

David Allison

However, there is nothing impolite or unfair about saying a particular analysis was incorrect, wrong, or invalid, and therefore that the conclusions stemming from it are either invalid or unsubstantiated. Editors’ struggle to differentiate impoliteness from directness could partly be related to a notion we call “the second demarcation problem”: Some editors have a difficult time distinguishing (or are unwilling to distinguish) unequivocal errors from matters of subjective scientific opinion. The former must be corrected, whereas the latter merit scientific debate.

On the other hand, those who engage in public research criticism to promote rigor, reproducibility, transparency, and trustworthiness in science sometimes interpret encouragement to be polite and civil as encouragement to be silent.

Lilian Golzarri-Arroyo

But one can criticize professionally, politely, constructively, and, as noted earlier, directly without remaining silent. We should not remain silent when we see flaws in the research literature. We should engage in dialogue and point out the errors we detect. However, we should not let the need to avoid being silenced in our criticism become a license for impoliteness or personal attacks.

The clear distinction between errors and legitimate scientific debate should not be blurred under the guise of politeness. To gloss over an unequivocal error by not acknowledging its incorrectness, or to downgrade a valid reanalysis by calling it an alternative analysis, gives the impression that the invalid approach can be considered correct. This corrupts the integrity and trustworthiness of science.

There is no passive, magical process whereby science corrects itself, as if it were some anthropomorphized, nebulous figure. Upholding the self-correcting nature of science requires scientists correcting the science from within the field, and scientists can be polite, civil, constructive, and direct when doing so.

Jasmine Jamshidi-Naeini is a postdoctoral fellow and Colby J. Vorland is an assistant research scientist at Indiana University School of Public Health in Bloomington, where David B. Allison is dean and Lilian Golzarri-Arroyo is a biostatistician at the school’s Biostatistics Consulting Center. Andrew W. Brown is an associate professor at the University of Arkansas for Medical Sciences.


10 thoughts on “When editors confuse direct criticism with being impolite, science loses”

  1. Very shameful for eClinicalMedicine. This only goes to show the ‘big boys club’ that is ‘elite’ academic journals and publishing circles.

  2. I am both author and editor, though in the less contentious field of astrophysics (where most important journals are controlled by national astronomical societies rather than big press houses), and cannot understand the behaviour you have encountered. That erroneous papers slip through peer review happens all the time; there is nothing to be ashamed of, so what were they afraid of?
    Also, I cannot understand you: why did you bow to this unacceptable demand from the editors to alter the scientific content of your paper? By doing that you keep such abusive practices alive. If those abused (you) go to other journals, such behaviour should become less popular.

  3. I think there are 2 more likely explanations than the one you provided (i.e., the editors didn’t want to be impolite). It’s possible the editor was friends with at least one of the article’s authors. (Yes, editors are initially blinded, but once an article is published, they know who the authors are). Second, the most likely explanation is that using words like “incorrect” and “invalid” would mean the editor is admitting they made a mistake the first time. Many people do not like to admit they made a mistake. It’s human nature, and the more the ego is involved, the less likely one is to admit a mistake.

    1. “Editors are initially blinded.” Triple-blind review processes are rare in STEM. If the editor is blind to the authors’ identity, who’s running the reviewer selection process? The publishing houses’ staff employees? They are not experts in the field; they are experts in publishing. If editors are blinded to author identities and they search for authors of previous, relevant studies, they risk inviting the authors to review their own work. I have often heard publishing critics call for triple-blind review systems because editors have biases, but I’ve yet to hear a logical argument for how that would work. I don’t think we’re ready to throw it all to AI.

  4. I do sort of understand from the perspective of the editor: solving these issues can be quite difficult. The editor may not be able to fully understand the nuances of the work if they are not an expert in that specific niche area of study. That being said, one thing that alarms me (in the previous post) was that the editor brought back the same biostats expert who read the paper the first time around to re-review it; it would make more sense to bring in some fresh eyes.

  5. I would consider this misconduct on the part of the journal editors and/or their editorial board. I’ve worked in scholarly publishing all my life and fortunately have encountered editors and reviewers who always responded appropriately in situations where authors were suspected of misrepresenting the results of their study, whether intentionally or through sloppiness. It’s especially egregious if the study area is clinical medicine, where the actual lives of patients may be affected. There is a code of conduct for editors and publishers, and it’s unfortunate that so many people connected with this publication decided this code did not apply to them. I commend you for taking a stand on this issue and making it public.

  6. I think it’s strange to call your method the final “correct” one when someone else could equally come along and find errors in your analysis one day too. Perhaps saying “updated method/analysis” allows for scientists to see that your method addresses issues and builds on their work but might not necessarily be flawless. Anyone in your field should be skilled enough to do their own peer review on your paper and the other paper and decide based on the available evidence which method has the most merit/utility.

  7. *but leaves space for readers to consider that your method can be improved on and might not necessarily be flawless either.

  8. >We should not remain silent when we see flaws in the research literature.

    Really? As a matter of fact, some monumentally flawed papers, riddled with “unknown sources of variations”, disappearing raw data and null-hypotheses incompatible with the results, go virtually unchallenged by the herd of mainstream good boys and girls.
    https://weirdtech.com/sci/expe.html
