On March 7, a Sage journal published an expression of concern for an article on cases of myocarditis in people who had received a COVID-19 vaccine.
“The Editor and the publisher were alerted to potential issues with the research methodology and conclusions and author conflicts of interest” and had undertaken an investigation of the article, the notice stated. According to one of the authors, the investigation involved two new peer reviews of the paper.
We’ve reported on many cases of authors disagreeing with retractions other publishers issued after conducting post-publication review processes. The papers often involve hot-button issues – pesticide poisoning, the effect of vaping on smoking rates, an estimation of deaths from the use of hydroxychloroquine early in the COVID-19 pandemic, and President Trump’s role in spreading vaccine misinformation on Twitter before the company suspended his account.
The apparent necessity of post-publication peer review for some papers raises questions about how well publishers ensure the quality of the peer review process and editorial decision making, and how the companies respond to scrutiny.
In a world awash with flawed papers, which ones earn a post-publication review from a publisher? A spokesperson for Sage said a paper’s topic does not play a role in the decision to commission a post-publication review, but a sleuth argues it should.
The COVID-19 vaccine myocarditis article appeared in Sage’s Therapeutic Advances in Drug Safety on January 27, 2024. It has been cited seven times, according to Clarivate’s Web of Science. In 2021, an earlier version of the article had been published briefly in an Elsevier journal, then removed.
According to a blog post from Jessica Rose, one of the authors of the paper, the two individuals who reviewed the published paper requested dozens of corrections. That appears to contrast with the two original reviewers, who “provided a single sentence review with no substance,” a Sage spokesperson told us. In her blog post, Rose said she and her coauthors had revised the paper “as per the reviewers’ helpful comments and suggestions” before it was published.
“This is not how the peer-review process works, at least, it never has been in the past,” Rose wrote in her post. “You cannot re-review an already accepted and published paper based on post-publication ‘complaints/whining/musings’ and expect the authors to re-write it to this degree (52 detailed points?!) after a year has passed.”
If the article had so many problems, they should have been addressed in the original peer review process, not a year after publication, Rose wrote.
Rose attributed criticism of her paper to “trolls,” and wondered “if this is an attempt to purge the paper once again from the eyes of the public.” Rose did not respond to our request for comment.
An internal managing editor “now assists in ensuring high standards of peer review are met” at the journal that published Rose’s paper, a Sage spokesperson told us.
“As scholarly processes aren’t perfect, like all publishers, we must continually refine our practices to make meaningful, long-term improvements,” the spokesperson said. “This means learning from cases like these and making process changes in response, giving guidance to editors while respecting editorial independence, and taking corrective action when necessary to preserve the integrity of the scientific record.”
Other researchers have accused Sage of conducting post-publication peer reviews with a foregone conclusion: retracting their work.
Last October, the authors of three papers about abortion that Sage retracted after conducting post-publication peer review sued the publisher, alleging the retractions were “discriminatory.” A federal judge cited two of the articles in his 2023 decision to suspend approval of mifepristone, a drug used in medical abortions.
The authors sought to compel Sage to enter arbitration with them, and appear to have prevailed.
In their demand for arbitration, lawyers for the authors allege the post-publication review was “improper, unscientific, and performed with the express purpose of justifying retraction.” (The demand also refers to the lead author’s entries in The Retraction Watch Database, claiming other editors will find those and not want to publish his submissions.) The publishing agreement the authors signed with Sage did not authorize such a review, and Sage did not give the authors an opportunity to read and respond to the reviewers’ full comments before deciding to retract the articles, the authors allege.
Sage wanted to retract the articles because the publisher “did not want to provide a platform for researchers whom it perceived to be pro-life or advancing pro-life views or causes,” the authors allege. Sage “applied inconsistent retraction standards” to their work based on the publisher’s perception of them as pro-life, the authors argue, in violation of California’s anti-discrimination law.
In light of these recent, high-profile cases, we asked Sage a few questions about how the publisher handles post-publication peer review — not just an internal evaluation of concerns raised, but a process in which additional reviewers are recruited to assess an article in its entirety.
Sage undertakes such reviews “when substantial concerns about the quality of an article are brought to our attention, the original review process did not meet our standards as outlined by the Committee on Publication Ethics, and if there are no concerns regarding fabrication of identities or indicators of paper mill activity,” a Sage spokesperson who declined to be named told us.
Based on those criteria for initiating a review, the number of papers that receive such treatment varies from year to year, according to the spokesperson.
Sage asks reviewers “to consider whether the methodology is appropriate, if the study is grounded in the literature, and if the conclusions are supported by the results,” the spokesperson said.
The topics a paper addresses do not play a role in deciding to initiate a review, the spokesperson said. When we asked for an example of non-controversial papers that have received post-publication peer review, the spokesperson sent us a November 2023 retraction notice for 90 papers from the International Journal of Electrical Engineering & Education. The original peer review process for the articles did not meet Sage’s standards, according to the notice, and post-publication peer review “highlighted fundamental concerns within the articles.”
The volume of papers retracted from a single journal, plus the language of the notice, suggests paper mill activity, which would indicate a completely different category of apparently substandard peer review from the other recent cases. However, the Sage spokesperson said the publisher identified “a range of issues” in papers from the journal and re-reviewed those “with quality and peer review concerns that did not display indicators of paper mill activity.”
James Heathers, a scientific sleuth, argues that a paper’s topic should play a role in the decision to conduct a post-publication review.
“The societal and scientific implications of any given paper vary wildly from ‘of absolutely no consequence whatsoever’ to ‘an active threat to human life,’” he said. “Anyone treating both of those the same is waiving necessary editorial discretion and wasting resources!”
James Heathers is absolutely right.
No, he isn’t. Science advances through the free sharing of research. The scientific way to rebut another’s research is to publish your own.
While I think both Rose and Heathers have their own valid arguments, the following is a real issue too:
“Sage undertakes such reviews ‘when substantial concerns about the quality of an article are brought to our attention, the original review process did not meet our standards as outlined by the Committee on Publication Ethics, and if there are no concerns regarding fabrication of identities or indicators of paper mill activity,’ a Sage spokesperson who declined to be named told us.”
As it stands, IEEE and Springer in particular (and perhaps Wiley and Elsevier too, though to a lesser extent) have been absolutely bombarded by paper mills from circa 2023 onward. Many of the papers in question contain what are known as tortured phrases. I’d guess the mills are now using LLMs, followed by so-called spinners, to try to hide their tracks.
IEEE and Springer should really do something; it shouldn’t be that hard. The tortured phrases are increasingly polluting the literature, and some innocent authors may pick up the phrases out of ignorance. Furthermore, the mills’ outputs are ingested by LLM-training bots, meaning the pollution extends beyond science.
I am confused.
The article in question CLEARLY STATED that what was found was a correlation, not causation.
It stated, in conclusion:
“COVID-19 vaccination is strongly associated with a serious adverse safety signal of myocarditis, particularly in children and young adults resulting in hospitalization and death. Further investigation into the underlying mechanisms of COVID-19 vaccine-induced myocarditis is imperative to create effective mitigation strategies and ensure the safety of COVID-19 vaccination programs across populations.”
It specifically states the need for further study to determine what role, if any, the vaccine played in higher myocarditis diagnoses.
Such bluntness should appear in publications more often than it now does.
How does this, in any way, reflect a poor peer review?
Does that not reflect more on the credibility of authors who chose to cherry-pick quotes to further their hypotheses, rather than on the author who identified an association yet noted the plausibility of other causes?
Correct.
You state that “It specifically states the need for further study to determine what role, if any, the vaccine played in higher myocarditis diagnoses.”
The obvious problem is that VAERS cannot be used to look at frequencies. That’s where the article falls apart, and it is something that should have been noted in any competent peer review.
Totally agree with you. However, many journals had really skimpy peer review during the pandemic and a lot of really questionable studies were published using the VAERS database. Not all have been wiped from the record.