Elsevier journals ask Retraction Watch to review COVID-19 papers

Ivan Oransky: Not a COVID-19 expert (photo credit: Elizabeth Solaka)

At the risk of breaking the Fourth Wall, here’s a story about peer reviews that weren’t — and shouldn’t have been.

Since mid-February, four different Elsevier journals have invited me to review papers about COVID-19. Now, it is true that we will occasionally review — often with our researcher, Alison Abritis — papers on retractions and closely related issues. And at the risk of creating more work for ourselves, we often wonder who exactly reviewed some of the papers we see published, given how badly they mangle retraction data. 

These manuscripts, however, had nothing whatsoever to do with retractions. In case you need evidence, here it is:

  • “COVID-19 during pregnancy: Challenges, management and mitigation strategies,” submitted to the Journal of Infection and Public Health
  • “ZnO-based surface photocatalyst disinfectant of COVID-19: A sustainable, efficient and economic technique in higher-risk areas and crowded-spaces,” submitted to Environmental Challenges
  • “COVID-19: socio-economic impacts and challenges in the working group,” submitted to Heliyon
  • “Nonlinear Combinational Dynamic Transmission Rate Model and Its Application in Global COVID-19 Epidemic Prediction and Analysis,” submitted to Results in Physics

Why was I invited to review these manuscripts that don’t even share a solar system with my extremely limited expertise? And no, a six-week clerkship in obstetrics and gynecology during medical school does not an expert on pregnancy make.

To hear the editors tell it, in one case, “Because of your substantial expertise related to the manuscript listed above.” In another, “I believe it falls within your expertise and interest.” Another: “Given your expertise in this area…” (Despite the false flattery, I declined all of these invitations, with brief notes pointing out how inappropriate the requests were.)

It is far more likely that the real reason I was invited to review these manuscripts is that I was a co-author — along with my Retraction Watch colleagues — of a letter to the editor about retractions of COVID-19 papers. That means an algorithm somewhere picked up a reference to the paper in PubMed or another database, and decided I was an expert in COVID-19.

As researcher Jim Woodgett — who no doubt receives orders of magnitude more review requests than I do — put it:

Algorithms gonna algorithm. And at a time when peer reviewers are being bombarded with even more requests than usual, it’s understandable that editors would want to deepen their pools of referees. But last I checked, journal editors were supposed to be experts who made judgments, not robots who pressed “yes” without even reading an algorithm’s recommendations.

It’s that kind of laissez-faire approach that gave us fake peer review. Editors weren’t bothering to check whether suggested peer reviewers were who the authors said they were. I could very well accept these invitations, recommend acceptance of all these manuscripts, and see whether journal editors were paying attention.

I won’t, of course. But please tell us again how today’s journal peer review system consistently adds value.


12 thoughts on “Elsevier journals ask Retraction Watch to review COVID-19 papers”

  1. It’s incredibly dangerous for any journal to be completely reliant on an algorithm to invite reviewers (especially when combined with author suggestions). AI can be helpful to Editors when they’re looking for potential reviewers, but they should ALWAYS be screened and vetted by knowledgeable humans. ALWAYS. Not only for expertise matching, but also to look for potential COIs, check publication/reviewing records, validate identity (no, ORCID isn’t enough) etc. Just like we’ve seen with plagiarism detection software, these tools can be incredibly helpful in making peer review more efficient and robust, so there is a place for them, but they must be used in combination with experts who can interpret the results and choose to act (or not) on the suggestions made.

    Time spent on reviewer selection is time saved later in the process, when ten people have declined and you have to go back to the pool to reselect. That serves no one – not the authors, not the reviewers, and not the editors. Editing is a skill – it takes time, knowledge, and subjectivity.

  2. “But please tell us again how today’s journal peer review system consistently adds value.”

    Well, it certainly doesn't if editors can't be bothered to make reasonable & informed choices about potential reviewers. But no editor is obliged to go with whatever the software tells them. As an editor, I virtually NEVER use any of the individuals the software algorithm suggests. Why not? Because the algorithm is so lousy that it will pick any and every person whose recent publications bear a superficial resemblance to the topics of a newly submitted paper. The results are often hilarious if I take the time to review what's on offer. But most of the time I don't have the time, and I get down to the laborious business of personally identifying impartial reviewers with relevant expertise to review the paper.

    It’s like with all those scientometric indices: You CAN hire a person with the highest h index. Or you can take the time to read candidates’ papers. In both cases — picking the right reviewers and picking the right candidate — results will greatly differ between the numbersome approach and the cumbersome approach. In virtually all cases I’ve seen so far, however, the latter approach yields better results.

    1. …and certainly you’re aware that your current post will provide even more fodder for those algorithms? Ivan Oransky will be inundated with further requests for him to review COVID papers!

  3. As someone with a lot of experience in systematic review methodology, but not necessarily in the topics of said reviews (that’s why we have co-authors, folks!), I am getting more and more requests clearly fed by AI. I’m beginning to see it as a scourge.

  4. Results in Physics?

    Will Ivan agree with me that one look at the editorial board reveals it all?

    Hint: Abdon Atangana.

  5. I’m surprised that you’re surprised by this. I frequently get requests to review manuscripts for which I lack the relevant expertise. I just presumed everyone did.

  6. “But please tell us again how today’s journal peer review system consistently adds value.”
    I take this to mean that peer review is fine (or at least, we have not identified a better mechanism), as long as the people who do the reviewing are able to judge the work under review. It might be time to think about an NIH-led initiative to train and advertise a large cohort of scientists to act as reviewers who could be tapped by journals. That would avoid many issues surrounding peer review, including underrepresentation of women and URM scientists, friends reviewing each other’s papers, long delays while editors find reviewers, etc.

  7. As I commented also on Twitter, my paper was ‘reviewed’ by an MD who is not a researcher and has only one “publication” (i.e., a blog on the journal’s webpage read by millions), which we had criticized for its careless citation of an alleged finding from a secondary source. This reviewer is the founder of an advocacy group that holds views not supported by research. We appealed to the editor; however, the journal maintained that the reviewer had enough expertise to review our paper and did not agree that the reviewer had a serious conflict of interest.

  8. I consistently receive requests to review oncology papers – well outside my area of expertise. However, there is another academic who shares my relatively uncommon name and is a prominent researcher in oncology. I can’t imagine what it must be like to be a Pham Nguyen, a John Smith, a Seo-yoon Kim, etc.

  9. We are no longer surprised to receive review requests in fields where we have no scholarly expertise. It is equally strange to me that scholars in unrelated fields quite often receive such requests, and many of them (perhaps) did those reviews in their lockdown leisure – which is far beyond research ethics, and results in the need for Retraction Watch.

    (Jiban K. Pal, ISI Kolkata)

  10. My personal view is that the standards of peer review have declined over the past 10 years or so. Whether reviewers are selected by an AI-based system or by any other means, the handling editor’s expectations and the likely outcome are clear in most cases. I link this phenomenon to the sudden increase in the number of journals, especially newly launched sister journals. Arranging articles for these publishing ventures is indeed difficult, but it is managed smartly by the esteemed editors of other associated journals.
    I also receive several review invitations for which my limited expertise is deemed appropriate, but I often see that many submissions from colleagues around me are rejected and offered “Article transfer” services. The rejections are dressed up with standard language from the editorial office/editor to justify the decisions.
    Overall, the sanctity of publishing is lost.

  11. I get NUMEROUS requests from Elsevier journals to review papers with nothing whatsoever in common with my research expertise, which I assume is due to some sort of AI algorithm. I just ignore them.
