Can a tracking system for peer reviewers help stop fakes?

Andrew Preston. Credit: Victoria University of Wellington

The problem of fake peer reviews never seems to end — although the research community has known about it since 2014, publishers are still discovering new cases. In April, one journal alone retracted 107 papers after discovering that the review process had been compromised. By tracking individual reviewers’ contributions, Publons — recently purchased by Clarivate Analytics — thinks that it can help curtail the problem of faked reviews. Co-founder and CEO Andrew Preston spoke to us about how it might work — and how the site has responded to recent criticism about access to its review data.

Retraction Watch: What is Publons doing to help combat the problem of faked reviews? 

Andrew Preston: We see it as our job to build tools that help editors more efficiently find, screen, contact, and motivate potential reviewers. This is a large problem, but by working across the publisher ecosystem (we now have partnerships with 8 of the top 10 publishers) we should be able to make the system more efficient for everyone involved.

Fake reviews occur when the author or a third party subverts the peer review process. In many of these cases the editor believes they are communicating with a reviewer when in fact the email address they’re using is controlled by someone else. This problem usually arises when the editor uses an email address suggested by the author, but it can also happen when editors search for email addresses online.

On Publons, reviewers are required to verify both their email address and their reviews. What we have learned from journal editors is that by connecting those two things, editors can have confidence that the person on the other end of the message is the same person whose verified review record they are viewing on Publons.
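To make that mechanism a little more concrete, here is a minimal sketch in Python of the general idea, under the assumption that verification works roughly as described: an email address is trusted only once a token sent to it comes back, and an editor relies on a profile only when both the confirmed address and a publisher-verified review record point to the same person. The function and field names are illustrative assumptions, not Publons internals.

```python
import hmac
import hashlib
import secrets

# Hypothetical illustration only: names and data structures are assumptions,
# not Publons internals.

SECRET_KEY = secrets.token_bytes(32)  # server-side secret used to sign tokens

def make_verification_token(email: str) -> str:
    """Token emailed to the reviewer; returning it proves control of the address."""
    return hmac.new(SECRET_KEY, email.encode(), hashlib.sha256).hexdigest()

def confirm_email(email: str, token: str) -> bool:
    """True only if the token matches the one generated for this exact address."""
    return hmac.compare_digest(make_verification_token(email), token)

def editor_can_trust(profile: dict) -> bool:
    """An editor relies on the contact only when the confirmed email address and
    the publisher-verified review record belong to the same profile."""
    return bool(profile.get("email_verified") and profile.get("reviews_verified_by_publisher"))

# Example: the address is confirmed via the token round trip, and the profile
# also carries publisher-verified reviews, so the editor can trust the contact.
email = "reviewer@university.edu"
token = make_verification_token(email)  # sent to the address
profile = {"email": email,
           "email_verified": confirm_email(email, token),  # reviewer returned the token
           "reviews_verified_by_publisher": True}
print(editor_can_trust(profile))  # True
```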

RW: We recently covered a sweep of retractions from one journal (107!) published by Springer, which has long been aware of the problem of faked reviews. This suggested there may be a new mechanism by which people are subverting the peer review system. There are also discussions about the role that third-party companies, which can submit papers on authors’ behalf, may play in manipulating the review process. How can Publons address these problems?

AP: If we want to stop faked reviews we need to provide editors with quick and effective tools that help them find motivated, trustworthy reviewers. I’ve covered the tools we offer, but it is worth noting that faked reviews are just one of a number of issues that are slowing down and disrupting the academic publication process. It’s increasingly difficult for editors to find qualified and motivated reviewers for each of the many manuscripts they receive.

One solution here is to expand the pool of available reviewers. This requires courses to train new reviewers and to make that training available to researchers who would not usually be asked to review, whether because of their geographic location or because an editor cannot otherwise establish their qualifications. We tackle this head-on in our Publons Academy, a free, practical peer review training course designed to teach early-career researchers the core competencies of peer review. Trainees work directly with their supervisors to practice writing real reviews, and upon graduation we close the loop by connecting them with editors in their field.

RW: How might Publons’s purchase by Clarivate Analytics help further reduce the problem of fake reviews?

AP: A clear prerequisite for us in the deal with Clarivate Analytics was that it allowed us to remain who we were and to address the problems we’ve been working on at a larger scale. To give you a specific example, Clarivate Analytics is home to Web of Science, the world’s preeminent citation database. By incorporating citation and author data from Web of Science into the tools we offer to editors, we will be able to provide best-in-class conflict of interest reports and suggest a wider pool of potential reviewers.
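As a rough illustration of the kind of conflict-of-interest screening that citation and author data could support, the sketch below flags candidate reviewers who have recently co-authored with a manuscript's authors. It is a hedged example only; the data layout and field names are assumptions and do not describe the actual Web of Science integration.

```python
# Hypothetical field names and toy data; this does not describe the actual
# Web of Science integration.

def recent_coauthors(author_id: str, papers: list, since_year: int = 2014) -> set:
    """Everyone who appears on a paper with this author since the cutoff year."""
    coauthors = set()
    for paper in papers:
        if paper["year"] >= since_year and author_id in paper["authors"]:
            coauthors.update(a for a in paper["authors"] if a != author_id)
    return coauthors

def has_conflict(candidate_id: str, manuscript_authors: list, papers: list) -> bool:
    """Flag a candidate reviewer who recently co-authored with any manuscript author."""
    conflicted_with = recent_coauthors(candidate_id, papers)
    return any(author in conflicted_with for author in manuscript_authors)

# Toy records standing in for a citation database.
papers = [
    {"year": 2016, "authors": ["A. Author", "C. Candidate"]},
    {"year": 2015, "authors": ["B. Author", "D. Neutral"]},
]
print(has_conflict("C. Candidate", ["A. Author", "B. Author"], papers))  # True
print(has_conflict("D. Neutral", ["A. Author"], papers))                 # False
```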

More generally, one of the key challenges in building solutions to the problems facing peer review is that while everyone agrees they are critical, it’s very hard to bring everyone together to solve them. Clarivate Analytics is a completely neutral player in the research ecosystem — they’re not a publisher, funder, or research institution — but they have extensive relationships with all of the key stakeholders.

We believe that the scale of Clarivate Analytics will help us to coordinate publishers, funders, and institutions to first of all raise awareness of the issues and then build market-leading solutions. A joint effort will improve the situation for everyone.

RW: It seems some Publons users are taking issue with how their review data are being used — and with the fact that they can’t access those data. Do you have a response to that?

AP: We take data privacy and handling very seriously. This is particularly important when dealing with peer review, a process subject to a range of policies in which anonymity and privacy are paramount. So we make sure to treat data both in accordance with data privacy best practice and in compliance with individual journals’ review policies. This two-pronged approach has helped put data protection at the core of the Publons platform.

In the one case that ended up on Twitter, scientist Laurent Gatto asked for a download of his raw peer review data. This is not a request we receive often, so we don’t have an automated process for it; when we couldn’t instantly provide the data, he took to Twitter and asked for his account to be deleted. I’ve since instructed the team to find a way to do this, and Laurent now has his data. We’re looking at ways to accommodate these requests more efficiently in the future, but it’s more challenging than you would think. It’s not as though we have a folder on our desktop for every reviewer with everything packaged and ready to go. The data are spread across many (>10) database models, and we have attached various forms of private verification data from editors and publishers that we simply cannot share.
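For readers wondering why such an export is harder than it sounds, here is a minimal sketch of the general problem: the reviewer's records have to be assembled from several separate stores, and anything supplied in confidence by editors or publishers has to be stripped before the result is handed back. The store and field names are hypothetical.

```python
# All store and field names below are hypothetical.

PRIVATE_FIELDS = {"editor_email", "publisher_verification_note", "internal_flags"}

def export_reviewer_data(reviewer_id: str, stores: dict) -> dict:
    """Gather one reviewer's records across stores, dropping fields we cannot share."""
    export = {}
    for store_name, records in stores.items():
        rows = [
            {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}
            for record in records
            if record.get("reviewer_id") == reviewer_id
        ]
        if rows:
            export[store_name] = rows
    return export

# Toy example with two of the many stores mentioned above.
stores = {
    "reviews": [{"reviewer_id": "r1", "journal": "J. Example", "year": 2016,
                 "publisher_verification_note": "confidential"}],
    "editor_records": [{"reviewer_id": "r1", "editor_email": "editor@journal.org"}],
}
print(export_reviewer_data("r1", stores))
# {'reviews': [{'reviewer_id': 'r1', 'journal': 'J. Example', 'year': 2016}],
#  'editor_records': [{'reviewer_id': 'r1'}]}
```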

I do want to be clear that the response we’ve received to this deal has been overwhelmingly positive. Aside from a few misconceptions circulating on social media, almost every researcher, publisher, institution, and funder we’ve spoken with was at first surprised — no one expected Clarivate Analytics to take the lead in peer review — and then incredibly excited at the potential.


13 thoughts on “Can a tracking system for peer reviewers help stop fakes?”

  1. I recommend using university websites to find reviewers’ email addresses when the corresponding author of the relevant publications does not list an institutional email address (some journals print only postal addresses). If corresponding authors use a personal email address in their professional communications, then use that gmail/yahoo address. Always validate the suggested names — and I never use PRC reviewers for papers originating in the PRC.

  2. It is assumed that fake peer reviews were positive and helped papers to get published. What about those reviews that unjustifiably lambast the manuscript or require an impossible set of additional experiments? Do those qualify as fake?

      1. Foe reviews are masked as those requiring an impossible set of additional experiments, or they use a “straw man” strategy in the critique. Is evidence of these easily discovered?

  3. Alexander Kraev:
    “It is assumed that fake peer reviews were positive and helped papers to get published. What about those reviews that unjustifiably lambast the manuscript or require an impossible set of additional experiments? Do those qualify as fake?”

    No. The authors can make their case in their response to the reviews, and the journal editor can intervene if the review is clearly unfair; I have had editors simply discard obviously biased reviews. In the very worst case, the paper can be submitted elsewhere. A review that is genuinely by the scientist who signs it is not fake, and calling it such debases the term. Reviews written by a paid shill, or by the author, are a much more severe problem for the field.

    1. How about an EIC presenting two reviews from (allegedly) the same reviewer on the same manuscript version that are opposite in opinion?

  4. If [ORCIDs] are not universal by now, this is one more reason to demand that every scientist (who wants to be part of the scientific record) have one.

    That is rather like insisting that every researcher and potential peer reviewer have a FB account.

    1. You cannot compare FB to having an ORCID ID. The problem of fake reviews is very real, and it is high time we dealt with it intelligently. Having a unique research ID can really help circumvent this problem.

      1. “The problem of fake reviews is very real”
        No, the problem of lazy or over-worked editors asking authors to provide a list of their own reviewers is very real.
        “Fake reviews” only arise because journal editors are inviting total strangers to review manuscripts. If you have never heard of a person before then it is the height of irresponsibility and incompetence to ask them to review a manuscript!

  5. The system as it stands is two-tier.
    Tier 1. Reviewers of publications and grants should remain unnamed and unknown. This allows scratching of backs for publications and grants. This is the system as it is now.

    Tier 2. Anyone who criticizes published work by “prestigious” groups due to image manipulation, plagiarism or other forms of scientific misconduct should be named, shamed and likely lose their careers.

    I think the time will come when those who allege research misconduct will no longer need to be anonymous or risk their careers simply because they care about science and the scientific method.

  6. Andrew, is the entity behind the Journal Impact Factor really ‘a completely neutral player in the research ecosystem’ when everybody in this ecosystem is scrambling to enhance their rating at all costs, leading to some of the issues perennially discussed on Retraction Watch?

    1. Hey Bernd, that’s a fair point. I suppose it’s not really possible for anyone (including researchers) to be working in the ecosystem and claim they’re neutral. We all have our biases.

      I think where you read “completely neutral” just substitute “not a publisher, funder, or research institution”, which is more the point I’m trying to make and is important to Publons.

      The problem with the IF is the way everyone in the ecosystem relies on it, and the underlying cause is a lack of other ways to understand researchers. We (Publons) will do what we can to provide better information about researchers but our focus right now is on building tools that improve the peer review process for everyone in the ecosystem.

      Email me anytime if you want to discuss further.
