Should peer review be open, and rely less on author-picked reviewers? Study says…

After reviewing hundreds of peer review reports from three journals, authors representing publishers BioMed Central and Springer suggest there may be some benefits to using “open” peer review — where both authors and reviewers reveal their identity — and not relying on reviewers hand-picked by the authors themselves.

But the conclusions are nuanced — they found that reviewers recommended by authors do just as good a job as other reviewers, but are more likely to tell the journal to publish the paper. At a journal that always uses open review — BMC Infectious Diseases — reviews are “of higher quality” than at a journal where reviewers remain anonymous to authors, but when one journal switched from an open to a blinded system, review quality didn’t change.

Here’s what the authors conclude in the abstract of the paper, published today in BMJ Open:

Reviewers suggested by authors provide reports of comparable quality to non-author suggested reviewers, but are significantly more likely to recommend acceptance. Open peer review reports for BMC Infectious Diseases were of higher quality than single-blind reports for BMC Microbiology. There was no difference in quality of peer review in the Journal of Inflammation under open peer review compared with single blind.

This isn’t the first study to examine the effects of different types of review, the authors note:

It has been found that the reviewers chosen by editors are statistically-significantly more critical than those suggested by the authors. Intriguingly, the majority of the studies have been conducted on medical journals and there are no studies on biology journals.

For this study, which included papers in both medicine and biology, the authors compared two BMC journals that were similar in many ways — size, subject matter, impact factor, rejection rates — but one used open review, and the other, single-blind. They also examined reviewer reports from the Journal of Inflammation, which changed from open peer review to a single-blind model in 2010, comparing reports from before and after the policy change. All told, they rated 800 reviewer reports from the three journals.

To compare author-suggested with editor-suggested reviewers, the researchers found cases where the same paper was reviewed by one of each. For instance, with the BMC journals:

In each of BMC Microbiology and BMC Infectious Diseases, we identified 100 manuscripts that had two reviewers each, one suggested by the authors and one by another party (BioMed Central’s PubMed search tool comparing the abstract of the manuscript to abstracts in PubMed, another reviewer or editor).

And here’s how the authors rated the reviews:

Each peer review report was rated using an established Review Quality Instrument (RQI). Each report was rated separately and independently by two senior members of the editorial staff at BioMed Central. The peer review model and whether the reviewer was author suggested was unknown to the raters. However, the raters were not blinded to the reviewers’ identity.

The findings were decidedly nuanced, the authors note:

These results suggest that it may be advantageous to use open peer review but they do not undermine the validity of using the single-blind approach.

Study author Maria Kowalczuk, an editor at BMC, told us the paper addresses some long-standing concerns about removing reviewers’ anonymity:

Despite historical concerns about open peer review, we have found that it is just as good, if not slightly better, than single blind peer review.  Interestingly, peer reviewers suggested by authors produce good quality reports. Although they do tend to recommend acceptance more often than editor chosen peer reviewers, this does not affect editorial decision making.

The authors have also launched a new journal to further investigate how to improve the peer review process:

We encourage further research into peer review and publication ethics, and in order to facilitate research communication in this field we have launched a new journal Research Integrity and Peer Review.

The new journal is co-edited by Kowalczuk and Elizabeth Wager, a member of the board of directors of the Center for Scientific Integrity, our parent non-profit organization.

For our part, we find it reassuring to know that author-suggested reviewers (still the dominant form of peer review) are as likely to provide quality feedback as editor-suggested reviewers. But the fact they are more likely to recommend accepting papers is somewhat troubling, given the high number of problematic papers that pass peer review. Indeed, the authors don’t include the quality of the reviewed studies themselves in their analysis, which they acknowledge in the “Limitations of this study” section of their manuscript.

The authors also had only “moderate” agreement between the two raters who scored the reviews; since the analysis used the average of the two ratings, this could have skewed the results in one direction or the other.
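For readers curious what “moderate” agreement means in practice: inter-rater reliability on scales like the RQI is commonly summarized with a statistic such as Cohen’s kappa, which compares observed agreement with the agreement expected by chance. Below is a minimal illustrative sketch in Python; the scores are invented, and the paper’s exact agreement statistic may well differ.

```python
# Illustrative only: quantifying agreement between two raters who each
# scored the same peer review reports on an integer scale. The data are
# made up; the study's actual statistic and scores are not reproduced here.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated independently at their own marginal rates.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Toy scores for eight reports on a 1-5 quality scale.
rater_1 = [3, 4, 2, 5, 3, 4, 3, 2]
rater_2 = [3, 3, 2, 4, 3, 4, 2, 2]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
# Prints kappa = 0.47, which falls in the conventional "moderate" band
# (roughly 0.41-0.60 on the Landis & Koch scale).
```

Kappa corrects for the agreement two raters would reach by chance alone, which is why it is generally preferred over raw percent agreement for this kind of check.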


16 thoughts on “Should peer review be open, and rely less on author-picked reviewers? Study says…”

  1. “author-suggested reviewers (still the dominant form of peer review) are as likely to provide quality feedback as editor-suggested reviewers. But the fact they are more likely to recommend accepting papers is somewhat troubling, given the high number of problematic papers that pass peer review. Indeed, the authors don’t include the quality of the reviewed studies themselves in their analysis, which they acknowledge in the “Limitations of this study” section of their manuscript.”
    From the above, it seems pretty clear to me that author-selected peer reviewers are a pretty ridiculous format — just a good ole “you scratch my back and I’ll…” Also, I think the quality rating of the paper by the reviewers should be mandatory. In fact, author-selected versus editor-selected reviewers should have been used to review the same papers to compare the “criticality/severity” of the reviews. Open knowledge of the reviewers should not be allowed — e.g., you give me a bad review, guess what I am going to do; or vice versa: good → good.
    One question as an old timer: how many years has this article review format been predominant, e.g., before 2000?

  2. The paper is based on a misunderstanding of the peer review process, including the respective roles of the editor-in-chief, the reviewers and the authors. Ideally, the editors have a list of reviewers whose CVs are readily available and are used by the editor. The authors would benefit from looking at our book on peer review (Peer Review and Scientific Assessment) by Moghissi et al. Reviewers proposed by the authors should be added to each journal’s list after an evaluation by the editor/editorial advisory board. Based on our extensive experience as journal editors, organizing a large number of peer reviews for government agencies at virtually all levels, our conclusion is that there is nothing wrong with the peer review process, but a lot is wrong with how peer review is performed. The study quoted above describes deficiencies of how some journals do their peer review rather than of the peer review process itself.

    1. While various models may work well for different journals, from my experience working as a handling editor, editors are in constant need of expanding their databases of reviewers as they receive submissions on new topics from new areas of science. Suggestions for potential reviewers from the authors can be useful, as the authors know their field. Of course we agree that the editor should carefully check all the suggested reviewers. Our analysis shows that editors can be confident in the quality of reports provided by author-suggested reviewers, but they need to take into account that these reviewers tend to recommend acceptance of the manuscript more often than non-author-suggested reviewers.

  3. Aren’t there two separate factors that should be looked at independently? (1) Author-selected versus editor-selected reviewers, and (2) an open review system, where the identity of both sides is known to all, versus a closed review system, where only the identity of the author is known. Of course, there is the alternative where the identity of both sides is not known to either (but could be inferred, if one wanted to).

    Seems to me they should be doing a study that looks at these two factors independently. Or did they do that and I just missed it?

    To me, (1) seems unimportant. Even for journals that use author-selected reviewers, an editor can throw in one of his/her choices. The editor is under no obligation to use all of the authors’ choices.

    An open versus a closed reviewing system is different. And unfortunately, it isn’t just “I write a bad review for you and you write a bad one for me”. The author can choose to “get back” at a reviewer in other ways, from decisions on hiring/promotion, to bad-mouthing the reviewer to others, to even physical violence. It’d be a bit silly to place reviewers in witness relocation programs!

  4. Thanks for posting this story and describing the accompanying research.

    True, it’s somewhat different, but doubters had it that Wikipedia would be full of errors and could not handle specialized topics. This proved wrong. Wikipedia’s transparent, article-level editorial threads — in essence an ongoing peer review process — represent an improvement on today’s peer review model. Despite imperfections, new articles are regularly launched and curated as fast as many print publications.

    Whether the reviewers are hand-picked by the authors or by others (editors), they are still hand-picked, reflecting potential insider bias toward inside-the-box thinking. This is one legacy of an insular academic system, one that open publications only partially escape.

    Disclosing the authors is a different creature from identifying reviewers in the single-blind format discussed in the RW post. A known bias is that well-known, highly-published authors receive preferential treatment by editors. There is evidence that single-blind review can result in gender bias. It may also be one reason that some authors have accelerating publication counts. As they publish more, reviewers and editors feel that publishing their work is “safe,” even though there is considerable rehash and redundancy that would otherwise not be tolerated.

    A spot check of the Review History in BMJ Open for publications is highly illustrative. There are papers with only two reviews, and it’s not uncommon for language problems to call into question whether authors and reviewers fully understood each other. In other words, reviews, which are unrated in the academic publishing world, vary greatly in quality — as was noted in the paper mentioned by RW.

    Greater transparency (sometimes through “open” publishing, sometimes unrelated to it) would address certain sources of error or fraud. But there are many limits:

    -A paper’s references behind paywalls cannot be checked for misattributed comments or incorrect interpretations.
    -If the data is not published at the same time as the review, and in canonical formats readily accessible, it is impossible to eyeball data issues.
    -Even when data is available, it is often the case that reviewers have little capability to attempt to rerun statistical analysis or uncover whether proper statistical methods were applied given data limitations. The solutions to this are likely complex, but will entail greater standardization of data analysis and broad acceptance of eScience workflow.
    -Open publications should have improved access to third-party content, such as references to other papers and publications. Hyperlinks to existing work (at the paragraph level, not just the DOI) should be detailed enough to allow reviewers to understand the connections proposed between current and past work — a key aspect of many publications. Without this, the quality of reviews, as well as of the papers themselves, is suspect.
    -While it’s a somewhat different discussion, open publishing should have an ongoing, rather than a one-time peer review. Critiques of papers accepted for publication should not be the end of the review process.

    Getting published should be less important. Its obverse, retractions due to error or simply weak scholarship (not fraud, a different matter), should not be career-ending events, but rather part of the rough and tumble of advancing knowledge.

  5. I see traditional peer review work really well in some journals where editors serve science and are highly disciplined. Even so, it is never perfect, because human understanding and knowledge are limited. Have two peers and an editor complete “peer” review, and you’ll have one outcome. Have 100 peers and 10 editors complete peer review, and you’ll have a better outcome, most likely closer to perfection. But then, in the latter option, you’d have only a few dozen papers published from around the world per year.

    The publishers have lowered the bar and set the precedent all wrong. They have said, through the current system, that perfection is not attainable, but that some quality control is necessary to justify a sellable product, i.e., scientists’ intellectual achievement bundled up in a paper. Free editorial work. Free peer reviewers. Free intellectual hand-over as copyright. All they needed to do was set up the platform and the automated systems that select peers, to reduce the workforce and maximize the profits.

    It’s not about science. It’s about shareholders. When will scientists get it?

    The publishers have manipulated the system and milked the scientific base so well over centuries, and continue to do so with the OA movement, because they know that even if 100 suckers protest, there will be another million dying to replace them. The system is totally messed up, from the roots up and from the inside out. This talk about all these optional and alternative systems will never work. The whole system must collapse in its entirety first. And a totally new base built on knowledge, and not on exploitation, must replace it. Science run by scientists. Not by MBAs and marketing managers.

    Mark Underwood, although your ideas are noble, who pays for this never-ending peer review platform? Who pays for each round of files, processing, DOIs, etc.? Idealism tends to forget the basics. What you are sadly suggesting is the next exploitative model. The publishers don’t even need teams to plan the next move, because scientists are planning it for them. They sit back and watch the circus unfold while reaping the profits.

    In my opinion, it doesn’t really matter if reviewers are author-picked or editor-picked, because the ultimate goal is corrupted. It has lost its noble sense of pride in science. It serves only an economic finale. The sooner scientists realize this, the sooner science can collapse and refresh. We are simply delaying the inevitable end and prolonging the pain.

    1. Let’s not lay the blame on just publishers. It’s not just publishers that are filled with MBAs and marketing managers, but universities and research institutes as well. Once they realized that the number of papers was an easier value to measure and compare than something like “quality of the research”, it became a way to hire and promote staff. It was only a matter of time before researchers realized this, and then it was merely “beat them or join them” … and it’s much harder to beat them.

      As for the noble sense in research… in a way, every profession has the potential of being corrupted. And some have been more so than research! Medicine and law, for example — both are noble, but once practitioners realized that raising the number of patients/clients per hour could increase cash flow, it became very easy for such professionals to overshadow those who honestly wanted to heal people or interpret the law to help society. Surely such corruption was what led to the onslaught of lawyer jokes, and not lawyers themselves! 🙂

      *sigh*

  6. “author-suggested reviewers (still the dominant form of peer review):” this is a rather large generalisation. That is certainly not true across the full range of disciplines or journals.

  7. I want the research and publishing process to be as open as possible.

    For example, there should never be retractions – but notices should be published, pointing out why the editors think a paper should not be trusted anymore, and more critically, how this paper passed peer and editorial review.

    Then, everybody should be free to pick his or her referees, but their names, their opinions and their affiliations should be available under the same terms as the research article itself. Writing a fair, competent and thorough review on a manuscript could be even a career-promoting thing for a young researcher.

    No statue has ever been erected in memory of a critic, but here we could honor and incentivize the honest work of a peer reviewer, who is otherwise left unpaid.

    As the economists like to say… there are neither bad men nor bad behaviour, only bad incentives.

  8. The discussion is too broad to reach practical solutions. To manage the issue, it has to be divided into two clear groups: a) the past, already published literature, and how to correct the ills of the weaknesses associated with traditional peer review until this point; b) the future literature and how to avoid the errors that the past has shown us. Open peer review does not eliminate bias. In fact, it could increase it, especially given the power of social media and “like” vs “unlike” parameters, so a totally open and unregulated model could decrease quality, not increase it. Everyone is clamoring for more openness as if openness were synonymous with transparency, but this is not necessarily the case, because in such a system you would not be able to voice what is a conflict of interest and what is not: you would be making it a free-for-all system, free to add, change, critique, or remove at will. I suspect we are going to go through a lot more experimental models before we hit the right one. All the while, science – and the science literature – becomes more fragmented, and not diversified.

  9. Are author-suggested reviewers more likely to suggest publication not because of cronyism, but because of a kind of positive-reinforcement bias? They’re probably in the same sub (or sub-sub!) field as the author, like to see work in their area published in good journals, and are just generally more positively inclined towards the work than someone picked by the editor. Editor-selected reviewers have the right skills to review the paper (hence the similar quality of reviews), but may simply be less interested in the paper and therefore less likely to recommend it be accepted…

    1. I have to wonder how many of the “author suggested” reviewers were / are also on the editors’ lists of “possible reviewers”? As the field of potential reviewers gets smaller, surely there must be more and more overlap between these two lists?
      The study only looked at the reviews after the fact, i.e., the reviewers that were actually chosen and provided a review. It gives no details on reviewers that the editors might have wanted regardless of whether they were suggested by the authors or not.

  10. I am wary of this trend of Biomedicine scholars trying to pave a path for peer review norms.
    Firstly, in fields like business/management/economics and the social, human and behavioral sciences, double-blind peer review has always been the norm. The reviewers are neither nominated nor recommended by authors; this is wholly done by the editorial boards. Just because biomedicine is only opening its eyes now, it doesn’t mean it gets to set the rules. I am sorry, but they have to learn some lessons from other fields before thinking of carpet policies in science. This is the same with the COPE rules, where the dominant thinking is driven by only a few subject areas.

    1. It’s good to see the data about reviewer standards at Wiley. Of course, none of those statistics are quality metrics.

      Asserting that editorial boards make reviewer selections does not necessarily make reviews better, fairer or more objective. Editors pick who they know. Who they know is who is published — sometimes who the editors themselves have published. Especially in the potentially insular, specialized world of much academic publishing, this is a small, even self-congratulatory circle that can easily tend to reinforce current-paradigm thinking.

      Reviews, if themselves openly published, even if anonymous, would reveal the messy mix of superlative scholarship and uninspired repetition that goes on behind the veil.

      Maybe reviewers aren’t wearing the emperor’s new clothes in an ivory tower. Show us.
