After reviewing hundreds of peer review reports from three journals, authors representing publishers BioMed Central and Springer suggest there may be some benefits to using "open" peer review, in which both authors and reviewers reveal their identities, and to not relying on reviewers hand-picked by the authors themselves.
But the conclusions are nuanced: the researchers found that reviewers recommended by authors do just as good a job as other reviewers, but are more likely to tell the journal to publish the paper. At a journal that always uses open review, BMC Infectious Diseases, reviews were "of higher quality" than at a comparable journal where reviewers remain anonymous to authors; yet when one journal switched from an open to a single-blind system, review quality didn't change.
Here’s what the authors conclude in the abstract of the paper, published today in BMJ Open:
Reviewers suggested by authors provide reports of comparable quality to non-author suggested reviewers, but are significantly more likely to recommend acceptance. Open peer review reports for BMC Infectious Diseases were of higher quality than single-blind reports for BMC Microbiology. There was no difference in quality of peer review in the Journal of Inflammation under open peer review compared with single blind.
This isn’t the first study to examine the effects of different types of review, the authors note:
It has been found that the reviewers chosen by editors are statistically-significantly more critical than those suggested by the authors. Intriguingly, the majority of the studies have been conducted on medical journals and there are no studies on biology journals.
For the study, which included papers in both medicine and biology, the authors compared two BMC journals that were similar in many ways (size, subject area, impact factor, rejection rate) but differed in that one used open review and the other single-blind review. They also examined reviewer reports from the Journal of Inflammation, which changed from open peer review to a single-blind model in 2010, comparing reports from before and after the policy change. All told, they analyzed 800 reviewer reports from the three journals.
To compare author-suggested with editor-suggested reviewers, the researchers identified papers that had been reviewed by one of each. For instance, with the BMC journals:
In each of BMC Microbiology and BMC Infectious Diseases, we identified 100 manuscripts that had two reviewers each, one suggested by the authors and one by another party (BioMed Central’s PubMed search tool comparing the abstract of the manuscript to abstracts in PubMed, another reviewer or editor).
And here’s how the authors rated the reviews:
Each peer review report was rated using an established Review Quality Instrument (RQI). Each report was rated separately and independently by two senior members of the editorial staff at BioMed Central. The peer review model and whether the reviewer was author suggested was unknown to the raters. However, the raters were not blinded to the reviewers’ identity.
The findings were decidedly nuanced, the authors note:
These results suggest that it may be advantageous to use open peer review but they do not undermine the validity of using the single-blind approach.
Study author Maria Kowalczuk, an editor at BMC, told us the paper addresses some long-standing concerns about removing reviewers’ anonymity:
Despite historical concerns about open peer review, we have found that it is just as good, if not slightly better, than single blind peer review. Interestingly, peer reviewers suggested by authors produce good quality reports. Although they do tend to recommend acceptance more often than editor chosen peer reviewers, this does not affect editorial decision making.
The authors have also launched a new journal to further investigate how to improve the peer review process:
We encourage further research into peer review and publication ethics, and in order to facilitate research communication in this field we have launched a new journal Research Integrity and Peer Review.
The new journal is co-edited by Kowalczuk and Elizabeth Wager, a member of the board of directors of the Center for Scientific Integrity, our parent non-profit organization.
For our part, we find it reassuring that author-suggested reviewers (still a widespread practice) are as likely to provide quality feedback as editor-suggested reviewers. But the fact that they are more likely to recommend acceptance is somewhat troubling, given the high number of problematic papers that pass peer review. Notably, the authors' analysis does not consider the quality of the reviewed studies themselves, a point they acknowledge in the "Limitations of this study" section of their manuscript.
The authors also report only "moderate" agreement between the two raters of the reviews; because the two ratings were averaged, this could have skewed the results in either direction.