If you’ve ever submitted a paper, you know that many journals ask authors to suggest experts who can peer review their work. That’s understandable; after all, as science becomes more and more specialized, it becomes harder to find reviewers knowledgeable in smaller niches.
Human nature being what it is, however, it would seem natural for authors to recommend reviewers who are a bit more likely to recommend acceptance. Such author-suggested reviewers are just one source of the two or three experts who vet a particular paper, and are required to disclose any conflicts of interest that might bias their recommendations.
Still, editors have justifiable concerns that relying too heavily on such reviewers may subtly increase acceptance rates. That’s why we’re interested in such issues at Retraction Watch. Increasing a journal’s acceptance rate, of course, could mean increasing the number of papers at the lower end of the quality spectrum, and perhaps raising the rate of retractions.
The Journal of Pediatrics recently peered into its own peer review system, to paraphrase their headline, to try to figure out whether author-suggested reviewers were having a particular effect on their process. They started with other studies of the phenomenon (we replaced footnotes with links):
Studies have found that editor-suggested reviewers (ESRs) were less likely than author-suggested reviewers (ASRs) to recommend acceptance. Rivara et al found, in The Archives of Pediatrics and Adolescent Medicine, that 75% of ESRs recommended accept or revise, and 86% of ASRs recommended accept or revise. In a sample of reviewers from 10 journals, Schroter et al found that ASRs tended to review manuscripts more favorably than ESRs.
When they looked at 300 papers submitted in 2007 to The Journal of Pediatrics, the authors, one of whom is the journal’s editor in chief, found:
When evaluating manuscripts, 65.3% of ASRs recommended acceptance (109/167), whereas 54.2% of ESRs recommended acceptance (96/177; P = .04). Editors agreed with 49.5% (54/109) of the accept recommendations of ASRs (P<.0001) and with 55.2% (53/96) of ESRs (P <.0001).
In other words, as the authors note:
…ASRs are more likely to recommend acceptance of a submitted manuscript, although editors are less likely than both ASRs and ESRs to recommend acceptance of a manuscript.
The authors conclude:
Although our findings could be caused by a variety of factors, the results support the peer review motto, ‘‘Reviewers advise; editors decide.’’ Because editors serve as the gatekeepers to medical literature, the final responsibility resides with the editors to ensure impartiality in the peer-review process.
I agree with the authors’ conclusions: although there do appear to be some slight differences between reviewers suggested by authors and those chosen by editors, those differences aren’t reason enough to stop using author-suggested reviewers.
As Wager, who serves as chair of the Committee on Publication Ethics, pointed out, testing peer review decisions is difficult because there’s nothing to test them against. Recommending “acceptance of more articles might actually be the ‘right’ decision,” depending on how you define “right.”
In the end, the findings suggest that as long as editors are living by “reviewers advise, editors decide,” any small difference in how ASRs review papers is unlikely to have much of an effect. But we applaud The Journal of Pediatrics, and any others that take the time to study this issue, for keeping tabs on it.