If you’ve ever submitted a paper, you know that many journals ask authors to suggest experts who can peer review their work. That’s understandable; after all, as science becomes increasingly specialized, it gets harder to find reviewers knowledgeable in smaller niches.
Human nature being what it is, however, it would seem natural for authors to recommend reviewers who are a bit more likely to recommend acceptance of their work. Such author-suggested reviewers are just one source of the two or three experts who vet a particular paper, and they are required to disclose any conflicts of interest that might bias their recommendations.
Still, editors have justifiable concerns that relying on too many of them may be subtly increasing their acceptance rates. That’s why we’re interested in such issues at Retraction Watch: increasing a journal’s acceptance rate could mean increasing the number of papers at the lower end of the quality spectrum, and perhaps raising the rate of retractions.
The Journal of Pediatrics recently peered into its own peer review system, to paraphrase its headline, to try to figure out whether author-suggested reviewers were having a particular effect on its process. They started with other studies of the phenomenon (we replaced footnotes with links):
Studies have found that editor-suggested reviewers (ESRs) were less likely than author-suggested reviewers (ASRs) to recommend acceptance. Rivara et al found, in The Archives of Pediatrics and Adolescent Medicine, that 75% of ESRs recommended accept or revise, and 86% of ASRs recommended accept or revise. In a sample of reviewers from 10 journals, Schroter et al found that ASRs tended to review manuscripts more favorably than ESRs.
When they looked at 300 papers submitted in 2007 to The Journal of Pediatrics, the authors, one of whom is the journal’s editor in chief, found:
When evaluating manuscripts, 65.3% of ASRs recommended acceptance (109/167), whereas 54.2% of ESRs recommended acceptance (96/177; P = .04). Editors agreed with 49.5% (54/109) of the accept recommendations of ASRs (P < .0001) and with 55.2% (53/96) of ESRs (P < .0001).
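(A quick sanity check for the statistically curious: the acceptance fractions work out as reported, and the P value is consistent with a standard comparison of two proportions. Here’s a minimal sketch in Python, assuming a plain chi-square test on the 2×2 table; the quoted passage doesn’t say exactly which test the authors used.)

```python
# Sanity check of the reported comparison: 109/167 ASRs vs. 96/177 ESRs
# recommending acceptance. We assume a chi-square test on the 2x2 table
# without continuity correction; the paper may have used another test.
from scipy.stats import chi2_contingency

table = [
    [109, 167 - 109],  # ASRs: accept vs. not accept
    [96, 177 - 96],    # ESRs: accept vs. not accept
]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"ASR acceptance: {109/167:.1%}")          # 65.3%
print(f"ESR acceptance: {96/177:.1%}")           # 54.2%
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")   # P comes out near .04
```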
In other words, as the authors note:
…ASRs are more likely to recommend acceptance of a submitted manuscript, although editors are less likely than both ASRs and ESRs to recommend acceptance of a manuscript.
The authors conclude:
Although our findings could be caused by a variety of factors, the results support the peer review motto, “Reviewers advise; editors decide.” Because editors serve as the gatekeepers to medical literature, the final responsibility resides with the editors to ensure impartiality in the peer-review process.
So, should editors be concerned about the effects ASRs are having on the peer-review process? We asked Liz Wager, who co-authored one of the studies of this phenomenon, for her take:
I agree with the authors’ conclusions that, although there do appear to be some slight differences between reviewers suggested by authors and those chosen by editors, these differences don’t suggest that author-suggested reviewers should not be used.
As Wager, who serves as chair of the Committee on Publication Ethics, pointed out, testing peer review decisions is difficult because there’s nothing to test them against. Recommending “acceptance of more articles might actually be the ‘right’ decision,” depending on how you define “right.”
In the end, the findings suggest that as long as editors are living by “reviewers advise, editors decide,” any small difference in how ASRs review papers is unlikely to have much of an effect. But we applaud The Journal of Pediatrics, and any others that take the time to study this issue, for keeping tabs on it.
What I have noticed is that ESRs are sometimes so far from the field of a given manuscript that they do not appreciate its contribution. Unfortunately, on many occasions it is very easy to spot an ESR because of the naivety of their comments.
I would suggest, and I have seen many journals do this, that authors be permitted only to name reviewers who should NOT review the paper. That seems to make much more sense to me.
The flip side of this coin is that journals have a tendency to go back time and time again to people who have turned reviews around in a timely manner before, or who are members of the editorial board (sometimes ONLY to members of the editorial board). This can be understandably frustrating to an author, as it can be obvious that the reviewer is not an expert. I would also suggest, of most interest to Retraction Watch, that reviewers who are not directly in the authors’ sub-field are more likely to miss critical issues such as plagiarism and duplicated figures, as they are less familiar with the details.
I agree with previous commenters that ASRs are often likely to be more familiar with the work and its context than ESRs, and I think this is likely the impetus for soliciting ASRs in the first place. Editors are obviously supposed to have knowledge of their field, but it is often difficult to keep up with all facets of research. In my experience, editors often choose a mix of ASRs and ESRs, which is probably a good practice to minimize bias and maximize fairness.
That being said, I have had many discussions with colleagues to the effect of “how could reviewers have missed this?!” when examining a publication. In this regard, it seems like the ASR/ESR issue may only be one small part of the problem.
I publish clinical research manuscripts out of a large community teaching hospital. I find it a pain to come up with (usually) three ASRs, as we most often do not have three easily identified reviewers whom we know to publish on our topic(s). So it tends to be a shot in the dark for us.
I wouldn’t mind having ESRs who are well versed in the general methodology of clinical trials, even if that meant having to respond to questions indicating they are not experts in the particular topic we are investigating.
One thing that bothers me is that reviewers’ comments are anonymous. Supposedly this protects the “sanctity” of the peer review system. To me, if I make a comment in good faith I should stand by it and be willing to listen to people if they have a problem with it. But supposedly the reviewers need to be “protected.”
The peer review system is not perfect by a long shot but I’m not sure what should replace it.
As a former editor of a top-tier journal, I can tell you that there’s another side to this. We too solicited ASRs, who were used in the mix for any submitted manuscript. I can assure you that there were plenty of very negative critiques from the ASRs. Then the author would harass us about why we didn’t use their ASRs, when in fact we had. I guess the lesson is: be careful what (or whom) you ask for.
Also, I wonder if the request for suggested reviewer names is a bit unfair to younger researchers. They are less likely to know whom to recommend, and they may make recommendations that don’t really make sense.
I think transparency could easily be achieved if reviewer comments were posted online with the reviewers’ names. All too often we find publications in Nature and Science and are left scratching our heads, wondering how ‘this’ got published. Look at the corresponding author and we can guess: big names get to publish in big journals, even though the work itself is less than profound. Publishing the reviewers’ names and comments would be a big step toward ensuring that journals publish work based on merit and not on reputation.
An anecdote is not data, but I was once the third reviewer on a paper that already had two “very good, publish in present form” type reports. I was taken aback when I found it to be riddled with obvious errors. On raising the discrepancy with the editor, I was less than reassured by “Sometimes these things happen.” I can only presume that the two flawless reports came from stooges selected by the author.