In a recent editorial, the Journal of Neurochemistry declared it would no longer accept author-suggested reviewers. While other journals have done the same in order to prevent fake reviews, the Journal of Neurochemistry is basing its decision on a different logic. We spoke with editor Jörg Schulz about why he believes relying on reviewers picked by editors helps reduce bias in the peer-review process.
Retraction Watch: What prompted you to compare the outcomes of papers reviewed by experts suggested by authors versus experts selected by editors, or experts the authors “opposed?”
Jörg Schulz: Journal of Neurochemistry continually strives to advance good research practice and ethical standards. Our aim is to publish clear, transparent information that will allow any reader to understand and reproduce the reported research as needed. But it is a challenging task for our board of editors and reviewers to assess the manuscripts submitted to the journal to ensure these standards are being met. And increasing evidence in the literature (e.g. Kowalczuk et al. 2015; Helmer et al. 2017; Kuehn 2017; Lee et al. 2012; O’Connor 2012; Travis & Collins 1991) suggests that peer reviewers may not be free of bias, even subconscious bias, in assessing submitted work. We wanted to analyze whether the choice of reviewers had an influence on the outcome of a manuscript, with the goal of ensuring fairness in our peer review process.
RW: How often does the journal use reviewers the authors “opposed,” and in what circumstances?
JS: “Opposed” reviewers are used only rarely, in exceptional cases: when not enough other suitable reviewers agree to review a paper, when the “opposed” reviewer is a researcher of good standing in the field who can provide an expert view on specific aspects of the work, or when there is other good justification. The overall percentage of “opposed” reviewers is very low, as you can see in our analysis. Journal of Neurochemistry follows authors’ wishes in exempting “opposed” reviewers wherever possible.
RW: You note that papers reviewed by at least one reviewer suggested by an author were more likely to be accepted (52%) than papers reviewed only by experts selected by editors (32%). The concern is that authors recommend reviewers they think will rate their work positively, but how can the editors ensure they don’t select experts who work in a competing area, and so might be overly (and perhaps unfairly) critical of the authors’ work? Or avoid selecting a reviewer who lacks the proper expertise?
JS: Our editors are advised to perform a careful check on reviewers’ expertise before inviting them. This check may be based, for instance, on personal experience, on previous reviewing activity logged in our database, or on an assessment of publications (e.g. on PubMed) that demonstrate a reviewer’s expertise. Bear in mind that peer review is a voluntary service to the scientific community and that most editors and reviewers are intrinsically motivated to keep the quality of publications high: it is, after all, work that they themselves rely on and will base their own research on. And, not to forget, every researcher wants their own work to be assessed equally thoroughly! Furthermore, a number of reviewers actually decline invitations because they feel they have a conflict of interest, for instance if they have collaborated with one of the authors in the past, if they feel they lack sufficient expertise on aspects of the work, or if for another reason they are not confident they would judge the work from a neutral perspective.
Journal of Neurochemistry has a pool of handling editors, who are assigned to a given manuscript based on their specific scientific expertise. Thus, the editors know the area of research associated with the manuscript and strive to select the most suitable reviewers, who in turn are scored for the quality of their reports. Reviewers who perform below standards stand a lower chance of being invited in the future, and because each field has its own research community, every reviewer has an incentive to build a good reputation. Undoubtedly, there will always be “black sheep,” but what worried us more were the potential imbalances that can arise from having author-suggested reviewers in some cases but not in others.
RW: Did any of your findings surprise you?
JS: We anticipated that the outcome for manuscripts reviewed by author-suggested reviewers might turn out more positive, because that is what the literature increasingly suggests. However, we did not anticipate that the effect size would be so substantial. It has been suggested that reviewers in the immediate scientific community, who might be author-suggested reviewers, are better suited because they have the same background, the same thinking, the same technical expertise, attend the same conferences, sometimes were influenced by the same leaders in the field, etc. But the quality of reviews, as evaluated by the handling editors, did not differ between author-suggested and non-author-suggested reviews. Therefore, the large difference in acceptance rates led us to conclude that we should abandon author-suggested reviewers.
RW: How long does the submission/review process typically take at the journal, and do you expect the process to take longer if the editors have to identify and invite the reviewers?
JS: The average time from submission to first decision is 28 days. The editors typically select and invite reviewers within 2-3 days after submission, and we do not observe differences depending on whether they invite author-suggested or non-suggested reviewers. We assume this is because the editors need to screen the reviewer names either way, and once they log in to invite reviewers, they will finish the task whether it takes 5 or 30 minutes.
RW: We’ve seen other journals and publishers opt out of author-suggested reviewers to avoid the problem of fake peer review, which has already led to hundreds of retractions — why did you choose not to mention this issue in the article about the decision to no longer allow authors to recommend reviewers?
JS: We have covered the problem of fake reviews in our current article, as well as before. However, about two years ago our publisher, Wiley, provided us with a brief analysis of the potential presence of fake reviews in the journal, based on certain indicators: author-suggested reviewers from China or Taiwan (because that is where all the evidence up to that point had pointed) with non-institutional email addresses, recommending acceptance or minor revision, with “soft” reviews consisting of a few lines commending the paper followed by relatively minor copy-editing suggestions. The results of that analysis did not raise worries that Journal of Neurochemistry might suffer from an appreciable amount of fake reviews. Therefore, we chose to abandon the option to suggest reviewers for the reasons we already discussed in our paper. The lower emphasis on fake reviews in the article reflects a different focus, rather than a lack of awareness of the concerns.