Although previous research has suggested peer reviewers are not influenced by knowing the authors’ identity and affiliation, a new Research Letter published today in JAMA suggests otherwise. In “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike at Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript, submitted to Clinical Orthopaedics and Related Research (CORR), that described a prospective study of communication and safety during surgery and included five “subtle errors.” Sixty-two experts reviewed the paper under the typical “single-blind” system, in which they are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.
Retraction Watch: You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions. But the difference was relatively small. Did anything about this surprise you?
Kanu Okike: Our findings are best viewed in light of the existing literature. While critics have expressed concern that single-blind review could be biased, several randomized controlled studies conducted previously did not find this to be the case [see for example these 1998 papers in JAMA (1, 2, 3)]. So do our results represent a “surprise”? That is hard to say, but our results certainly do run counter to the existing literature on the topic. Most medical journals practice single-blind review under the assumption that it is not biased. From this standpoint, our finding of a 19 percentage point difference in the recommended acceptance rates is notable.
RW: You found other differences between single-blind and double-blind review (higher ratings for methods, results, and other categories when not blinded to authors’ identities), but again, the differences seemed slight.
KO: As noted previously, the prevailing assumption in the field is that single-blind review is equivalent to double-blind review, with reviewers able to judge a manuscript on its merits alone without being swayed by the identities of the authors. From this standpoint, the fact that the single-blind reviewers awarded higher grades for the study’s methods, results, overall quality, and other categories, despite the two manuscripts being otherwise identical, is notable. It is true that the effect sizes were relatively small, around 1 point on a 10-point scale, but they were all in the same direction (higher for the single-blind reviewers) and statistically significant in 7 of the 8 categories examined.
RW: Reviewers found the same number of errors in reviewed papers, whether or not they were blinded to the authors’ identities. Did that surprise you?
KO: The most notable element of the error analysis was the small number of errors detected by both groups (fewer than one of the five on average). The significance of this finding is unclear. The intentionally introduced errors were designed to be subtle so as not to detract from the overall perceived quality of the manuscript, and it is possible that reviewers were focusing on larger issues as opposed to typos. However, this is certainly a finding worthy of further inquiry.