Although previous research has suggested that peer reviewers are not influenced by knowing the authors’ identities and affiliations, a new Research Letter published today in JAMA suggests otherwise. For “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike of Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript, which they submitted to Clinical Orthopaedics and Related Research (CORR). It described a prospective study of communication and safety during surgery and included five “subtle errors.” Sixty-two experts reviewed the paper under the typical “single-blind” system, in which reviewers are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.
Retraction Watch: You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions. But the difference was relatively small. Did anything about this surprise you?
Kanu Okike: Our findings are best viewed in light of the existing literature. While critics have expressed concern that single-blind review could be biased, several previously conducted randomized controlled studies did not find this to be the case [see, for example, these 1998 papers in JAMA (1, 2, 3)]. So do our results represent a “surprise”? That is hard to say, but they certainly run counter to the existing literature on the topic. Most medical journals practice single-blind review under the assumption that it is not biased. From this standpoint, our finding of a 19 percentage point difference in recommended acceptance rates is notable.
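To give a rough sense of scale, here is a minimal sketch of how a 19 percentage point gap in recommended acceptance rates plays out statistically given the reviewer counts reported here (62 single-blind, 57 double-blind). The specific acceptance rates below are hypothetical, chosen only to be consistent with the reported gap; the actual rates appear in the JAMA letter.

```python
# Minimal sketch: two-sided two-proportion z-test with pooled standard error.
# The 0.85 and 0.66 acceptance rates are HYPOTHETICAL, chosen only to match
# the reported 19-point gap; the true figures are in the JAMA letter.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Return (z, two-sided p-value) for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(0.85, 62, 0.66, 57)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 2.4, p = 0.016
```

With these sample sizes, a gap of that magnitude comfortably clears conventional significance thresholds, which is one way to see why the 19-point figure is more than noise.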
RW: You found other differences between single-blind and double-blind review (higher ratings for methods, results, and other categories when reviewers were not blinded to the authors’ identities), but again, the differences seemed slight.
KO: As noted previously, the prevailing assumption in the field is that single-blind review is equivalent to double-blind review, with reviewers able to judge a manuscript on its merits alone, without being swayed by the identities of the authors. From this standpoint, the fact that the single-blind reviewers awarded higher grades for the study’s methods, results, overall quality, and so on (despite the two manuscripts being otherwise identical) is notable. It is true that the effect sizes were relatively small, around 1 point on a 10-point scale, but they were all in the same direction (higher for the single-blind reviewers) and statistically significant in 7 of the 8 categories examined.
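For context on why a roughly 1-point gap on a 10-point scale can still be statistically significant with about 60 reviewers per arm, here is a minimal sketch using a two-sample t-test. All of the means and standard deviations below are hypothetical placeholders, not values from the CORR experiment.

```python
# Minimal sketch: how large a rating standard deviation still lets a ~1-point
# mean gap reach significance with 62 vs. 57 reviewers. All summary statistics
# here are HYPOTHETICAL; none come from the actual study.
from scipy.stats import ttest_ind_from_stats

for sd in (1.5, 2.0, 2.5, 3.0):
    t, p = ttest_ind_from_stats(
        mean1=7.8, std1=sd, nobs1=62,   # hypothetical single-blind ratings
        mean2=6.8, std2=sd, nobs2=57,   # hypothetical double-blind ratings
    )
    print(f"SD = {sd}: t = {t:.2f}, p = {p:.4f}")
```

Under these assumptions, the 1-point difference is significant for rating spreads up to roughly 2.5 points, which makes the pattern of consistent, same-direction differences across categories easier to interpret.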
RW: Reviewers found the same number of errors in the paper whether or not they were blinded to the authors’ identities. Did that surprise you?
KO: The most notable element of the error analysis was the small number of errors detected by both groups (fewer than one of the five, on average). The significance of this finding is unclear. The intentionally introduced errors were designed to be subtle, so as not to detract from the overall perceived quality of the manuscript, and it is possible that reviewers were focusing on larger issues rather than typos. However, this is certainly a finding worthy of further inquiry.
The observation that double-blind reviewers tend to give more negative reviews has been made in the past (e.g., Blank, 1991), although these effects are inconsistent: other studies have found no effect. Prestige is a red herring here, as the authors didn’t try to vary it (e.g., by giving names and affiliations from somewhere less well known). The lack of difference in error spotting also replicates what Godlee et al. found (cited above, but strangely not in the paper).
I blogged about this for Methods in Ecology & Evolution last week, which is why I’m up on the literature at the moment.
The title of this paper is not unlike “the world may be round.” What is going on that double-blind refereeing hasn’t always been automatic in all scientific journals, as it is in, say, philosophy journals? The stories told on your estimable e-newsletter have long been shocking to me, but even more shocking is that obvious solutions have not been implemented long since.
Exactly. I once had a discussion with an economist who was shocked to hear that single-blind review is the norm in the natural sciences. He said double-blind is the standard in economics and that he couldn’t even imagine an economics journal using single-blind review. It’s a sad day when economists have to teach scientists how to do their work.
One small potential methodological problem here: unless the paper describes entirely new research, a quick copy, paste, and search of some key words from the double-blind submission will pull up conference papers and related publications. At that point, it is pretty easy to figure out the identity of the anonymous authors. This is particularly true in small fields like mine.
I’d like to see the actual paper and the author affiliations and identities used. This is important because the result could have been swayed in the opposite direction if different authors were used on the single-blind paper.
Say, for example, we’re comparing single-blind review of a paper by a very well known and respected set of authors at a prestigious institution (where there’s no doubt they have the resources and ability to pull off the described research) versus double-blind review. It would not be surprising to learn that the single-blind paper was more favorably reviewed.
But if the single-blind paper came from an unknown or not very highly regarded group at Podunk College, Hicksville (thus raising questions about whether they even have the resources to do the work at all), it would not be surprising to see such a paper reviewed worse than the double-blind paper.
In other words, without knowing the actual authors and institution (and their perceived prestige, resources, etc.) on the “test” paper, it is impossible to tell whether the result is a general property of single- vs. double-blind review or simply an artifact of the particular level at which the single-blind paper was pitched.
RW asked “You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions.” But was “prestige” (of the author or of their institute) a factor, or simply the identity of the authors, including their affiliations? If “prestige” was in fact a factor, then how was “prestige” of an “author” or of an “affiliation” assessed, or quantified?
Is double-blind review common for the journal that lent its name to this experiment? If not, a difference in outcome could be caused by the procedure being unusual, not by the nature of the procedure per se.
I am a senior researcher and have recently received two quite condescending sets of double-blind reviews from junior researchers who assumed I didn’t know what I was talking about. For example, one reviewer suggested that we perform a lot of other analyses with the same data set, when in fact we had already published many of those analyses elsewhere. Or they lecture us, parroting trite concepts as though we were unfamiliar with them, instead of realizing that yes, we know these concepts, but are critiquing some aspect of them.