Male and female reviewers may rate papers the same way, regardless of whether the authors are male or female — but women are more likely to get the chance to review papers (and get their own papers reviewed) if other women are involved, according to studies of the review process at Functional Ecology.
In their comprehensive study of manuscripts put through the peer review process at the journal from January 2004 through June 2014, the authors found that the average review scores of manuscripts were roughly the same regardless of whether the reviewer — or editor — was male or female.
We’ve written quite a lot about the perks and pitfalls of the peer review system, but one thing we never really touched on was the risk that a reviewer might be … well, not to put too fine a point on it: a dope.
But Fiona Ingleby can speak to that. Ingleby, a postdoc in evolutionary genetics at the University of Sussex in the United Kingdom, co-wrote an article on gender differences in the transition from PhD-dom to postdoc land and submitted it to a journal for consideration. What she heard back was lamentably ironic — and grossly sexist.
Grant reviewers at the U.S. National Institutes of Health are doing a pretty good job of spotting the best proposals and ranking them appropriately, according to a new study in Science out today.
Danielle Li at Harvard and Leila Agha at Boston University found that grant proposals that earn good scores lead to research that is more cited, more published, and published in high-impact journals. These findings held even when the authors controlled for notoriously confounding factors, such as the applicant’s institutional quality, gender, history of funding and experience, and field.
Taking all those factors into consideration, grant scores that were 1 standard deviation worse (10.17 points, in the analysis) were linked to research that earned 15% fewer citations and 7% fewer papers, along with 19% fewer papers in top journals.
Li tells Retraction Watch that, while some scientists may not be surprised by these findings, previous research has suggested there isn’t much of a correlation between grant scores and outcomes:
How should scientists think about papers that have undergone what appears to be a cursory peer review? Perhaps the papers were reviewed in a day — or less — or simply green-lighted by an editor, without an outside look. That’s a question Dorothy Bishop, an Oxford University autism researcher, asked herself when she noticed some troubling trends in four autism journals.
Recently, Bishop sparked a firestorm when she wrote several blog posts arguing that these four autism journals had a serious problem. For instance, she found that Johnny Matson, then-editor of Research in Developmental Disabilities and Research in Autism Spectrum Disorders, had an unusually high rate of citing his own research – 55% of his citations were to his own papers, according to Bishop. Matson also published a lot in his own journals – 10% of the papers published in Research in Autism Spectrum Disorders since Matson took over in 2007 have been his. Matson’s prodigious self-citation in Research in Autism Spectrum Disorders was initially pointed out by autism researcher Michelle Dawson, as noted in Bishop’s original post.
Short peer reviews of a day or less were also common. Matson no longer edits the journals, both published by Elsevier.
Bishop noted similar findings at Developmental Neurorehabilitation and Journal of Developmental and Physical Disabilities, where the editors (and Matson) frequently published in each other’s journals, and peer reviews were often short: the median review time for Matson’s papers in Developmental Neurorehabilitation between 2010 and 2014 was one day, and many were accepted the day they were submitted, says Bishop.
Although this behavior may seem suspect, it wasn’t necessarily against the journals’ editorial policies. This is the peer review policy at RIDD:
BioMed Central is retracting 43 papers, following its investigation into 50 papers that raised suspicions of fake peer review, possibly involving third-party companies selling the service.
In November 2014 we wrote about fake peer reviews for Nature; at that point there had been about 110 retractions across several journals. The addition of 16 retractions by Elsevier for the same reason, and today’s 43 from BMC, brings retractions resulting from the phenomenon up to about 170.
BMC has also contacted institutions regarding 60 additional papers that were rejected for publication, but seem to be part of the same kind of scam. Regarding the third-party agents, BMC senior editor of scientific integrity Elizabeth Moylan writes:
David Vaux, a cell biologist at the Walter and Eliza Hall Institute of Medical Research in Melbourne, explains how Nature could do more to remove bias from the peer review process. He previously wrote about his decision to retract a paper.
Last week, Nature announced that it will offer authors of papers submitted to Nature or the monthly Nature research journals the option of having their manuscripts assessed by double-blind peer review, in which reviewers are blinded to the identity of authors and their institutions. Until now, papers sent to Nature, and most other journals, have been reviewed by a single-blind process, in which the reviewers know the identities and affiliations of the authors, but the authors are not told who the reviewers are. The goal of double-blind peer review is for submitted papers to be judged on their scientific merit alone, and thus to reduce publication bias.
While Nature should be applauded for this move, the way it has been implemented leaves room for improvement.