Is peer review a good way to weed out problematic papers? And if it is, which kinds of peer review? In a new paper in Scientometrics, Willem Halffman, of Radboud University, and Serge Horbach, of Radboud University and Leiden University, used our database of retractions to try to find out. We asked them several questions about the new work.
Take this case: In a court transcript from Feb. 23, 2017, Bryan Hardin testified that he was a peer reviewer on a 2016 paper in Critical Reviews in Toxicology, which found that asbestos does not increase the risk of cancer. In the deposition, Hardin—who works at the consulting firm Veritox—also said that he has testified in asbestos litigation on behalf of automakers, such as Ford, General Motors, and Chrysler, but said he had not disclosed these relationships to the journal.
Last year, the first author of the 2016 review withdrew a paper from another journal (by the same publisher) which concluded asbestos roofing products are safe, following several criticisms — including not disclosing the approving editor’s ties to the asbestos industry. In this latest case, the journal told us it believes the review process for the paper was up to snuff, but two outside experts we consulted said they believed Hardin’s relationships — and failure to disclose them — should give the journal pause.
We obtained a copy of the transcript from Christian Hartley, who was representing a man suing a mining company because the man developed cancer after being exposed to asbestos at work. When Hartley asked Hardin whether he had told the journal about testifying for companies involved in asbestos litigation, Hardin responded:
Scientists are always pressed for time; still, Raphael Didham of the University of Western Australia was surprised when he came upon a group of early career scientists using a spreadsheet formula to calculate whether they were obligated to accept an invitation to review a paper, based on how many manuscripts they had submitted for review. “I recall that sharp moment of clarity that you sometimes get when you look up from the keyboard and realise the world you (thought you) knew had changed forever,” Didham and his colleagues write in a recent editorial in Insect Conservation and Diversity. We spoke with Didham about how to convince scientists that peer reviewing is a benefit to their careers, not a burden.
Women don’t peer review papers as often as men, even taking into account the skewed sex ratio in science – but why? In a new Comment in today’s Nature, Jory Lerback at the University of Utah and Brooks Hanson at the American Geophysical Union (AGU) confirmed the same trend in AGU journals, which they argue serve as a good proxy for STEM demographics in the U.S. What’s more, they found the gender discrepancies stemmed from women of all levels of seniority receiving fewer invitations to review (from both male and female authors). And when women do get their invites, they say “no” more often. We spoke with Lerback and Hanson about what might underlie this trend, and how the scientific community should address it.
Retraction Watch: What made you decide to undertake this project?
As if peer reviewers weren’t overburdened enough, imagine if journals also asked them to independently replicate the experiments they were reviewing. True, replication is a big problem — and always has been. At the November 2016 SpotOn conference in London, UK, historian Noah Moxham of the University of St Andrews in Scotland mentioned that, in the past, some peer reviewers did replicate experiments. We asked him to expand on the phenomenon here.
When a former Stanford psychology researcher lost her fifth paper last year due to unreliable results, one researcher took particular notice: Martha Alibali at the University of Wisconsin-Madison. Why? She had reviewed the 2006 paper, and took to social media to express her dismay that the time and effort she had spent on the review had come to this. We spoke with Alibali further about her reactions to the news.
Retraction Watch: You reviewed the paper more than 10 years ago. Can you recall what you thought about it? In retrospect, were there any red flags or doubts you had about the findings that you wish you’d caught?
Although previous research has suggested peer reviewers are not influenced by knowing the authors’ identity and affiliation, a new Research Letter published today in JAMA suggests otherwise. In “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike at Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript and submitted it to Clinical Orthopaedics and Related Research (CORR); it described a prospective study about communication and safety during surgery, and included five “subtle errors.” Sixty-two experts reviewed the paper under the typical “single-blind” system, in which they are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.