Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about peer review’ Category

Welcome to the Journal of Alternative Facts. They’re the greatest! And winning!

with 17 comments

Ever since Kellyanne Conway, counselor to U.S. President Donald Trump, used the term “alternative facts” on Meet The Press earlier this month, the term — an awful euphemism for falsehoods, as many have pointed out — has become a meme. And like every new field, alternative facts needs its own journal. Enter the Twitter feed for the Journal of Alternative Facts, featuring such gems as Scientistonce, I.A. (2017), “We Have All the Best Climates, Really, They’re Great.”

We spoke to the founding editor to find out more about how they became the greatest overnight: Read the rest of this entry »

Written by Ivan Oransky

January 31st, 2017 at 2:57 pm

Do you calculate if you should accept an invite to peer review? Please stop, say journal editors

with 20 comments

Raphael Didham

Scientists are always pressed for time; still, Raphael Didham of the University of Western Australia was surprised when he came across a group of early-career scientists using a spreadsheet formula to calculate whether they were obligated to accept an invitation to review a paper, based on how many manuscripts they had submitted for review. “I recall that sharp moment of clarity that you sometimes get when you look up from the keyboard and realise the world you (thought you) knew had changed forever,” Didham and his colleagues write in a recent editorial in Insect Conservation and Diversity. We spoke with Didham about how to convince scientists that peer reviewing is a benefit to their careers, not a burden.
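The editorial does not reproduce the spreadsheet itself, so the sketch below is purely hypothetical: it assumes the common rule of thumb that each submitted manuscript consumes roughly two to three reviews from the community, and the function name and numbers are illustrative, not Didham’s.

```python
# Hypothetical sketch of the "zero-sum" review bookkeeping described above.
# None of these numbers or names come from Didham's editorial; they are
# assumptions chosen only to illustrate the arithmetic.
def reviews_owed(manuscripts_submitted: int,
                 reviews_completed: int,
                 reviewers_per_submission: float = 2.5) -> float:
    """Estimate outstanding 'review debt': each manuscript an author submits
    typically consumes two to three reviews from other researchers."""
    return manuscripts_submitted * reviewers_per_submission - reviews_completed

# e.g. someone who submitted 4 papers this year and reviewed 6 manuscripts
print(reviews_owed(manuscripts_submitted=4, reviews_completed=6))  # -> 4.0
```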

Retraction Watch: You talk about the current problem of “zero-sum” reviewing. Could you define that in the context of the scientific peer review system? Read the rest of this entry »

Written by Alison McCook

January 26th, 2017 at 11:45 am

Why don’t women peer review as often as men? Fewer invites and RSVPs, researchers say

with 4 comments

Brooks Hanson

Jory Lerback. Image courtesy of the University of Utah

Women don’t peer review papers as often as men, even taking into account the skewed sex ratio in science – but why? In a new Comment in today’s Nature, Jory Lerback at the University of Utah and Brooks Hanson at the American Geophysical Union (AGU) confirmed the same trend in AGU journals, which they argue serve as a good proxy for STEM demographics in the U.S. What’s more, they found the gender discrepancies stemmed from women of all levels of seniority receiving fewer invitations to review, from both male and female authors. And when women get their invites, they say “no” more often. We spoke with Lerback and Hanson about what might underlie this trend, and how the scientific community should address it.

Retraction Watch: What made you decide to undertake this project?

Read the rest of this entry »

Written by Alison McCook

January 25th, 2017 at 1:00 pm

Dear Peer Reviewer: Could you also replicate the experiments? Thanks

with 14 comments

via the University of St Andrews

As if peer reviewers weren’t overburdened enough, imagine if journals asked them to independently replicate the experiments they were reviewing. True, replication is a big problem — and always has been. At the SpotOn conference in London in November 2016, historian Noah Moxham of the University of St Andrews in Scotland mentioned that, in the past, some peer reviewers did replicate experiments. We asked him to expand on the phenomenon here.

Retraction Watch: During what periods in history did peer reviewers repeat experiments? And how common was the practice? Read the rest of this entry »

Written by Dalmeet Singh Chawla

January 9th, 2017 at 11:00 am

“My time and energy were stolen:” Peer reviewer reacts to retraction

with 5 comments

Martha Alibali

When a former Stanford psychology researcher lost her fifth paper last year due to unreliable results, one researcher took particular notice: Martha Alibali at the University of Wisconsin-Madison. Why? She had reviewed the 2006 paper, and took to social media to express her dismay that the time and effort she had spent reviewing the now-retracted work had been wasted. We spoke with Alibali further about her reactions to the news.

Retraction Watch: You reviewed the paper more than 10 years ago. Can you recall what you thought about it? In retrospect, were there any red flags or doubts you had about the findings that you wish you’d caught?

Read the rest of this entry »

Written by Alison McCook

January 3rd, 2017 at 9:30 am

“Bats are really cool animals!” How a 7-year-old published a paper in a journal

with 7 comments

Alexandre Martin

The scientific literature has seen its share of child prodigies – such as a nine-year-old who published a study in JAMA, and a group of eight-year-olds who reported on bumblebees in Biology Letters. But Alexandre Martin of the University of Kentucky sought to help his seven-year-old son get published in a non-traditional way – by submitting his school report to a journal on Jeffrey Beall’s list of predatory publishers, the (now-defunct) International Journal of Comprehensive Research in Biological Sciences. They recount the story in a recent paper in Learned Publishing, giving young Martin his first taste of academic publishing and helping his father expose the flaws of predatory journals.

Retraction Watch: As part of your experiment, you reformatted a booklet about bats written by your seven-year-old. In an excerpt in your paper, one line says “Bats are really cool animals!” The entire paper was only 153 words, according to Times Higher Education. Did you think the paper would be accepted by the journal? Read the rest of this entry »

Written by Alison McCook

October 18th, 2016 at 11:30 am

Reviewers may rate papers differently when blinded to authors’ identities, new study says

with 8 comments

Kanu Okike

Although previous research has suggested peer reviewers are not influenced by knowing the authors’ identity and affiliation, a new Research Letter published today in JAMA suggests otherwise. In “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike at Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript and submitted it to Clinical Orthopaedics and Related Research (CORR); the manuscript described a prospective study about communication and safety during surgery, and included five “subtle errors.” Sixty-two experts reviewed the paper under the typical “single-blind” system, in which they are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.

Retraction Watch: You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions. But the difference was relatively small. Did anything about this surprise you? Read the rest of this entry »

Written by Alison McCook

September 27th, 2016 at 11:00 am

Would peer review work better if reviewers talked to each other?

with 15 comments

Katherine Brown

Would distributing all reviewers’ reports for a specific paper among every referee, before deciding whether to accept or reject a manuscript, make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development as part of its attempt to improve the peer-review process. Katherine Brown, executive editor of Development, based in Cambridge, UK, co-authored a recent editorial about the change and spoke to us about the move.

Retraction Watch: Many journals share the reviews of a particular paper with those who’ve reviewed it. What is cross-referee commenting in peer review and how is it different from current reviewing processes? Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 21st, 2016 at 9:30 am

Here’s why more than 50,000 psychology studies are about to have PubPeer entries

with 16 comments

PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in academic literature.

The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from the large sample of psychology studies. Although the trends in the present data have yet to be explored, previous research by Chris Hartgerink, the Tilburg University researcher leading the effort, suggests that around half of psychology papers have at least one statistical error, and one in eight have mistakes that affect their statistical conclusions. In the current effort, regardless of whether any mistakes are found, the results of the checks are posted to PubPeer, and authors are alerted by email.
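The core of such a check is simple to sketch. statcheck itself is an R package that extracts APA-formatted results (e.g. “t(28) = 2.20, p = .036”) and recomputes the p-value from the reported test statistic and degrees of freedom; the Python snippet below is only a rough illustration of that idea, not statcheck’s actual code, and the tolerance and example values are assumptions.

```python
# Illustrative sketch of a statcheck-style consistency check (statcheck is an
# R package; this is not its code). Tolerance and examples are assumptions.
import re
from scipy import stats

def check_t_result(reported: str, tolerance: float = 0.005):
    """Parse a reported t-test result such as 't(28) = 2.20, p = .036' and
    compare the reported p-value with one recomputed from t and df."""
    match = re.match(r"t\((\d+)\)\s*=\s*([-\d.]+),\s*p\s*=\s*([\d.]+)", reported)
    if not match:
        return None
    df = float(match.group(1))
    t_value = float(match.group(2))
    reported_p = float(match.group(3))
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p-value
    return {
        "reported_p": reported_p,
        "recomputed_p": round(recomputed_p, 4),
        "consistent": abs(recomputed_p - reported_p) <= tolerance,
    }

print(check_t_result("t(28) = 2.20, p = .036"))  # expected: consistent
print(check_t_result("t(28) = 2.20, p = .010"))  # expected: flagged as inconsistent
```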

To date, the initiative is one of the largest post-publication peer review efforts of its kind. Some researchers are, however, concerned about its process for detecting potential mistakes, particularly the fact that potentially stigmatizing entries are created even if no errors are found. Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 2nd, 2016 at 11:35 am

We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.

with 4 comments

Eric Medvet

Retraction Watch readers may recall that in 2014, publishers Springer and IEEE were forced to retract more than 120 conference papers because they were all fakes, written by the devilishly clever SCIgen program and somehow published after peer review. So perhaps it was inevitable that fake computer-generated peer reviews were next.

In a chapter called “Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews,” a group of researchers at the University of Trieste “investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically.” And 30% of the time, people couldn’t tell the difference. “While a tool of this kind cannot possibly deceive any rigorous editorial procedure,” the authors conclude, “it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.”

We spoke to one of the chapter’s authors, Eric Medvet, by email.

Retraction Watch: In the paper, you test the feasibility of computer-generated fake peer reviews. Why? Read the rest of this entry »

Written by Ivan Oransky

September 2nd, 2016 at 9:30 am