Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about peer review’ Category

Dear Peer Reviewer: Could you also replicate the experiments? Thanks

with 14 comments

via the University of St Andrews

As if peer reviewers weren’t overburdened enough, imagine if journals asked them to also independently replicate the experiments they were reviewing. True, replication is a big problem — and always has been. At the November 2016 SpotOn conference in London, UK, historian Noah Moxham of the University of St Andrews in Scotland mentioned that, in the past, some peer reviewers did replicate experiments. We asked him to expand on the phenomenon here.

Retraction Watch: During what periods in history did peer reviewers repeat experiments? And how common was the practice? Read the rest of this entry »

Written by Dalmeet Singh Chawla

January 9th, 2017 at 11:00 am

“My time and energy were stolen:” Peer reviewer reacts to retraction

with 5 comments

Martha Alibali

When a former Stanford psychology researcher lost her fifth paper last year due to unreliable results, one researcher took particular notice: Martha Alibali at the University of Wisconsin-Madison. Why? She had reviewed the 2006 paper, and took to social media to express her dismay that the time and effort she had spent on the review had been wasted. We spoke with Alibali further about her reactions to the news.

Retraction Watch: You reviewed the paper more than 10 years ago. Can you recall what you thought about it? In retrospect, were there any red flags or doubts you had about the findings that you wish you’d caught?

Read the rest of this entry »

Written by Alison McCook

January 3rd, 2017 at 9:30 am

“Bats are really cool animals!” How a 7-year-old published a paper in a journal

with 7 comments

Alexandre Martin

The scientific literature has seen its share of child prodigies – such as a nine-year-old who published a study in JAMA, and a group of eight-year-olds who reported on bumblebees in Biology Letters. But Alexandre Martin of the University of Kentucky sought to help his seven-year-old son get published in a non-traditional way – by submitting his school report to a journal on Jeffrey Beall’s predatory list, the (now-defunct) International Journal of Comprehensive Research in Biological Sciences. They recount the story in a recent paper in Learned Publishing, giving young Martin his first taste of academic publishing, and helping his father expose its flaws.

Retraction Watch: As part of your experiment, you reformatted a booklet written by your seven-year-old about bats. In an excerpt in your paper, one line says “Bats are really cool animals!” The entire paper was only 153 words, according to The Times Higher Education. Did you think the paper would be accepted by the journal? Read the rest of this entry »

Written by Alison McCook

October 18th, 2016 at 11:30 am

Reviewers may rate papers differently when blinded to authors’ identities, new study says

with 8 comments

Kanu Okike

Although previous research has suggested peer reviewers are not influenced by knowing the authors’ identity and affiliation, a new Research Letter published today in JAMA suggests otherwise. In “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike at Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript, which described a prospective study about communication and safety during surgery and included five “subtle errors,” and submitted it to Clinical Orthopaedics and Related Research (CORR). Sixty-two experts reviewed the paper under the typical “single-blind” system, where they are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.

Retraction Watch: You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions. But the difference was relatively small. Did anything about this surprise you? Read the rest of this entry »

Written by Alison McCook

September 27th, 2016 at 11:00 am

Would peer review work better if reviewers talked to each other?

with 15 comments

Katherine Brown

Would distributing all reviewers’ reports for a specific paper amongst every referee, before deciding whether to accept or reject a manuscript, make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development as part of its attempt to improve the peer-review process. Katherine Brown, executive editor of Development, based in Cambridge, UK, who co-authored a recent editorial about the change, spoke to us about the move.

Retraction Watch: Many journals share the reviews of a particular paper with those who’ve reviewed it. What is cross-referee commenting in peer review and how is it different from current reviewing processes? Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 21st, 2016 at 9:30 am

Here’s why more than 50,000 psychology studies are about to have PubPeer entries

with 16 comments

PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in academic literature.

The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in the present data have yet to be explored, previous research by Chris Hartgerink, the researcher behind the initiative, suggests that around half of psychology papers have at least one statistical error, and one in eight have mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted by email.
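For a rough sense of how such an automated check can work, here is a minimal Python sketch of the general idea, not the actual statcheck R package: it parses an APA-style t-test report, recomputes the two-tailed p-value from the reported test statistic and degrees of freedom, and flags a mismatch with the reported p-value. The regular expression, tolerance, and example sentence are illustrative assumptions.

```python
# Illustrative sketch of a statcheck-style consistency check (not the real
# statcheck R package): recompute p from a reported t statistic and compare
# it with the p-value the authors reported.
import re

from scipy import stats

# Matches reports like "t(28) = 2.20, p = .03" or "t(28) = 2.20, p < .05".
APA_T = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*([<=>])\s*(\.\d+)")


def check_t_reports(text, tolerance=0.005):
    """Return (reported string, recomputed p, consistent?) for each t-test found."""
    findings = []
    for df, t_val, relation, p_rep in APA_T.findall(text):
        p_recomputed = 2 * stats.t.sf(abs(float(t_val)), int(df))  # two-tailed
        p_reported = float(p_rep)
        if relation == "=":
            consistent = abs(p_recomputed - p_reported) <= tolerance
        elif relation == "<":
            consistent = p_recomputed < p_reported
        else:  # ">"
            consistent = p_recomputed > p_reported
        findings.append((f"t({df}) = {t_val}, p {relation} {p_rep}",
                         round(p_recomputed, 4), consistent))
    return findings


# The recomputed p (about .036) does not match the reported p = .01, so it is flagged.
print(check_t_reports("The effect was significant, t(28) = 2.20, p = .01."))
```

The real statcheck covers more test types (t, F, r, chi-square, and Z) and uses rounding-aware rules rather than a fixed tolerance; the sketch above handles only the simplest case.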

To date, the initiative is one of the biggest post-publication peer review efforts of its kind. Some researchers are, however, concerned about its process for detecting potential mistakes, particularly the fact that potentially stigmatizing entries are created even if no errors are found. Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 2nd, 2016 at 11:35 am

We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.

with 4 comments

Eric Medvet

Retraction Watch readers may recall that in 2014, publishers Springer and IEEE were forced to retract more than 120 conference proceedings because the papers were all fakes, written by the devilishly clever SCIgen program and somehow published after peer review. So perhaps it was inevitable that fake computer-generated peer reviews were next.

In a chapter called “Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews,” a group of researchers at the University of Trieste “investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically.” And 30% of the time, people couldn’t tell the difference. “While a tool of this kind cannot possibly deceive any rigorous editorial procedure,” the authors conclude, “it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.”

We spoke to one of the chapter’s authors, Eric Medvet, by email.

Retraction Watch: In the paper, you test the feasibility of computer-generated fake peer reviews. Why? Read the rest of this entry »

Written by Ivan Oransky

September 2nd, 2016 at 9:30 am

Should systematic reviewers report suspected misconduct?

with 7 comments

BMJ Open

Authors of systematic review articles sometimes overlook misconduct and conflicts of interest present in the research they are analyzing, according to a recent study published in BMJ Open.

During the study, researchers reviewed 118 systematic reviews published in 2013 in four high-profile medical journals — Annals of Internal Medicine, the British Medical Journal, The Journal of the American Medical Association and The Lancet. They also contacted the review authors with additional questions; 80 (69%) responded. The analysis examined whether the review authors had followed certain procedures to ensure the integrity of the data they were compiling, such as checking for duplicate publications and analyzing whether conflicts of interest in the original studies may have affected the findings.

Carrying out a systematic review involves collecting and critically analyzing multiple studies in the same area. It’s especially useful for accumulating and weighing conflicting or supporting evidence from multiple research groups. A byproduct of the process is that it can also help spot questionable practices such as duplicate publication. Read the rest of this entry »

Written by Dalmeet Singh Chawla

August 16th, 2016 at 11:30 am

We’re blinded by positive results. So what if we removed them?

with 4 comments

Mike Findley

The problem of publication bias — giving higher marks to a paper that reports positive results rather than judging it on its design or methods — plagues the scientific literature. So if reviewers are too focused on the results of a paper, would stripping a paper of its findings solve the problem? That was the question explored in a recent experiment by guest editors of Comparative Political Studies. Mike Findley, an associate professor at the University of Texas at Austin and one of the guest editors of the journal, talked to us about a new paper explaining what they learned.

Retraction Watch: Can you explain what a “results-free” paper looks and reads like? Read the rest of this entry »

Written by Dalmeet Singh Chawla

August 15th, 2016 at 2:00 pm

Recognize “gotcha” peer reviews? This editor can

with 7 comments

Neil Herndon

Ever read a review where the editor or reviewer seems to be specifically looking for reasons to reject a paper? Neil Herndon of the South China University of Technology in Guangzhou, editor-in-chief of the Journal of Marketing Channels, has. In a recent editorial, Herndon calls this type of review “gotcha” peer reviewing, and presents an alternative.

Retraction Watch: What is “gotcha” reviewing?  What is its purpose and who is practicing it for the most part? Read the rest of this entry »

Written by Dalmeet Singh Chawla

July 28th, 2016 at 2:00 pm