Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about peer review’ Category

Here’s why more than 50,000 psychology studies are about to have PubPeer entries

with 16 comments

PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in the academic literature.

The detection process uses the algorithm "statcheck" — which we've covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in the present data, compiled by Tilburg University researcher Chris Hartgerink, have yet to be explored, his previous research suggests that around half of psychology papers contain at least one statistical error, and one in eight contain mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted by email.
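
For readers curious what such an automated check involves, here is a minimal Python sketch of the underlying idea. Statcheck itself is an R package, and its real extraction and rounding rules are more sophisticated than this illustration; the sketch simply recomputes the p-value implied by a reported t-test and compares it to the reported p.

```python
# Minimal sketch of the kind of check statcheck performs. This is NOT the
# actual tool (statcheck is an R package with more careful rounding rules);
# it only illustrates recomputing a p-value from a reported test statistic.
import re
from scipy import stats

APA_T = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*([<=])\s*(\.\d+)")

def check_t_result(report, tol=0.0005, alpha=0.05):
    """Recompute the two-sided p-value for an APA-style t-test report and
    flag inconsistencies between the reported and recomputed values."""
    m = APA_T.search(report)
    if m is None:
        return None
    df = int(m.group(1))
    t_val = float(m.group(2))
    relation = m.group(3)
    p_reported = float(m.group(4))
    p_computed = 2 * stats.t.sf(abs(t_val), df)  # two-sided p from |t| and df
    if relation == "<":
        consistent = p_computed < p_reported
    else:
        consistent = abs(p_computed - p_reported) <= tol
    # A "gross" error is one that flips the significance decision at alpha.
    decision_error = (not consistent) and ((p_reported <= alpha) != (p_computed <= alpha))
    return {"p_reported": p_reported,
            "p_computed": round(p_computed, 4),
            "consistent": consistent,
            "decision_error": decision_error}

# With df = 28 and t = 2.20, the recomputed two-sided p is roughly .036, so a
# reported "p = .03" is flagged as inconsistent, though it does not change
# the significance decision at the .05 level.
print(check_t_result("t(28) = 2.20, p = .03"))
```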

To date, the initiative is one of the biggest post-publication peer review efforts of its kind. Some researchers, however, are concerned about its process for flagging potential mistakes, particularly the fact that potentially stigmatizing entries are created even if no errors are found. Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 2nd, 2016 at 11:35 am

We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.

with 4 comments

Eric Medvet

Retraction Watch readers may recall that in 2014, publishers Springer and IEEE were forced to retract more than 120 conference papers because they were all fakes, written by the devilishly clever SCIgen program and somehow published after peer review. So perhaps it was inevitable that fake computer-generated peer reviews were next.

In a chapter called “Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews,” a group of researchers at the University of Trieste “investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically.” And 30% of the time, people couldn’t tell the difference. “While a tool of this kind cannot possibly deceive any rigorous editorial procedure,” the authors conclude, “it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.”

We spoke to one of the chapter’s authors, Eric Medvet, by email.

Retraction Watch: In the paper, you test the feasibility of computer-generated fake peer reviews. Why? Read the rest of this entry »

Written by Ivan Oransky

September 2nd, 2016 at 9:30 am

Should systematic reviewers report suspected misconduct?

with 7 comments

BMJ Open

Authors of systematic review articles sometimes overlook misconduct and conflicts of interest present in the research they are analyzing, according to a recent study published in BMJ Open.

During the study, researchers reviewed 118 systematic reviews published in 2013 in four high-profile medical journals — Annals of Internal Medicine, the British Medical Journal, The Journal of the American Medical Association and The Lancet. They also contacted the reviews' authors with follow-up questions; 80 (69%) responded. The review assessed whether the authors had followed certain procedures to ensure the integrity of the data they were compiling, such as checking for duplicate publications and analyzing whether the authors' conflicts of interest may have affected the findings.

Carrying out a systematic review involves collecting and critically analyzing multiple studies in the same area. It's especially useful for accumulating and weighing conflicting or supporting evidence from multiple research groups. A byproduct of the process is that it can also help spot odd practices such as duplicate publication. Read the rest of this entry »

Written by Dalmeet Singh Chawla

August 16th, 2016 at 11:30 am

We’re blinded by positive results. So what if we removed them?

with 4 comments

Mike Findley

The problem of publication bias — giving higher marks to a paper that reports positive results rather than judging it on its design or methods — plagues the scientific literature. So if reviewers are too focused on the results of a paper, would stripping a paper of its findings solve the problem? That was the question explored in a recent experiment by guest editors of Comparative Political Studies. Mike Findley, an associate professor at the University of Texas at Austin and one of the guest editors of the journal, talked to us about a new paper explaining what they learned.

Retraction Watch: Can you explain what a “results-free” paper looks and reads like? Read the rest of this entry »

Written by Dalmeet Singh Chawla

August 15th, 2016 at 2:00 pm

Recognize “gotcha” peer reviews? This editor can

with 7 comments

Neil Herndon

Ever read a review where the editor or reviewer seems to be looking specifically for reasons to reject a paper? Neil Herndon, editor-in-chief of the Journal of Marketing Channels and based at the South China University of Technology in Guangzhou, has. In a recent editorial, Herndon calls this type of review "gotcha" peer reviewing, and presents an alternative.

Retraction Watch: What is “gotcha” reviewing?  What is its purpose and who is practicing it for the most part? Read the rest of this entry »

Written by Dalmeet Singh Chawla

July 28th, 2016 at 2:00 pm

From annoying to bitter, here are the six types of peer reviewers

with 13 comments

Urban Geography

After two decades of submitting papers to journals, and more than 10 years of serving on an editorial board or editing journals, geography researcher Kevin Ward knows a thing or two about peer review.

Recently, as the editor of Urban Geography, he received a particularly "grumpy" and "obnoxious" review in his inbox, which got him thinking. Although, he says, the review raised "professionally appropriate issues," it went well beyond widely accepted norms of content and tone. Ward therefore decided to reflect on his two decades of experience and catalog the different types of reviewers and their characteristics.

In all, Ward — from the University of Manchester in the UK — says he’s encountered six types of referees.

Here’s the first, according to his recent editorial published in Urban Geography: 

Read the rest of this entry »

Written by Dalmeet Singh Chawla

July 25th, 2016 at 9:30 am

Do publishers add value? Maybe little, suggests preprint study of preprints

with 18 comments

ArXiv

Academic publishers argue they add value by coordinating the peer-review process and editing manuscripts — but a new preliminary study suggests otherwise.

The study — which has yet to be peer reviewed — found that papers published in traditional journals don't change much from their preprint versions, suggesting publishers aren't having as much of an influence as they claim. However, two experts who reviewed the paper for us said they have some doubts about the methods, because it uses "crude" metrics to compare preprints to final manuscripts, and because some preprints are updated over time to incorporate changes from peer reviewers and the journal.

The paper, posted recently on ArXiv, compared the text of over 12,000 preprint papers posted on ArXiv from February 2015 with their corresponding papers published in journals after peer review.
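
For a sense of what a "crude" comparison metric might look like, here is a purely illustrative Python snippet; it is not the study's actual method, just the kind of simple text-similarity ratio the critics have in mind.

```python
# Purely illustrative: a deliberately "crude" text-similarity measure between
# a preprint and its published version. This is NOT the study's actual method;
# such ratios can look high even when editors made substantive changes.
from difflib import SequenceMatcher

def crude_similarity(preprint_text: str, published_text: str) -> float:
    """Return a 0-1 ratio of matching character runs between the two texts."""
    return SequenceMatcher(None, preprint_text, published_text).ratio()

preprint = "We find a significant effect of X on Y (p = .03)."
published = "We find a statistically significant effect of X on Y (p = .03)."
print(round(crude_similarity(preprint, published), 3))  # close to 1.0: the texts barely differ
```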

The authors report in their paper, “Comparing published scientific journal articles to their pre-print versions:” Read the rest of this entry »

Written by Dalmeet Singh Chawla

June 24th, 2016 at 8:30 am

Do interventions to reduce misconduct actually work? Maybe not, says new report

with 14 comments

Elizabeth Wager and Ana Marusic

Can we teach good behavior in the lab? That's the premise behind a number of interventions aimed at improving research integrity, which universities across the world, and even private companies, have invested in. Trouble is, a new review from the Cochrane Library finds there is little good evidence that these interventions work. We spoke with authors Elizabeth Wager (who sits on the board of directors of our parent organization) and Ana Marusic, of the University of Split School of Medicine in Croatia.

Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) that included more than 9500 participants, you saw there was some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge? Read the rest of this entry »

Written by Alison McCook

April 12th, 2016 at 2:00 pm

What if we tried to replicate papers before they’re published?

with 12 comments

Martin Schweinsberg

Eric Uhlmann

We all know replicability is a problem – consistently, many papers in various fields fail to replicate when put to the test. But instead of testing findings after they’ve gone through the rigorous and laborious process of publication, why not verify them beforehand, so that only replicable findings make their way into the literature? That is the principle behind a recent initiative called The Pipeline Project (covered in The Atlantic today), in which 25 labs checked 10 unpublished studies from the lab of one researcher in social psychology. We spoke with that researcher, Eric Uhlmann (also last author on the paper), and first author Martin Schweinsberg, both based at INSEAD.

Retraction Watch: What made you decide to embark upon this project? Read the rest of this entry »

Written by Alison McCook

March 31st, 2016 at 2:00 pm

“Evidence-based medicine has been hijacked:” A confession from John Ioannidis

with 22 comments

John Ioannidis

John Ioannidis is perhaps best known for a 2005 paper "Why Most Published Research Findings Are False." One of the most highly cited researchers in the world, Ioannidis, a professor at Stanford, has built a career in the field of meta-research. Earlier this month, he published a heartfelt and provocative essay in the Journal of Clinical Epidemiology titled "Evidence-Based Medicine Has Been Hijacked: A Report to David Sackett." In it, he carries on a conversation begun in 2004 with Sackett, who died last May and was widely considered the father of evidence-based medicine. We asked Ioannidis to expand on his comments in the essay, including why he believes he is a "failure."

Retraction Watch: You write that as evidence-based medicine “became more influential, it was also hijacked to serve agendas different from what it originally aimed for.” Can you elaborate? Read the rest of this entry »

Written by Ivan Oransky

March 16th, 2016 at 2:00 pm