How should journals update papers when new findings come out?


When authors get new data that revise a previous report, what should they do?

In the case of a 2015 lung cancer drug study in the New England Journal of Medicine (NEJM), the journal published a letter to the editor with the updated findings.

Shortly after the paper was published, a pharmaceutical company released new data showing the drug wasn’t quite as effective as it had seemed. Once the authors included the new data in their analysis, they adjusted their original response rate of 59% — hailed as one of a few “encouraging results” in an NEJM editorial at the time of publication — to 45%, as they write in the letter. One of the authors told us they published the 2015 paper using less “mature” data because the drug’s benefits appeared so promising, raising questions about when to publish “exciting but still evolving data.”

It’s not a correction, as the original paper has not been changed; it doesn’t even contain a flag that it’s been updated. But among the online letters about the paper is one from the authors, “Update to Rociletinib Data with the RECIST Confirmed Response Rate,” which provides the new data and backstory:

Continue reading How should journals update papers when new findings come out?

Duplicated data gets corrected — not retracted — by psych journal

Psychological Science

A psychology journal is correcting a paper for reusing data. The editor told us the paper is a “piecemeal publication,” not a duplicate, and is distinct enough from the previous article that it is not “grounds for retraction.”

The authors tracked the health and mood of 65 patients over nine weeks. In one paper, they concluded that measures of physical well being and psychosocial well being positively predict one another; in the other (the now corrected paper), they concluded that health and mood (along with positive emotions) influence each other in a self-sustaining dynamic.

As a press release for the now-corrected paper put it: Continue reading Duplicated data gets corrected — not retracted — by psych journal

Heart researcher faked 70+ experiments, 100+ images

A former researcher at the University of Michigan and the University of Chicago faked dozens of experiments and images over the course of six years, according to a new finding from the Office of Research Integrity (ORI).

Ricky Malhotra, who studied heart cells, admitted to committing misconduct at both institutions, the ORI said in its report of the case. The fakery involved three National Institutes of Health (NIH) grant applications, one NIH progress report, one paper, seven presentations, and one image file. “Despite an investigation at the University of Michigan, where Malhotra was from 2005-2006, he continued this falsification at [University of Chicago], after the [University of Michigan] research misconduct investigation was completed,” according to the ORI. The agency found that he Continue reading Heart researcher faked 70+ experiments, 100+ images

Context matters when replicating experiments, argues study

Background factors such as culture, location, population, or time of day affect the success rates of replication experiments, a new study suggests.

The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 of 100 experiments lived up to their original claims. The authors conclude that more “contextually sensitive” papers — those whose results are more likely to depend on background factors — are slightly less likely to be reproduced successfully.

They summarize their results in the paper:

Continue reading Context matters when replicating experiments, argues study

Editors say they missed “fairly obvious clues” of third party tampering, publish fake peer reviews


The editors of a journal that recently retracted a paper after the peer-review process was “compromised” have published the fake reviews, along with additional details about the case.

In the editorial, titled “Organised crime against the academic peer review system,” Adam Cohen and other editors at the British Journal of Clinical Pharmacology say they missed “several fairly obvious clues that should have set alarm bells ringing.” For instance, the glowing reviews from supposed high-profile researchers at Ivy League institutions were returned within a few days, were riddled with grammar problems, and came from reviewers with no previous publications.

The case is one of many we’ve seen recently in which papers are pulled due to the actions of a third party.

The paper was submitted on August 5, 2015. From the beginning, the timing was suspect, note Cohen — director of the Centre for Human Drug Research in the Netherlands — and his colleagues: Continue reading Editors say they missed “fairly obvious clues” of third party tampering, publish fake peer reviews

Authors retract non-reproducible Cell paper

Authors have retracted a paper from Cell after they were unable to reproduce data in two figures, compromising their confidence in some of the findings.

The authors revisited their experiments — on proteins that may play a role in lung cancer — after another lab was unable to replicate their data.

The first author told Nature News in 2013 that the paper may have helped her secure her current position at the Novartis Institutes for Biomedical Research in Massachusetts.

Pulling “Cytohesins are cytoplasmic ErbB receptor activators” appears to be a case of doing the right thing, given the detailed retraction notice:

Continue reading Authors retract non-reproducible Cell paper

Structural biology corrections highlight best of the scientific process

If you need evidence of the value of transparency in science, check out a pair of recent corrections in the structural biology literature.

This past August, researchers led by Qiu-Xing Jiang at the University of Texas Southwestern Medical Center corrected their study of prion-like protein aggregates called MAVS filaments, first published in February 2014 in eLife, to which they had ascribed the incorrect “helical symmetry.” In March, Richard Blumberg of Harvard Medical School and colleagues corrected their 2014 Nature study of a protein complex called CEACAM1/TIM-3, whose structure they had attempted to solve using X-ray crystallography.

In both cases, external researchers were able to download and reanalyze the authors’ own data from public data repositories, making it quickly apparent what had gone wrong and how it needed to be fixed — highlighting the very best of a scientific process that is supposed to be self-correcting and collaborative. Continue reading Structural biology corrections highlight best of the scientific process

Software glitch — not intentional manipulation — sunk immunology paper, says author

A black box appears over the control lane on the left

New evidence suggests a retracted paper was felled not by intentional manipulation — as it first appeared — but by a software glitch.

In 2014, we reported that the Biochemical Journal had retracted a paper on suspicion that it contained “shoddy Photoshopping” — someone appeared to have blacked out a control lane in one figure. Now there’s evidence it wasn’t done on purpose: an investigation at Duke into eight papers, including the Biochemical Journal paper, found no evidence of misconduct, and lead author Paul Kuo, currently chair of surgery at Loyola Medicine, told us that a software glitch caused the black box. Nevertheless, the journal does not plan to un-retract the paper. Continue reading Software glitch — not intentional manipulation — sunk immunology paper, says author

Retractions aren’t enough: Why science has bigger problems

Andrew Gelman

Scientific fraud isn’t what keeps Andrew Gelman, a professor of statistics at Columbia University in New York, up at night. Rather, it’s the sheer number of unreliable studies — uncorrected, unretracted — littering the literature. He tells us more, below.

Whatever the vast majority of retractions are, they’re a tiny fraction of the number of papers that are just wrong — by which I mean they present no good empirical evidence for their claims.

I’ve personally had to correct two of my published articles. Continue reading Retractions aren’t enough: Why science has bigger problems

Nature fixes highly cited paper suggesting food additives hurt the gut

A 2015 study about dietary emulsifiers has been corrected by Nature after another researcher pointed out a few ambiguities.

When it first appeared, the study — which showed emulsifiers cause inflammation in the guts of mice — received a fair amount of media attention, including from Nature’s own news department. But after publication, a researcher noted some imprecision around the ages of the mice in the sample, which affected the paper’s calculations of weight gain over time. Andrew Gewirtz, a co-author of the study from Georgia State University, told us the change did not affect the paper’s conclusions.

Here’s the corrigendum for “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome”: Continue reading Nature fixes highly cited paper suggesting food additives hurt the gut