Heart researcher faked 70+ experiments, 100+ images

A former researcher at the University of Michigan and the University of Chicago faked dozens of experiments and images over the course of six years, according to a new finding from the Office of Research Integrity (ORI).

Ricky Malhotra, who studied heart cells, admitted to committing misconduct at both institutions, the ORI said in its report on the case. The fakery involved three National Institutes of Health (NIH) grant applications, one NIH progress report, one paper, seven presentations, and one image file. "Despite an investigation at the University of Michigan, where Malhotra was from 2005-2006, he continued this falsification at [University of Chicago], after the [University of Michigan] research misconduct investigation was completed," according to the ORI.

Researcher who sued to stop retractions gets his sixth

Mario Saad

A sixth retraction has appeared for a diabetes researcher who previously sued a publisher to try to stop his papers from being retracted.

Mario Saad's latest retraction, in PLOS Biology, stems from inadvertent duplications, according to the authors. Though an investigation at Saad's institution — the University of Campinas in Brazil — found no evidence of misconduct, a critic of the paper told The Scientist he does not believe the issues with the blots were inadvertent.

Previously, Saad sued the American Diabetes Association to remove four expressions of concern from his papers; they were later retracted, even though Unicamp recommended keeping three of them published.

The new retraction notice is for "Gut Microbiota Is a Key Modulator of Insulin Resistance in TLR 2 Knockout Mice."

Context matters when replicating experiments, argues study

Background factors such as culture, location, population, or time of day affect the success rates of replication experiments, a new study suggests.

The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 of 100 experiments lived up to their original claims. The authors conclude that more "contextually sensitive" papers — those whose background factors are more likely to affect their replicability — are slightly less likely to be reproduced successfully.

They summarize their results in the paper.

Editors say they missed “fairly obvious clues” of third party tampering, publish fake peer reviews


The editors of a journal that recently retracted a paper after the peer-review process was “compromised” have published the fake reviews, along with additional details about the case.

In an editorial titled "Organised crime against the academic peer review system," Adam Cohen and other editors at the British Journal of Clinical Pharmacology say they missed "several fairly obvious clues that should have set alarm bells ringing." For instance, the glowing reviews from supposed high-profile researchers at Ivy League institutions were returned within a few days, were riddled with grammar problems, and came from reviewers with no previous publications.

The case is one of many we've seen recently in which papers are pulled due to the actions of a third party.

The paper was submitted on August 5, 2015. From the beginning, the timing was suspect, note Cohen — director of the Centre for Human Drug Research in the Netherlands — and his colleagues.

PLOS editors discussing authors’ decision to remove chronic fatigue syndrome data

After PLOS ONE allowed authors to remove a dataset from a paper on chronic fatigue syndrome, the editors are now “discussing the matter” with the researchers, given the journal’s requirements about data availability.

As Leonid Schneider reported earlier today, the 2015 paper was corrected May 18 to remove an entire dataset; the authors note that they were not allowed to publish anonymized patient data, but can release it to researchers upon request. The journal, however, requires that authors make their data fully available.

The journal has published a correction notice.

Software glitch — not intentional manipulation — sunk immunology paper, says author

A black box appears over the control lane on the left

New evidence suggests a retracted paper was felled not by intentional manipulation — as it first appeared — but by a software glitch.

In 2014, we reported that Biochemical Journal had retracted a paper on suspicion that it contained "shoddy Photoshopping" — someone appeared to have blacked out a control lane in one figure. Now there's evidence that it wasn't done on purpose: an investigation at Duke into eight papers, including the Biochemical Journal paper, found no evidence of misconduct, and lead author Paul Kuo, currently chair of surgery at Loyola Medicine, told us that a glitch in the software caused the black box. Nevertheless, the journal does not plan to un-retract the paper.

Retractions aren’t enough: Why science has bigger problems

Andrew Gelman

Scientific fraud isn’t what keeps Andrew Gelman, a professor of statistics at Columbia University in New York, up at night. Rather, it’s the sheer number of unreliable studies — uncorrected, unretracted — that have littered the literature. He tells us more, below.

Whatever the vast majority of retractions are, they’re a tiny fraction of the number of papers that are just wrong — by which I mean they present no good empirical evidence for their claims.

I've personally had to correct two of my published articles.

Journal pulls parasite paper over potential for patient harm

A journal has retracted a paper about a molecular diagnosis for leishmaniasis out of concern it could lead to incorrect clinical diagnoses.

According to Parasitology Research, all data behind the figures in the main manuscript and supporting information are correct, but the authors’ misinterpretation of the data could lead doctors to diagnose patients incorrectly. 

The retraction notice tells us a bit more about the nature of the problem.

Nature fixes highly cited paper suggesting food additives hurt the gut

A 2015 study about dietary emulsifiers has been corrected by Nature after another researcher pointed out a few ambiguities.

When it first appeared, the study — which showed emulsifiers cause inflammation in the guts of mice — received a fair amount of media attention, including from Nature’s own news department. But since publication, a researcher noted some imprecision around the ages of mice used in the sample, affecting the paper’s calculations of weight gain over time. Andrew Gewirtz, co-author of the study from Georgia State University, told us the change did not affect the conclusions of the paper.

The corrigendum is for "Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome."

In precedent break, BMJ explains why it rejected controversial “weekend effect” paper

After the reviewer of a rejected paper was publicly outed, the BMJ has taken the unusual step of explaining why it chose not to publish the paper.

The paper — eventually published in another journal — raised hackles for suggesting that there is no “weekend effect,” or a higher mortality rate in hospitals on Saturday and Sunday. This caught the attention of UK policy makers, who have proposed changing policies to compensate for any supposed “weekend effect.”

Amidst the heated discussion about the research, one of the reviewers was identified, along with suggestions that he may have been conflicted because he had published a study showing the opposite finding. Yesterday, the BMJ posted a blog explaining that it was the editors — and not a sole reviewer — who decided to reject the paper.