“We were devastated:” Authors retract paper after realizing they had used the wrong mice

Raymond Pasek and Maureen Gannon

Longtime readers of Retraction Watch may recall a 2011 post about a research team that retracted a paper after realizing that they had ordered the wrong mice. Maureen Gannon and Raymond Pasek of Vanderbilt University contacted us earlier this week to alert us to a similar case: Their retraction, earlier this month, of a 2016 paper from American Journal of Physiology – Endocrinology and Metabolism after discovering that “a colleague from another lab had mistakenly supplied us with the wrong transgenic mouse line.”

“We strongly believe that sharing this example will encourage other researchers to do the right thing when a mistake is discovered and promote academic integrity,” they wrote. So we asked them to answer a few questions about their experience with “Connective tissue growth factor is critical for proper β-cell function and pregnancy-induced β-cell hyperplasia in adult mice,” a paper that has been cited twice, according to Clarivate Analytics’ Web of Science.

Retraction Watch: How, and when, did you become aware of the error? Continue reading “We were devastated:” Authors retract paper after realizing they had used the wrong mice

Weekend reads: The ‘Journal Grand Master,’ what drives online attention to studies; a song of replication

The week at Retraction Watch featured a story of unintended consequences and a broken relationship, and a retraction for a paper that had just about everything wrong with it. Here’s what was happening elsewhere: Continue reading Weekend reads: The ‘Journal Grand Master,’ what drives online attention to studies; a song of replication

Weekend reads: Why a vice-chancellor uses Impact Factors; plagiarizing principals; time to publish less?

The week at Retraction Watch featured the tale of a scientist whose explanations for misconduct kept changing, and revelations in a big legal case involving Duke University. Here’s what was happening elsewhere: Continue reading Weekend reads: Why a vice-chancellor uses Impact Factors; plagiarizing principals; time to publish less?

Weekend reads: Death penalty for scientific fraud?; Why criticism is good; Cash for publishing

The week at Retraction Watch featured revelations about a case of misconduct at the University of Colorado Denver, and the case of a do-over that led to a retraction. Here’s what was happening elsewhere:

Continue reading Weekend reads: Death penalty for scientific fraud?; Why criticism is good; Cash for publishing

Weekend reads: Science’s citation problem; researcher rehab; a strange new journal

The week at Retraction Watch featured the resignation of a researcher found to have fudged data in a study of Crossfit, and allegations of bullying by a scientist who wouldn’t let a trainee publish a paper. Here’s what was happening elsewhere: Continue reading Weekend reads: Science’s citation problem; researcher rehab; a strange new journal

Quick: What does fish food have to do with X-rays? In this case, an Elsevier production error

An MRI of a fish, not involved in this study. (via Wikimedia)

In 2012, a study claiming to show — after some intentional statistical tricks — that a dead salmon had brain activity in an fMRI won a prestigious (and hilarious) Ig Nobel Prize.

So five years later, when Bálint Botz tweeted wryly about a study of fish and plants in a radiology journal, we thought, “Aha, someone is trying to create another red herring!”

But alas, it turns out the reason a journal normally concerned with X-rays would suddenly be interested in aquaponics was far more prosaic: Continue reading Quick: What does fish food have to do with X-rays? In this case, an Elsevier production error

Weekend reads: A demand for a CRISPR paper retraction; a weak data-sharing policy; can we trust journals?

The week at Retraction Watch featured a study suggesting that 2% of studies in eight medical journals contained suspect data, and the announcement of a retraction on a professor’s blog. Here’s what was happening elsewhere: Continue reading Weekend reads: A demand for a CRISPR paper retraction; a weak data-sharing policy; can we trust journals?

Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study

A sweeping analysis of more than 5,000 papers in eight leading medical journals has found compelling evidence of suspect data in roughly 2% of randomized controlled clinical trials in those journals.

Although the analysis, by John Carlisle, an anesthetist in the United Kingdom, could not determine whether the concerning data were tainted by misconduct or sloppiness, it suggests that editors of the journals have some investigating to do. Of the 98 studies identified by the method, only 16 have already been retracted. [See update at end.]

The types of studies analyzed — randomized controlled clinical trials — are considered the gold standard of medical evidence, and tend to be the basis for drug approvals and changes in clinical practice. Carlisle, according to an editorial by John Loadsman and Tim McCulloch accompanying the new study published today in Anesthesia, Continue reading Two in 100 clinical trials in eight major journals likely contain inaccurate data: Study

Weekend reads: ‘Pile of dung’ republished; Diverging views on publishing negative results; Economists share regrets

The week at Retraction Watch featured an unusual warning from the New England Journal of Medicine, and the withdrawal of a paper over a fear of legal threats. Here’s what was happening elsewhere: Continue reading Weekend reads: ‘Pile of dung’ republished; Diverging views on publishing negative results; Economists share regrets

RAND withdraws report on child welfare reform for further analysis

Last week, Emily Putnam-Hornstein, an associate professor at the University of Southern California, was reading what seemed like a noteworthy new report from the RAND Corporation on the child welfare system. But then she realized that some of the key estimates were off. When she sent the report to some colleagues, they agreed.

Curious, Putnam-Hornstein and some of her colleagues tuned into a RAND webinar on Thursday, May 25, to discuss the report, Improving Child Welfare Outcomes: Balancing Investments in Prevention and Treatment, which had been released two days earlier. They asked the report’s lead author, Jeanne Ringel, about the numbers, and Ringel responded by saying they were on-target. (Ringel recalls acknowledging that the numbers were conservative, but that revised inputs would not change the overall results substantially.) The Pritzker Foundation, which had funded the study, also dismissed the concerns.

Ringel, however, contacted Putnam-Hornstein to suggest a phone call. The Memorial Day holiday weekend was just about underway, so the call was scheduled for Wednesday, the 31st. In the meantime, Putnam-Hornstein and other researchers drafted a letter explaining their concerns. A conference call happened on the 31st, during which the critics shared their concerns, and also said that they’d publish the letter online if the report was not retracted swiftly.

Apparently, the critics were persuasive:

Continue reading RAND withdraws report on child welfare reform for further analysis