Recently, a biostatistician sent an open letter to the editors of 10 major science journals, urging them to pay more attention to common statistical problems in papers. Specifically, Romain-Daniel Gosselin, Founder and CEO of Biotelligences, which trains researchers in biostatistics, counted how many of 10 recent papers in each of the 10 journals exhibited two common problems: omitting the sample size used in experiments, and omitting the tests used in the statistical analyses. (Short answer: Too many.) Below, we have reproduced his letter.
Despite taking some serious hits, a 2006 letter in Nature isn’t going anywhere.
Years ago, a university committee determined that two figures in the letter had been falsified. The journal chose to correct the paper, rather than retract it — and then, the next year, published a correction of that correction due to “an error in the production process.” To round it out, in June of last year, Nature published a rebuttal from a separate research group, which had failed to replicate the letter’s results.
Still, the first author told us there are no plans to retract the paper, since the follow-up experiments published in the corrections confirmed the paper’s conclusions.
In 2011, authors of a Nature letter caught some flak for issuing a lengthy correction to a neuroscience paper that had raised eyebrows within days of publication — including some suggestions it should be retracted.
The correction notice, published months after the original letter, cited errors in image choice and labeling, but asserted the conclusions remained valid.
Now, those conclusions appear up for debate. In a recent Nature Brief Communications Arising (BCA) article, a team that raised concerns about the paper five years ago says they are unable to reproduce the results. But the authors of the original paper aren’t convinced: They argue that the BCA fails to cite important evidence, has a “complete absence or low quality of analysis,” and disregards some of their data.
A prominent pancreatic cancer researcher has lost a meeting abstract and corrected a Nature paper following an institutional investigation.
Queen Mary University of London (QMUL) determined that, in an abstract by Thorsten Hagemann, “elements of the study summarised by this abstract are not reliable.” Hagemann has recently issued a correction to a 2014 Nature paper he co-authored, which also cited the QMUL investigation, noting there was “reason to question the provenance of the data.”
Authors have retracted a Nature paper which identified neurons that render flies sensitive to a potent insect repellent, after losing confidence in the findings. The first author, however, said she does not agree with the retraction, noting that she continues to believe the data are correct.
According to the notice, the remaining authors say they no longer support the claim that certain neurons in the antennae of fruit flies respond to DEET, the active ingredient in many insect repellents. The last author told us some of the paper’s results are not in doubt; nevertheless, he added, the paper would not have been published in Nature without the key conclusion, so he and most of his co-authors have pulled the paper in its entirety.
Alongside the retraction, the journal has also published a Brief Communications Arising article by scientists who were unable to reproduce the paper’s findings.
A report on the first few years of “researcher rehab” suggests that three days of intensive training have a lasting impact on participants.
Specifically, among participants — all of whom had been found guilty of at least one type of misconduct — follow-up surveys a year later indicate that the vast majority have changed how they work.
The authors claim this shows the program is worth the time and investment — a $500,000 grant from the National Institutes of Health, and a cost of $3,000 per participant for the three-day course. Do you agree? Tell us what you think in our poll at the end of the story.
If you need evidence of the value of transparency in science, check out a pair of recent corrections in the structural biology literature.
This past August, researchers led by Qiu-Xing Jiang at the University of Texas Southwestern Medical Center corrected their study, first published in February 2014 in eLife, of prion-like protein aggregates called MAVS filaments, for which they had reported an incorrect “helical symmetry.” In March, Richard Blumberg of Harvard Medical School, and colleagues, corrected their 2014 Nature study of a protein complex called CEACAM1/TIM-3, whose structure they had attempted to solve using X-ray crystallography.
A 2015 study about dietary emulsifiers has been corrected by Nature after another researcher pointed out a few ambiguities.
When it first appeared, the study — which showed emulsifiers cause inflammation in the guts of mice — received a fair amount of media attention, including from Nature’s own news department. But after publication, a researcher noted some imprecision around the ages of the mice used in the sample, which affected the paper’s calculations of weight gain over time. Andrew Gewirtz, a co-author of the study from Georgia State University, told us the change did not affect the conclusions of the paper.
When a paper is retracted, how many other papers in the same field — which either cite the finding or cite other papers that do — are affected?
That’s the question examined by a study published in BioMed Central’s new journal, Research Integrity and Peer Review. Using the case of a paper retracted from Nature in 2014, the authors found that subsequent research that cites the retracted paper often repeats the problematic finding, thereby spreading it throughout the field. However, papers that indirectly cited the retracted result — by citing the papers that cited the Nature paper, but not the Nature paper itself — typically don’t repeat the retracted result, which limits its spread.