The article, “The food industry and conflicts of interest in nutrition research: A Latin American perspective,” was published October 29 and raised concerns about the conflicts of interest that can arise when a food company partners with a public health organization. Specifically, the article critiqued the supposed relationship between the biggest beverage distributor in Guatemala and the leading Guatemala-based public health organization, which had aligned to distribute a fortified supplement for undernourished children.
Today, we’re excited to announce that our parent organization, The Center For Scientific Integrity (CSI), has partnered with The Center For Open Science (COS) to create that database on the Open Science Framework (OSF).
It’s a natural collaboration, says Retraction Watch co-founder and CSI executive director Ivan Oransky:
Look at the selection criteria for any major funding agency, and you will find it aims to support research that is “ground-breaking,” “innovative,” “high-risk,” and “at the frontiers of knowledge.”
But are these criteria delivering the best science? Think about the “reproducibility crisis,” familiar to many Retraction Watch readers: Evidence is growing that a high proportion of published research findings are not robust. This is bad news for funders; irreproducible research is a waste of money, and it actually impedes scientific progress by filling the literature with false-positive findings that, once published, never die.
A major source of irreproducibility comes from research that is funded but never reported. As I have noted previously, many researchers have a backlog of unpublished findings. All too often, they sit on a mountain of data simply because it is not the most exciting thing on their desk, and they need to be working on a new project in order to remain competitive. Negative results – e.g. where a promising treatment shows no effect, or an anticipated association between a genotype and phenotype fails to emerge – are likely to end up in the file drawer. By lingering in obscurity, they contribute to publication bias and the consequent distortion of the truth.
In October, the Academy of Medical Sciences (AMS) published a report considering reasons for irreproducibility in biomedical research and ways to overcome them. It was clear that the problem was not down to any one cause, and that a range of solutions needed to be considered — some bottom-up (such as better training of researchers), and some top-down, driven by institutions, publishers and, the focus of this post, funders.
“The Patient, a 60-years old Caucasian male found unconscious in a trailer park of gypsies…”
So begins a strange — and apparently not copyedited — new case report in the World Journal of Emergency Surgery. The paper concerns a patient — perhaps we should call him Rasputin — who showed up with a bullet in his left lung but no entry wound that would explain its presence.
Naturally, the authors draw the obvious conclusions:
Did you recently log onto your favorite journal’s website and see this? (For anyone who doesn’t want to bother clicking, it’s the video from Rick Astley’s “Never Gonna Give You Up.”) If so, your favorite journal was hijacked.
We’re pleased to present a guest post from Michèle B. Nuijten, a PhD student at Tilburg University who helped develop a program called “statcheck,” which automatically spots statistical mistakes in psychology papers, making it significantly easier to find flaws. Nuijten writes about how such a program came about, and its implications for other fields.
Readers of Retraction Watch know that the literature contains way too many errors – to a great extent, as some research suggests, in my own field of psychology. And there is evidence that the problem is only likely to get worse.
To reliably investigate these claims, we wanted to study reporting inconsistencies at a large scale. However, extracting statistical results from papers and recalculating the p-values is not only very tedious, it also takes a LOT of time.
So we created a program known as “statcheck” to do the checking for us, by automatically extracting statistics from papers and recalculating p-values. Unfortunately, we recently found that our suspicions were correct: Half of the papers in psychology contain at least one statistical reporting inconsistency, and one in eight contains an inconsistency that might have affected the statistical conclusion.
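To give a rough sense of the underlying idea, here is a minimal sketch in Python (statcheck itself is an R package): pull APA-style t-test results out of text and recompute the two-tailed p-value from the reported test statistic and degrees of freedom. The regular expression, the rounding check, and the example sentence are illustrative assumptions only, not statcheck’s actual code.

```python
import re
from scipy import stats

# Illustrative regex for APA-style t-test reports such as "t(28) = 2.20, p = .04".
APA_T = re.compile(
    r"t\((?P<df>\d+(?:\.\d+)?)\)\s*=\s*(?P<t>-?\d+\.\d+),\s*p\s*=\s*(?P<p>\.\d+)"
)

def check_t_results(text):
    """Flag reported p-values that don't match the reported t statistic and df."""
    findings = []
    for m in APA_T.finditer(text):
        df, t = float(m["df"]), float(m["t"])
        p_reported = float(m["p"])
        decimals = len(m["p"]) - 1                   # digits after the decimal point
        p_recomputed = 2 * stats.t.sf(abs(t), df)    # two-tailed p from t and df
        findings.append({
            "reported": m.group(0),
            "p_recomputed": round(p_recomputed, 4),
            # Consistent if the recomputed p rounds to the reported value.
            "consistent": round(p_recomputed, decimals) == p_reported,
        })
    return findings

text = "Condition A, t(28) = 2.20, p = .04; Condition B, t(28) = 2.20, p = .03."
for finding in check_t_results(text):
    print(finding)
```

With these made-up numbers, the first result is consistent (the recomputed p rounds to .04) while the second is flagged; the real package covers far more test types and edge cases than this toy version.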
An investigation at St. Jude Children’s Hospital into “irregularities” in a figure featured in a neuroblastoma paper has concluded that the image was fabricated. The paper, published in Surgery in 2012, was retracted on Friday.
Here’s the full retraction notice for “Liposome-encapsulated curcumin suppresses neuroblastoma growth through nuclear factor-kappa B inhibition”:
A paper published in August that caught the media’s eye for concluding that feeling sad influences how you see colors has been retracted, after the authors identified problems that undermined their findings.
The authors explain the problems in a detailed retraction note released today by Psychological Science. They note that sadness appeared to influence how people see blues and yellows but not reds and greens – but to support that conclusion, they needed to compare those two effects directly to each other, not just test each one separately. Once they performed that additional comparison, the conclusion no longer held up.
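The statistical point is a general one: a significant effect in one condition and a non-significant effect in another does not, by itself, show that the two effects differ. A small numerical illustration – the effect sizes, standard errors, and normal approximation below are made up for the example and are not the authors’ data or analysis:

```python
from math import sqrt
from scipy import stats

def two_tailed_p(z):
    """Two-tailed p-value for a z statistic."""
    return 2 * stats.norm.sf(abs(z))

# Made-up effect estimates and standard errors for two conditions,
# mimicking a "significant vs. non-significant" pattern.
effect_a, se_a = 0.40, 0.14   # e.g., the blue-yellow condition
effect_b, se_b = 0.15, 0.14   # e.g., the red-green condition

print(two_tailed_p(effect_a / se_a))   # ~0.004 -> looks "significant"
print(two_tailed_p(effect_b / se_b))   # ~0.28  -> looks "not significant"

# The claim "A matters but B doesn't" rests on the *difference* between
# the two effects, which has to be tested directly:
se_diff = sqrt(se_a**2 + se_b**2)
print(two_tailed_p((effect_a - effect_b) / se_diff))   # ~0.21 -> not significant
```

In this toy example each condition taken alone looks “significant” or “not,” yet the difference between them is not itself significant – which is why the direct comparison the authors describe was the decisive test.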
In the retraction note for “Sadness impairs color perception,” the editor reinforces that there was no foul play:
Waseda University has revoked the doctorate of the first author on the now-retracted Nature papers about a technique to create stem cells.
The technique — which purportedly provided a new way to nudge young cells from mice into pluripotency — was initially described in two 2014 Nature papers, both first-authored by Haruko Obokata. However, the papers were soon mired in controversy, corrected, then retracted later that year due to “several critical errors,” some of which were categorized by a RIKEN investigation as misconduct.