Nearly five years ago, researchers suggested that the vast majority of preclinical cancer research wouldn’t hold up to follow-up experiments, delaying much-needed treatments for patients. In a series of articles to be published tomorrow morning, eLife has released the results of the first five attempts to replicate experiments in cancer biology — and the results are decidedly mixed.
As our co-founders Adam Marcus and Ivan Oransky write in STAT, the overall take-home message was that two studies generated findings similar to the original, one did not replicate the original, and two others were inconclusive.
They quote Brian Nosek, a psychologist at the University of Virginia in Charlottesville who runs the Center for Open Science and has been leading the replication effort in his own field.
If you need evidence of the value of transparency in science, check out a pair of recent corrections in the structural biology literature.
This past August, researchers led by Qiu-Xing Jiang at the University of Texas Southwestern Medical Center corrected their study of prion-like protein aggregates called MAVS filaments, first published in February 2014 in eLife, to which they had ascribed the incorrect “helical symmetry.” In March, Richard Blumberg of Harvard Medical School and colleagues corrected their 2014 Nature study of a protein complex called CEACAM1/TIM-3, whose structure they had attempted to solve using X-ray crystallography.
In both cases, external researchers were able to download and reanalyze the authors’ own data from public repositories, making it quickly apparent what had gone wrong and how it needed to be fixed — highlighting the very best of a scientific process that is supposed to be self-correcting and collaborative.
A review of preclinical research on a now widely used cancer drug suggests the studies contain multiple methodological flaws and overestimate the drug’s benefits.
Specifically, the researchers found that most studies didn’t randomize treatments, didn’t blind investigators to which animals were receiving the drug, and tested tumors in only one animal model, which limits the applicability of the results. Importantly, they also found evidence that publication bias — keeping studies that found no evidence of benefit from the drug, sunitinib, out of the literature — may have caused a significant overestimation of its effects on various tumor types.
Together, these findings suggest the need for a set of standards for cancer researchers to follow, the journal notes in a “digest” of the paper, published Tuesday by eLife.