At last, cancer reproducibility project releases some results — and they’re mixed

Nearly five years ago, researchers suggested that the vast majority of preclinical cancer research wouldn't hold up to follow-up experiments, delaying much-needed treatments for patients. In a series of articles publishing tomorrow morning, eLife is releasing the results of the first five attempts to replicate experiments in cancer biology — and the results are decidedly mixed.

As our co-founders Adam Marcus and Ivan Oransky write in STAT, the overall take-home message was that two of the replications generated findings similar to the originals, one did not replicate the original, and two others were inconclusive.

They quote Brian Nosek, a psychologist at the University of Virginia in Charlottesville who runs the Center for Open Science and has been leading the replication effort in his own field:

Reproducibility is hard, and once you fail to reproduce something it isn’t always obvious why.

The new articles, added Nosek,

are additional evidence that we don’t quite understand this as well as we thought we did.

(Full disclosure: Retraction Watch is partnering with the Center on a database of retractions.)

This is just the first wave of results: The authors plan to perform and publish more than 20 replication efforts. And they had hoped for more, Marcus and Oransky write:

The effort isn’t cheap: Each replication took an average of nearly seven months to complete, at an average cost of about $27,000, according to data from the investigators. (Budget concerns had earlier forced the group to cut from 50 to 37 the number of trials they could re-run.)

The effort would be easier if researchers cooperated, they add:

Although many researchers the group contacted were gracious and eager to help – providing additional information about their methods – others were less forthcoming. One group took 111 days to respond; another ignored the request.

In an accompanying editorial publishing tomorrow, Nosek and Timothy Errington, the “metascience manager” at the Center for Open Science, caution that being able to replicate a result doesn’t automatically mean the result is correct — and vice versa:

Scientific claims gain credibility by accumulating evidence from multiple experiments, and a single study cannot provide conclusive evidence for or against a claim. Equally, a single replication cannot make a definitive statement about the original finding. However, the new evidence provided by a replication can increase or decrease confidence in the reproducibility of the original finding.

Indeed, as the editorial notes, one of the studies whose replication produced somewhat inconclusive results has already led to a clinical trial of an antibody therapy. Whether the therapy proves effective in patients remains to be seen.

For Adam and Ivan’s whole column, click here.

