Time for a scientific journal Reproducibility Index
Retraction Watch readers are by now more than likely familiar with the growing concerns over reproducibility in science. In response to issues in fields from cancer research to psychology, scientists have come up with programs such as the Reproducibility Initiative and the Open Science Framework.
These sorts of efforts are important experiments in ensuring that findings are robust. We think there’s another potential way to encourage reproducibility: Giving journals an incentive to publish results that hold up.
In our latest Lab Times column, we discuss a proposed “Transparency Index” to supplement the standard “Impact Factor” – a gauge of how often papers are cited by other papers, which journals use to create a hierarchy of prestige.
As we note, the “Transparency Index won’t solve the problem of bad data.” But we’d like to suggest another metric that could help: the Reproducibility Index.
Rather than rate journals on how often their articles are cited by other researchers, let’s grade them on how well those papers pass the most important test of science: does the work stand up to scrutiny?
The idea is to encourage “slow science” and careful peer review, while discouraging journals from publishing papers based on flimsy results simply because they are likely to be heavily cited. Like the Transparency Index, the Reproducibility Index could supplement the impact factor. In fact, one way to judge average reproducibility would be to calculate what percentage of citations of a given paper report successful replication versus failure to reproduce the results.
As Brenda Maddox, widow of the late Nature editor John Maddox, said of her husband:
Someone once asked him, ‘how much of what you print is wrong?’ referring to Nature. John answered immediately, ‘all of it. That’s what science is about – new knowledge constantly arriving to correct the old.’
But journals — particularly the high-impact ones — seem to need an incentive to publish such correctives. The Reproducibility Index could “take into account how often journals are willing to publish replications and negative findings,” as Science did in a case we discuss in the new column. What else might it include? As we conclude:
There are, of course, a lot of details to work out and we look forward to help on doing that from readers of Lab Times and Retraction Watch. Isn’t the reproducibility of science worth it?