By now, Retraction Watch readers may have heard about new Nobel laureate Randy Schekman’s pledge to boycott Cell, Nature, and Science — sometimes referred to as the “glamour journals” — because, he argues, they damage and distort science. Schekman has used the bully pulpit of the Nobels to spark a conversation that science dearly needs to have about the cult of the impact factor.
The argument isn’t airtight. Schekman — now editor of eLife, an open access journal — says that open access journals are a better way to go, although he doesn’t really connect mode of publishing with the quality of what’s published. Others have pointed out that the move will punish junior members of his lab while likely having no effect on the career of someone who has published dozens of studies in the three journals he’s criticizing, and has, well, won a Nobel.
All that aside, it was Schekman’s reference to retractions that, not surprisingly, caught our eye:
In extreme cases, the lure of the luxury journal can encourage the cutting of corners, and contribute to the escalating number of papers that are retracted as flawed or fraudulent. Science alone has recently retracted high-profile papers reporting cloned human embryos, links between littering and violence, and the genetic profiles of centenarians. Perhaps worse, it has not retracted claims that a microbe is able to use arsenic in its DNA instead of phosphorus, despite overwhelming scientific criticism.
Although the first sentence is an inference — one that others have also argued — everything else Schekman says in this paragraph is true. But just how many retractions have these journals had? And how does that compare to the number in the Proceedings of the National Academy of Sciences (PNAS) while one Randy Schekman was editor? Here’s the 2006-2011 data, analyzed according to Ferric Fang and Arturo Casadevall’s Retraction Index, which calculates the rate of retraction per 1,000 papers published:
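The Retraction Index itself is simple arithmetic: the number of retractions a journal issued over a period, normalized per 1,000 papers it published in that period. A minimal sketch — the figures in the example are hypothetical, not taken from the chart:

```python
def retraction_index(retractions: int, papers_published: int) -> float:
    """Retractions per 1,000 papers published (Fang & Casadevall's measure)."""
    return retractions / papers_published * 1000

# Hypothetical journal: 5 retractions out of 20,000 papers published
print(retraction_index(5, 20_000))  # → 0.25
```

The normalization matters for the comparison that follows: a journal can have a higher Retraction Index than a competitor while retracting fewer papers in absolute terms, simply because it publishes far less.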
So yes, PNAS had a lower Retraction Index than the other journals, but not really that much lower than Nature. Put another way, however, PNAS retracted 23 papers from 2006 to 2011, while Cell, Nature, and Science retracted 28. And perhaps even more important, there were 1,300 retractions in journals other than those four.
“Wait,” you’re saying, “are more retractions really a bad thing? Didn’t you just publish a post about a study that said the opposite?” Well yes, yes we did. But Schekman is suggesting retractions are a mark against a journal, which we think makes PNAS’s record of retractions fair game.
When Schekman refers to the fact that Science hasn’t retracted the arsenic life paper, he invites readers to review the history of requests for retractions at PNAS. And that doesn’t look all that different from the journals he’s criticizing.
We covered this retraction of a cheetah fossil find, for example, which took four years, three of which were during Schekman’s tenure. Along the way, PNAS wouldn’t even publish a letter of critique. Compare that with the multiple letters — and two replication attempts — Science published about the arsenic life paper.
Or take this example, chronicled by UCLA researcher Andrew Diener. Diener raised concerns about a paper in late 2006, and after an exchange of emails with the authors and the journal, managing editor Daniel Salsbury told him:
We have now received the authors’ response to your comments. A member of the Editorial Board has re-read the paper and the comments from you and the authors. I have included the authors’ reply and encourage you to contact them directly regarding any additional concerns you may have with their work. The Editorial Board has concluded that no additional action is required by the authors at this time.
Diener wasn’t satisfied with that, so he wrote the journal again. Two years later, the paper was retracted. So PNAS eventually did the right thing. But in those two years, the paper picked up ten citations.
None of this proves that PNAS under Schekman had a worse record than did Cell, Science, or Nature. It does, however, suggest the picture may be a bit more complicated than his Guardian piece let on.
In the end, though, it’s hard to disagree with Schekman’s conclusion:
Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal’s brand, that matters.
In fact, even Nature editor in chief Philip Campbell agrees.