Here’s another installment of PubPeer Selections:
- Citation, schmitation: “This paper cites two papers of mine, that happen to be completely unrelated to the topic discussed, as far as I can see.”
- “I hope this settles; practice makes perfect, and the samples would have probably been ordered better on the gel had it was run today, with the figure’s structure in mind,” says the first author of a 2009 Nature Biotechnology paper.
- “We have also contacted the journal to request that updates are provided,” write the authors of a 2014 Nature paper, “including on these methods and to clarify information which may have caused misunderstanding here.”
- A question about a 2011 paper in Cell leads to a discussion of morpholinos and CRISPR.
- “Therefore, nothing that is raised in the paper is contradicted by the studies cited, to my knowledge,” says one of the authors of a 2012 Nature paper, responding to questions.
Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.
The sad part about the first case is the statement made by the author, François-Xavier Coudert: “The editor has replied that the authors confirm their choice of citations is deliberate, and it closes the matter for them.” Maybe indirect justice will be served by the apparent figure duplication detected in the same paper by Raphaël Lévy on January 26, 2015.
The importance of paying attention to the relevance of the reference list is further reinforced by the fifth case, Atasoy et al. (2012) in Nature.
And, on the issue of Nature, I just submitted a 5-page Comment there, and got the following response from the senior editor, Dr. Patrick Goymer:
“We do not doubt the technical quality of your study or its interest to others working in this and related areas of research. However, after consideration, we are not persuaded that your findings represent a sufficiently outstanding scientific advance to justify publication in Nature.”
After reading the PubPeer entries here at RW, I just had to laugh at this rejection, which took less time to pronounce than the submission process itself.
To be perfectly honest, the e-mail does carry a confidentiality statement, which I quote here in inverted commas because I feel that debating such statements is in the interest of the wider scientific community:
“Confidentiality Statement:
This e-mail is confidential and subject to copyright. Any unauthorised use or disclosure of its contents is prohibited.”
With regard to the morpholino vs. CRISPR issue, compensatory gene expression changes have been observed in “permanent” KO animals for many years. Transient knockdown with siRNA or morpholinos is almost always a better option. Personally, I have never understood why KO animals have always been considered the gold standard for mechanistic studies. Some KO mice that I’ve worked with are so screwed up that any results from those animals should be interpreted very cautiously, or even disregarded. Take Atg5- and Atg7-deficient mice, for example: they have chronic underlying tissue injury that results in massive upregulation of Nrf2-regulated genes and other protective genes. There’s no way that data from those animals have anything to do with reality.

Not to mention that the 3D structure of DNA in one spot can affect the expression of distal genes, meaning that deleting or inserting a chunk of DNA can alter the expression of other genes that aren’t even physically close in the genome.
That said, my philosophy is that science is always in progress, and we have to draw the best possible conclusions from the best available data. If no data other than KO studies are available, then it’s OK to go with that. But KO mice will never be the be-all and end-all of science for me.
I’m indexing webpages with discussions comparing morphants and engineered mutants. Links are on my blog: http://www.gene-tools.com/content/websites-about-morphant-and-mutant-comparisons
The first PubPeer selection seems a particularly extreme case of a much broader problem: the pervasive misuse of citations. Sometimes the choice of citations seems better explained by availability bias and by the likely reviewers than by the relevance and solidity of the cited research. This underscores the false precision of bibliometric metrics.