PubPeer Selections: Odd citations, “practice makes perfect,” a Nature update

Here’s another installment of PubPeer Selections:


5 thoughts on “PubPeer Selections: Odd citations, “practice makes perfect,” a Nature update”

  1. The sad part about the first case is the statement made by the author, François-Xavier Coudert: “The editor has replied that the authors confirm their choice of citations is deliberate, and it closes the matter for them.” Maybe indirect justice will be served by the apparent figure duplication detected in the same paper by Raphaël Lévy on January 26, 2015.

    The importance of paying attention to the relevance of the reference list is further reinforced by the fifth case, Atasoy et al. (2012) in Nature.

  2. And, on the issue of Nature, I just submitted a 5-page Comment there, and got the following response from the senior editor, Dr. Patrick Goymer:
    “We do not doubt the technical quality of your study or its interest to others working in this and related areas of research. However, after consideration, we are not persuaded that your findings represent a sufficiently outstanding scientific advance to justify publication in Nature.”

    After reading the PubPeer entries here at RW, I just had to laugh at this rejection which took less time to pronounce than the submission process itself.

    To be perfectly honest, the e-mail does have a confidentiality statement which I am quoting in inverted commas, as I feel that such statements are in the interest of the wider scientific community, to spur debate:
    “Confidentiality Statement:
    This e-mail is confidential and subject to copyright. Any unauthorised use or disclosure of its contents is prohibited.”

  3. With regard to the morpholino vs. CRISPR issue, compensatory gene expression changes have been observed in “permanent” KO animals for many years. Transient knockdown with siRNA/morpholinos is almost always a better option. Personally, I have never understood why KO animals have always been considered the gold standard for mechanistic studies. Some KO mice that I’ve worked with are so screwed up that any results with those animals should be interpreted very cautiously or even disregarded. Take Atg5- and Atg7-deficient mice, for example: they have chronic underlying tissue injury that results in massive upregulation of Nrf2-regulated genes and other protective genes. There’s no way that data from those animals have anything to do with reality. Not to mention the fact that the 3D structure of DNA in one spot can affect expression of distal genes, meaning that deleting or inserting a chunk of DNA can alter expression of other genes that aren’t even physically close in the genome.

    That said, my philosophy is that science is always in progress and we have to draw the best possible conclusions from the best available data. If no other data are available than KO studies, then it’s OK to go with that. But KO mice will never be the be-all end-all of science for me.

  4. The first PubPeer selection seems a particularly extreme case of a much broader problem: the pervasive misuse of citations. Sometimes the choice of citations seems better explained by availability bias and the likely reviewers than by the relevance and solidity of the cited research. This underscores the false precision of bibliometric metrics.
