It’s not unusual to hear authors bemoan the fact that a new paper doesn’t cite their work that set the stage for a scientific advance. “The journal limited me to [a seemingly arbitrary number of] references,” authors sometimes shrug, with or without apology. This week, however, we found such a case that seems to have been resolved to everyone’s satisfaction.
The authors of a September 2013 article in Nature Communications have issued a correction for the piece, which failed to cite the source of a key step in their experiment.
The article, “Val66Met polymorphism of BDNF alters prodomain structure to induce neuronal growth cone retraction,” came from the lab of William “Clay” Bracken, a biochemist at Weill Cornell Medical College. According to the abstract:
A common single-nucleotide polymorphism (SNP) in the human brain-derived neurotrophic factor (BDNF) gene results in a Val66Met substitution in the BDNF prodomain region. This SNP is associated with alterations in memory and with enhanced risk to develop depression and anxiety disorders in humans. Here we show that the isolated BDNF prodomain is detected in the hippocampus and that it can be secreted from neurons in an activity-dependent manner. Using nuclear magnetic resonance spectroscopy and circular dichroism, we find that the prodomain is intrinsically disordered, and the Val66Met substitution induces structural changes. Surprisingly, application of Met66 (but not Val66) BDNF prodomain induces acute growth cone retraction and a decrease in Rac activity in hippocampal neurons. Expression of p75NTR and differential engagement of the Met66 prodomain to the SorCS2 receptor are required for this effect. These results identify the Met66 prodomain as a new active ligand, which modulates neuronal morphology.
Trouble was, that finding wouldn’t have been possible without the previous work of another group. As the correction explains:
The glutaraldehyde fixation method used in this Article was previously published by Dieni et al. to detect BDNF propeptide, and should have been cited in the first paragraph of the Results section as follows: ‘However, glutaraldehyde fixation of proteins to the transfer membranes following sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS–PAGE) according to Dieni et al., and the use of a BDNF prodomain-specific monoclonal antibody previously characterized10, facilitated its detection in the mouse hippocampus as a 15.5-kDa band (Fig. 1a), in agreement with the findings of Dieni et al.’.
The senior author on the Dieni paper was Yves-Alain Barde, of Cardiff University. Barde, who found the gene for BDNF, told us:
The matter in question came to my attention upon reading the publication in Nature Communications by Anastasia et al. … Not least because I have known 3 of the authors for a long time I corresponded with them to try and understand why they did not mention our previous, highly related work. In particular, the method we described in Dieni et al. 2012 (see Corrigendum) was key to the detection of the BDNF pro-peptide. This was a novel finding at the time which was then confirmed by Anastasia et al. We were also hoping that our results would help clarifying issues related to the processing of pro-BDNF which had been somewhat controversial. Not least because we spent quite some time working on this particular issue with 2 of my former colleagues, I then asked Nature Communications for a Corrigendum to be added to the Anastasia et al. publication…. The authors also sent us an apology for their unintended oversight which closed the incident as far as we are concerned.
Please find below my old letter to an editor; unfortunately, the criticized authors (Helenius, Heisenberg, Gaub, Müller) did not support a correction.
====================================================
My moderate (!) LETTER TO THE EDITOR 06/2008
The following letter of mine, which I think is really moderate, points to clear mistakes in a review by German biophysicists – mistakes that have not been corrected by Fiona Watt, Editor-in-Chief of the Journal of Cell Science.
All three of my points have been completely confirmed by independent scientific experts, but none of the criticized authors provided a publishable reply, nor did they support the publication of a “correcting addendum” to their clear mistakes:
– – – – – – – – – – –
Robert Eibl:
Single-receptor adhesion measurements on living cells
Helenius et al.(1) [Helenius, Heisenberg, Gaub HE, Müller] review the use of atomic force microscopy (AFM) for single-cell force spectroscopy (SCFS). In Table 1 of their report, they list the references for ‘receptor-ligand interactions by SCFS using living cells as probes’. These references include two papers detecting the rupture force of individual cell adhesion bonds of integrin alpha4beta1 (VLA-4) on cells to its ligand VCAM-1. Surprisingly, however, Helenius and co-workers have not included the work of Eibl and Benoit (2) in their commentary, although the findings concerning the same integrin to ligand interaction were published 5 and 18 months, respectively, before the two cited references appeared. In addition, Table 1 contains two smaller mistakes. First, the work of Thie et al.(3) is used as a reference for the specific measurement of LFA-1 (integrin alphaLbeta2) on its ligand ICAM-1, but these authors never claimed to specifically measure any cell adhesion receptor. To the contrary, they state that they could only speculate regarding the cell adhesion receptors involved, which might include several integrins and other cell adhesion receptors. This citation may also mislead readers with regard to several aspects of the technique for measuring leukocyte homing receptors with AFM at the single-molecule or single-receptor level, including the original developers of the approach and the time-frame in which it was developed. Second, Table 1 also includes a repeated typing error: concavalin A instead of concanavalin A. In my view, a detailed step-by-step protocol in this area could have been included in the review, too (4). For readers interested in an extensive overview of this topic, a book chapter is soon to be published that will also review this subject and include a similar table as well as further protocols for experiments (5).
(1) Helenius, J., Heisenberg, C.P., Gaub, H.E., Muller, D.J. (2008). Single-cell force spectroscopy. J. Cell. Sci. 121, 1785-91
(2) Eibl, R.H. and Benoit, M. (2004). Molecular resolution of cell adhesion forces. IEE – Nanobiotechnology 151, 128-132
(3) Thie M, Röspel R, Dettmann W, Benoit M, Ludwig M, Gaub HE, Denker HW (1998). Interactions between trophoblast and uterine epithelium: monitoring of adhesive forces. Hum Reprod. (11):3211-9
(4) Eibl, R.H. and Moy V.T. (2005). Atomic force microscopy measurements of protein-ligand interactions on living cells. In: Protein-Ligand Interactions. (Editor: G.Ulrich Nienhaus), Humana Press, Totowa, NJ, U.S.A., pp. 437-448 ISBN 1588293726
(5) Eibl, R.H. (in press). Direct force measurements of receptor-ligand interactions on living cells. In: Applied Scanning Probe Methods. Bhushan, B., Fuchs, H., Tomitori, M. (editors), Springer, Heidelberg
– – – – – – – – – – –
I have referred to this phenomenon as one type of “snub publishing”. This is a serious problem, in my view, and the cause is twofold: a) authors who have been slack; b) editors and “peers” who have not been thorough. You may read about my theoretical basis for this new term here:
http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)35-37.pdf
I have exemplified, using a limited set of papers in the Anthurium literature, how this applies:
http://link.springer.com/article/10.1007/s12109-014-9355-6 (happy to provide the PDF file)
The next step is now to approach editors and publishers for justice and corrective measures. Furthermore, as one way to evolve post-publication peer review (http://journal.frontiersin.org/Journal/10.3389/fpls.2013.00485/full), we desperately need similar analyses across small sub-sets of specialized literature, analyzed in minute detail.
Interestingly, the lab of one of the authors excludes from its home page several of his students who did not acknowledge my pioneering work, and it also removed a publication for which I asked for retraction; but neither the editor nor the corresponding author (of that other paper) wants to retract the paper, which fails to acknowledge my idea and my previous, strictly confidential results…
There really should be a system of post-publication review to get rid of questionable papers, authors and even some plagiarists.
I don’t agree that lack of citation is generally a problem. One of my pet peeves is a reviewer who wants me to cite a peripherally-related paper (no doubt of their own). Another of my pet peeves is scientists who are obsessed by their citations.
Formally, we are not citing other work in order to “give credit”. We are citing papers in order to support specific statements in our manuscripts. For every factual statement in a paper the authors must either a) support the statement with evidence presented in the manuscript, or b) cite another paper which presents the supporting evidence.
The only other proviso is that you cannot claim original work of others as your own. But in the paper under consideration here, the authors may very well have thought that glutaraldehyde fixation is sufficiently well known that citation of the other paper was not required.
Of course it should be the choice of the authors whom to cite and whom – actively – not. In my case the authors did not even seem to have read or understood all the papers coming from their own labs, and therefore may have cited them entirely wrongly – in exactly the opposite sense: the cited paper could not make any such claims and even explicitly excluded the specificity of the molecules and receptors involved (I was later the one who introduced specificity in that lab). Perhaps I should say: sorry that my father is not a professor, so I should not expect support for an academic career in Germany.
I get dinged by reviewers, regularly, for the number of papers I cite; I do not overly cite myself.
I was taught that if the method you used was Bradford’s, you cite Bradford; if you used a modified version of Bradford’s method, you cite both. In addition to your a) and b), you must add c): if you read a paper that gave you the idea, you cite it.
There are two different labs that have each cited one of my papers once, and then in all subsequent papers cite only their own work, so that people will never know that I did the original work. It is very nasty when groups hijack your work by not citing the underlying original reports. If this had happened with one large, well-funded group I would think it was a one-off, but two different groups have taken over two different observations and reports of mine.
An analogous situation occurred recently with the journal Psychological Science. A set of authors claimed to have “discovered that waving one’s own hand in front of one’s covered eyes can cause visual sensations of motion”, http://pss.sagepub.com/content/early/2013/10/28/0956797613497968.abstract, but similar work had been done in the early ’70s and ’80s (I was a co-author of the ’83 report). After alerting the journal, the authors issued a correction that included what in my view is a somewhat dismissive distinction between the earlier “subjective reports” and their later “objective evidence that seeing hand motion in the dark is genuinely perceptual in nature and attributable to known mechanisms of brain functioning”, http://pss.sagepub.com/content/25/3/842.full.pdf+html. Admittedly, one of their studies included a clever control condition but, in my view, their studies merely confirmed what had already been found in the literature.
I found a paper of mine cited in support of an equation, despite the paper in question heavily criticising that equation. The corresponding author of the offending paper was kind enough to correct (i.e., remove) the citation, but not the use of the improper equation.
Happens all the time. People don’t quote prior stuff, ON PURPOSE, so that they can inflate the perceived novelty of these findings. If you freak out, and send a nasty letter to the Editor, a correction can be forced, but it takes a lot of time and effort, and, in the end, is it really worth it? Here is another example. They only ‘forgot’ that prior studies have already implicated this particular pathway in this particular disease.
http://www.nature.com/ni/journal/v11/n1/full/ni0110-97c.html
Partly the referees are to be blamed, they should do their homework and make sure the authors don’t get away with sneaky stuff like this.
” People don’t quote prior stuff, ON PURPOSE, so that they can inflate the perceived novelty of these findings.”
Are you sure that that is the case here? The novel technique was glutaraldehyde fixation of proteins.
OK, in this case it could have been sloppiness too, or perhaps a result of some sort of animosity between these two groups. But in my experience – both as a referee and as an author – most of the time it’s a sneaky attempt to make the findings look more novel or more original.
Can anyone really expect that every scientist has read and remembered the full contents of every paper ever published? Can two horses not arrive at the same town by taking two different roads? And really, do we expect that every single method/derivative finding is going to be traced back to the source paper? Should I cite Mullis in every paper using a PCR?
Yes, getting cited is fun; it means that someone somewhere cares enough to have at least copied your name out of a sentence from someone else’s paper, or maybe even read your abstract… and yes, citations count in some way, shape, or form for jobs…
But going to an editor and requesting a correction because you think that someone failed to cite your own paper? That, my friends, is the definition of petty.
I would think such a thing would be too trivial for RW. A missed citation does not need to be “news”. imho
A citation missed “on purpose” may mislead readers with regard to several aspects of a technique, its original developers and the time-frame in which it was developed. In my experience it is also a typical behaviour of some German biophysicists not to acknowledge the original work, while receiving millions of euros of taxpayers’ money and pretending to have had the original idea. So it is extremely important that Retraction Watch reports on similar mistakes – and on how this could be handled in a good way by Nature.
I totally agree with Eibl that missing a key citation denies proper credit to those who make a significant contribution to the phenomena under study. There are many reasons why a specific relevant citation is missed (e.g., it appears in an obscure journal, it is inadvertently missed because there is a lot of relevant literature), but when there is consensus that a missed key citation should have been included, such omission constitutes an error and such errors certainly merit some form of correction.
Well, the name of the site is retraction watch, not citation correction. While I completely understand your point in general, I still don’t think it belongs on this site. How many papers have been retracted due to a bad or missing citation?
Retraction Watch usually covers – as the name implies – the circumstances of a retraction; this is often related to scientific dishonesty. Omitting a crucial publication, especially one by the first pioneers in a field, may not lead to a retraction, but it is almost as bad as plagiarism. It can totally harm – on purpose – the career of real scientists. Typically, such professors and their institutes then help to get professors’ sons onto reviews in journals with good impact factors, so that five years later those copycats appear to be the originators of the idea, the experimental planning and the first measurements – for example, of chemokine-mediated integrin activation with AFM at the single-molecule level on a living cell… and many similar projects.
Well stated, Eibl. I don’t think that it necessarily has to cause a negative impact on a scientist’s career to be considered problematic. The fact that such “sloppiness” or “manipulation”, even if perceived as “innocent mistakes”, can twist and skew the literature is significant, because we are potentially talking about thousands of scientists doing this in tens of thousands of papers, which could be affecting as many citations – or even millions. If we allow this culture of reference manipulation to be inculcated, and not controlled or verified – also because the traditional peer system is already too over-worked and over-loaded to worry about such “trivia” – we could potentially have situations in which country A, with a scientific population 10-fold that of country B, could easily manipulate the citations to suit its geo-political needs (hypothesis). This is a serious issue that I have already brought to the attention of Thomson Reuters, but my trivial ideas are, as always, set to the side. At the end of the day, the fate of all this bickering may be in vain, something akin to the movie Legends of the Fall.
In Germany and elsewhere, many professors may feel the pressure to promote other professors’ sons to a professorship, so they sometimes may use any approach to avoid supporting the originators of research; that may include writing reviews for the professors’ kids, but also failing – repeatedly, on purpose – to acknowledge the real scientists behind the lab work.
True, and behind this lies an elitism based on academic families. I still don’t think one citation is going to make or break a career. Has anyone ever followed a citation trail? I once tracked down a citation for the dry weight of a particular protein found in a certain tissue. It turns out that the trail went bad about 30 years ago, when someone falsely quoted an old paper. That bad quote has been cited over 30 times, along with its bad data. The original work was done 70 years ago, and everyone has been falsely quoting it for 30 years. So yes, I understand the importance. I still don’t think it belongs here.
Unfortunately, criminal trials in Germany should be started within three years, but the University needed three years to check for plagiarism…
John, I am doing such a very painful and time-consuming analysis in the plant sciences, working on about six genera simultaneously. The signs are very, very bad, and the issue is not easy to quantify because two key parameters are usually missing: a) the thoughts and feelings of the authors when they wrote their paper; b) the mind-set of the “peers” when they made their “accept” assessment. I have already documented some of those concerns for Anthurium. What we need is for specialists to pick up small batches of literature surrounding very specific topics, which makes literature analysis manageable. Writing a review is sometimes useful because it allows a deep and fine-scale analysis to take place. Just yesterday, I received a critique from one peer reviewer who asked why I had included the reference of a retracted paper in the review. I indicated that it was not to give that individual additional citations, but to draw attention to the fact that the claims made by that individual were no longer valid – making a retracted paper almost as important, in terms of the scientific message, as a published paper. Let’s see if the peer reviewer actually understands my logic, and approves of it.
Followers of this thread might be interested in reading the following news item from Nature, http://www.nature.com/news/sperm-rna-carries-marks-of-trauma-1.15049 and the comments by Abhay Sharma. I cannot judge the merits of Sharma’s claims, but I do wonder if we will soon be reading another entry in RW titled “Lack of citation prompts correction in second Nature journal”?
“Give credit where credit is due” must be strictly adhered to. It is all the more important in nascent areas – transgenerational epigenetic inheritance, for example – otherwise confusion created at this stage may spread quickly and soon disfigure the body of scientific literature. My previous comment to this effect may interest the followers of this thread: http://www.nature.com/nature/journal/v467/n7318/full/nature09491.html