Here’s a good example of a retraction done the right way (we think).
The Journal of the American Chemical Society has retracted — at the behest of the principal investigator — a 2008 article by a group of researchers whose subsequent studies undermined their confidence in the validity of their initial findings.
The article was titled “Cooperative melting in caged dimers of rigid small molecule DNA-hybrids,” and it came from the lab of SonBinh Nguyen, of Northwestern University. As the paper’s abstract stated:
Rigid small-molecule DNA hybrids (rSMDHs) have been synthesized with three DNA strands attached to a rigid tris(phenylacetylene) core. When combined under dilute conditions, complementary rSMDHs form cage dimers that melt at >10 degrees C higher and much sharper than either unmodified DNA duplexes or rSMDH aggregates formed at higher concentrations. With a 2.97 average number of cooperative duplexes, these caged dimers constitute the first example of cooperative melting in well-defined DNA-small-molecule structures, demonstrating the important roles that local geometry and ion concentration play in the hybridization/dehybridization of DNA-based materials.
But the retraction notice tells a different story:
Over the past few years, efforts in the laboratory of the corresponding author (S. T. Nguyen) have overwhelmingly suggested that the reported melting data for small molecule-DNA hybrids, as presented in Table 1 and Figure 2 of this publication, are incorrect. While small molecule-DNA hybrids do display sharper melting profiles compared to those for unmodified DNA duplexes, the levels of cooperativity are more modest, as shown in our subsequent works.(1, 2) As a result, the corresponding author withdraws this publication and regrets any trouble it might have caused.
The study has been cited 15 times, according to Thomson Scientific’s Web of Knowledge.
Hat tip: Neil Withers
I’m not sure I agree with retracting a paper because subsequent studies disagree with the original results. If there were something clearly wrong with the reporting (e.g., wrong figures inserted) or some sort of FFP (fabrication, falsification, or plagiarism), then that would make sense. But retracting because science did what it is supposed to do — support or refute previous results — seems inappropriate. Do we retract everything that new results don’t support?
When a paper just appeared as hard copy between the covers of a journal, a retraction might not have been much use. But these days, someone discovering the paper by some kind of e-search would immediately have it flagged up that the paper is wrong, or at least dubious. Kudos to the PI for following the correct professional approach in this case.
I agree with awbrown. On the basis of what we know, there seems no grounds for retraction.
What if I do similar experiments and get results consistent with the original paper? I’m no longer supposed to cite it since it has been retracted. Retracting this paper does not clarify anything in the published literature.
Maybe the now-retracted paper was becoming a bar to publication. If each new paper led to reviewers’ comments saying “you say this now, but then you said different,” perhaps it became easier just to retract?
I’d still prefer that this kind of article stay in the literature if the experiments were themselves sound, but this seems like a reason you might retract it.
I don’t know. It sounds like they are saying they did the same experiment over and over again and got a completely different result, which means either the people who did it in the first place were incompetent, or they were fudging things. To me, since all the authors on the subsequent papers are different except for the PI, it suggests funny business. So I disagree that it’s a clear notice. The PI probably suspects funny business from one of the earlier authors but has no outright proof, since it would be easy to simply make up melting point numbers and enter them in the notebook. They have subsequent papers on this, and if manipulation or incompetence is suspected, a retraction is in order. The way I read it, the notice is not saying this was an honest scientific mistake that could happen again.
I also completely agree with awbrown. If you apply some statistical thinking (as I am sure chemists must too, to some extent), then even if an effect is “out there,” you will not always observe it to the same extent in each particular sampling of the world you engage in. This has to do with the probabilistic nature of measurement. I haven’t read the original paper and am not a chemist, so I may be missing something crucial. But I think in the case of these authors, the proper way to deal with non-replication — IF IT’S NOT DUE TO AN IDENTIFIABLE FLAW IN THE ORIGINAL STUDY — would be to state in later papers something along the lines of: “We weren’t able to confirm this effect in the present research and suggest that this may be due to X or Y in the original work, or perhaps Z in the present research.” Who knows? Perhaps the divergent results will ultimately reveal something interesting about the properties of those small molecule-DNA hybrids…