So what should happen to scientific papers that are proven wrong?

There’s been a lively discussion at Jeff Perkel’s guest post from this morning, “Should Linus Pauling’s erroneous 1953 model of DNA be retracted?” Most of our commenters say “no.” Some of those “nos” are quite emphatic, suggesting that Retraction Watch should brush up on epistemology, or that this was a silly question to begin with.

We appreciate all the feedback, of course, and thought this would be a good opportunity to expand the answers a bit from “yes” and “no” — which a few commenters have begun doing. So we’re posting this poll about what should happen to papers such as Pauling’s that are proven to be wrong, knowing that they continue to be cited as if they had no significant flaws. (Pauling’s, as Perkel pointed out, was actually wrong about at least one thing even when it was published, but leave that aside for these purposes.) Vote here:

[polldaddy poll=6348342]

66 thoughts on “So what should happen to scientific papers that are proven wrong?”

    1. That’s covered by all of the options except removing them from the literature entirely, yes?

  1. “well-informed scientists will know”? religion is never too far away… 🙂

    I’ve voted for the added notice, that would say simply “obsolete”.

    1. Well informed scientists all started out as ignorant newbies and scientists have an obligation to spread info everywhere: not just within the “well informed” elite with the privilege of unlimited time and money to just “know”… It concerns me how elitist and popular this answer is!

      1. It’s called “looking at the articles that cite this one.” Articles do not exist in a vacuum. It’s a pretty basic mistake to read a single article and not look at where it falls in the context of the literature.

        Research articles are not written to teach. They’re not written like Wikipedia articles. They are written for people who are conducting research.

        And, anyone conducting research has to understand the context of an article in the literature, which includes: (a) who has cited the article, (b) what the citations are for, (c) whether the experiments are reliable, (d) whether there has been follow-up work, and (e) whether the interpretation agrees with current consensus, is controversial, or appears (to at least the reader) to just be wrong.

        Research articles are the primary documents of scientific research, and are not written for a general audience. Crafting every research article for a general audience would be a complete waste of time. What you’re asking for is better suited for review articles or textbooks, which already fulfill this role quite nicely.

        Which is to say: I am of the opinion that the record of research is probably as important as the results of research. Scientific articles should only be retracted from the literature if there is fraud. Even apparently-unreproducible results can often be of importance. Reproducible data with a wrong interpretation can be tremendously important.

        We don’t know where we’re going in this endeavor, or how we’ll get there, but we sure as hell better know where we’ve been.

    2. I think retraction is the only possible and clean way. An example: a study on DNA breaks caused by mobile-phone electromagnetic fields was published in the International Archives of Occupational and Environmental Health (Schwarz et al., 81: 755–767, 2008). Despite the facts that a) the host institution classified the data as fabricated and published this in three press releases in 2008, b) the author responsible for the data fabrication left the university right after she was caught fabricating similar data (also in 2008), c) the lab book contained the codes for unblinding the “blinded” exposure/sham exposure chambers back to 2005, and d) the journal (in 2008) published an Expression of Concern saying that the Editors “apologize” (!) for having published this paper and that they no longer trust the results and the discussion (!!), the paper was not retracted. It has been cited 19 times (so far) from 2009 to 2012. I hope my point is clear; sorry for the one long sentence.

  2. Generally the 2nd or 3rd option, depending on how much of the conclusions are wrong, why it was wrong (e.g. was it based on a widely accepted theory that later turned out to be untrue?), etc.

  3. If you are citing Pauling’s article on DNA without knowing about Watson and Crick, what the hell? Suppose, for argument’s sake, that Pauling’s model had been the commonly accepted view, and many years later we found out he was wrong. If Watson and Crick’s article had been retracted because we thought it was wrong, their article would have been lost to us.

  4. I’d be horrified if there was any kind of movement towards somehow branding papers judged to be incorrect. Sure, some papers are unambiguously incorrect. How many are partly incorrect and partly correct? How many are considered wrong by many or most authors in the field but not universally? How many were once considered incorrect but were subsequently shown to be correct after all? Who would get to decide when a paper should be given its scarlet letter?

    Scientists not familiar enough with their field to know which results have general consensus and which are thought to be flawed do not tend to get very far. Attempting to create some means of deciding for scientists which results they should use and which they should ignore seems like a recipe for disaster to me.

    1. I’m not positive I agree that the only people we need to consider are established or well-mentored scientists — undergrads, patients, and loonies read journals, too. I don’t think the “do nothing” option suits them very well, but I agree that a formal mark of disapproval would be a disaster to implement.

      I’d have to think that the answer would be something like F1000 or some other form of post-publication peer review. Convincing journals to link to F1000 or Research Blogging content could go a long way.

  5. As long as it is the conclusions that are proven wrong, not the data itself, the paper should stay. Science is not about collecting absolute truths; it’s about suggesting and refuting hypotheses. If we remove those hypotheses, what happens to the papers refuting them? Should they stand without context? Or be retracted as well? I also don’t agree that such papers are obsolete; they are still valid and important as records of the scientific process.

    Adding corrective notices to 90% of the scientific literature is hardly doable, and besides, what absolute authority would issue such ‘corrections’, and who would make sure those don’t become obsolete?

  6. None of the above. As one commenter noted some months ago (apologies for not remembering who or on which post), what’s needed is the equivalent of Shepard’s Citations for science. We can already find if an article has been cited. Just add a one-letter code, if applicable, for agreed, disagreed, or distinguished. Leave formal retraction for cases involving wrong-doing or where the authors really wish they’d never said whatever it was in the first place. Some additional points:

    The science model is used for lots of things which are not susceptible to bit-like, true-false characterization. That’s because it’s a pretty good way of doing things, even if it isn’t a perfect fit. So, when is the paper wrong enough to be disapproved and who decides?

    Labeling papers as wrong comes perilously close to science by consensus. Consensus is an illegitimate short-cut for reasoning and evidence. No help for that if you’re a non-expert. A non-expert has to rely on consensus (e.g. review articles); but giving some official status to the supposed consensus should be avoided at all costs.

    Outside of the most competitive areas of biomed, there’s usually more than one experimental series per paper. How many have to be wrong for the paper to receive the Scarlet R?

    Do we want to give incentives for scientists to get all legalistic with qualifications, caveats and forms of wording to avoid conclusions being branded as erroneous? Anyone who has ever read an opinion letter by a licensed professional of any kind will know why this is undesirable.
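    The Shepard’s-style coding suggested above could be sketched roughly as follows — a minimal Python model in which the enum values, field names, and the `summarize` helper are all hypothetical illustrations, not an existing system:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    """Hypothetical one-letter codes, by analogy with Shepard's Citations."""
    AGREED = "a"
    DISAGREED = "d"
    DISTINGUISHED = "x"

@dataclass
class Citation:
    citing_doi: str       # the paper doing the citing
    cited_doi: str        # the paper being cited
    treatment: Treatment  # how the citing paper treats the cited one

def summarize(citations, cited_doi):
    """Tally the treatment codes a single paper has received."""
    tally = {t: 0 for t in Treatment}
    for c in citations:
        if c.cited_doi == cited_doi:
            tally[c.treatment] += 1
    return tally
```

    A reader could then see at a glance whether a paper’s citations are largely in agreement or disagreement, without any central authority issuing a formal “wrong” label.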

    1. Personal experience tells me that looking at citations doesn’t work either. I have a database of over 350 articles, most of which make a very basic mistake (described in all textbooks!), and several of those papers have a result that is (almost) entirely due to this experimental artifact. Several are cited more than 30–40 times, one even above 200, and those are not citations in disagreement. A whole field gone awry of late.

      In a recent Nature commentary (just ask if I need to find it for you, I presently don’t have it at hand) it was pointed out that only 15% or so of the papers describing new leads in cancer could be reproduced in the industry. Interestingly, the most cited papers were among the least reproducible!

      I do agree that it is difficult to draw the line of when “wrong” is too wrong to be “acceptable”.

  7. I don’t have a problem with retracted articles being cited – as long as it’s acknowledged they’ve been retracted. Sometimes it’s important to be able to cite a study in terms of describing the previous literature and progress of thought in a field. Not being able to cite a previous paper seems to be going way too far.

    If citations of retracted articles contained ‘retracted’ right there in the references/bibliography section of a paper, I think that would solve many of these problems. People could continue to cite them, but if they did so without including ‘retracted’ in the bibliography, that would alert readers and editors that the author might be making unsupported claims or is not up with the literature; likewise, the reader would know the author is citing retracted research simply by looking at the reference list.
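    That check could be automated. As a rough sketch (the function, its arguments, and the data shapes are hypothetical, not any existing tool’s API), a reference manager or journal production system could compare each bibliography entry against a list of retracted DOIs and flag entries that omit the marker:

```python
def flag_unmarked_retractions(bibliography, retracted_dois):
    """bibliography: list of (doi, entry_text) pairs.

    Returns the DOIs of retracted papers that are cited without any
    'retracted' marker in the bibliography entry's text.
    """
    flags = []
    for doi, text in bibliography:
        if doi in retracted_dois and "retracted" not in text.lower():
            flags.append(doi)
    return flags
```

    Anything it returns would be exactly the situation described above: a retracted paper cited as if it had never been retracted.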

    There are a few recent articles which I think push this question of when an article should be retracted. For example, the study by Li et al. in Science showing large-scale modifications in DNA-to-RNA coding could, I think, now be said to be almost entirely due to methodological errors, and should probably be retracted. The other paper is the study showing incorporation of arsenic into DNA. Both of these papers appear to rest on methodological issues and thus make unsupported claims. The most worrying aspect, in my opinion, is the reticence of the journals involved to seriously look at retraction – in the case of the Science paper, one of the key follow-up articles had to be published in another journal! That is a huge red mark against the scientific credibility of Science as a journal.

  8. So what should happen to scientific papers that are proven wrong?

    Nothing. To retract them would be to confuse data and conclusions. A paper’s conclusions may be proved wrong but still have accurate data. A paper should only be retracted if it contains falsehoods.

    1. “A paper’s conclusions may be proved wrong but still have accurate data.” — Completely agreed. In fact I think science must strive to publish raw data and all possible complementary information. And readers (scientists) must critically read papers in their field instead of merely citing the ones that help them build their views. Why can’t I cite an imprecise paper for whatever reason? Who will label papers as irremediably imprecise? Must only “good” papers be cited? One has to think critically in science, and showing data and discussing possible and actual mistakes is all about inquisitive thinking. I cannot tell what is right, yet I can make my personal guess about what looks more reasonable only by seeing a set of many (flawed) studies. Killing the wrong will never make right.

    2. But what is “accurate data”? Pauling’s data may be “accurate”, but the analysis of the data, which creates new data, was wrong.

      Similarly, I know of many papers that have accurate raw data, but by failing to correct for a known experimental artifact, the data as used is wrong.

      So, what do we do with that? Is it really “accurate” data? Or does the failure to correct for the artifact make it a falsehood?

      1. To be honest, Pauling’s paper is not a good example for this argument. To a physical chemist, there’s a clear difference between the raw data (x-ray diffraction patterns) and the chemical structure. The diffraction patterns are what we measure. Chemical structures are always a model of reality. They throw away much of the underlying orbital information, and are static representations of a dynamic system.

  9. “what should happen to papers such as Pauling’s that are proven to be wrong, knowing that they continue to be cited as if they had no significant flaws”

    This is the fundamentally deeply flawed assumption underpinning the whole notion that papers should ever be retracted just for being wrong. What would make you think that anyone citing a paper would automatically be doing so “as if they had no significant flaws”? To demonstrate the absurdity of this notion, and the huge disservice that retracting papers would do to scientific progress, I looked up the most recent citation of Pauling’s paper that I could find, which is “Direct observation of the formation of DNA triplexes by single-molecule FRET measurements” by Lee et al., published in January this year. In the first paragraph of their introduction they say:

    Prior to the double helical structure of DNA proposed by Watson and Crick in 1953 [1], Pauling and Corey proposed a triple helix as the structure of DNA [2]. Although the original proposal for the basic structure of DNA turned out to be wrong, the existence of triple helix was reported and confirmed later [3].

    So how about that. Pauling was right. His model is not how DNA works but the molecular structure he proposed does occur. Lee et al. cite the paper, succinctly and accurately state what it got wrong, and what it got right.

    Now, would retracting the Pauling paper have a) helped or b) hindered Lee et al in their scientific work?

    1. What would make you think that anyone citing a paper would automatically be doing so “as if they had no significant flaws”?

      Good question. This:

      The 1999 study found that the retracted articles received more than 2,000 post-retraction citations, with less than 8% of the citations acknowledging the retraction in any way. Preliminary examination of the present data set illustrates that continued citation remains a problem. Of 391 citations analyzed, only 6% acknowledge the retraction.


      1. That’s a very interesting article. But Pauling’s paper is in a different category, surely. It hasn’t been retracted, and I’ve found that one citation to it clearly describes what was incorrect and what was correct. It is not being cited as if it had no significant flaws.

        And if people cite retracted papers without even noting that they were retracted, then the proposed solution to the non-problem of papers turning out to be incorrect clearly wouldn’t work!

    2. Ah, but the structure as proposed by Pauling does NOT occur! Yes, there is such a thing as a triple helix, but it has a distinctly different structure than the one Pauling proposed. It is true (as I noted on the previous thread) that the suggestion of a triple helix, without all the details that were wrong, may have helped others to contemplate the notion at the time when the double helix was the prevailing paradigm.

    3. As noted before on the Linus Pauling thread, these are two completely different models of a triple helix. The real DNA triple helix consists of a B-form double helix with an additional strand wound around it. It retains all the structural components of DNA, and is still an acid. Pauling’s triple helix did not explain the x-ray diffraction patterns, is not an acid, and would spontaneously dissociate under physiological pH. The only conclusion this allows is that Lee et al don’t know jack all about molecular biology/organic chemistry (and can’t use Google/wikipedia), but purely base their statement on “triple helix” = “triple helix”.

  10. I admire your work, keep it up! But “retracting” works simply because they are later declared erroneous is absurd. Would you retract Newton’s Opticks for his well-described errors, including experimental fudge factors, or Maxwell’s equations because they assumed the existence of an aether? What about Einstein’s greatest mistake, introducing the Cosmological Constant, which is now quite fashionable? The present system assumes Published (by a “peer reviewed” journal) = Truth, but Retracted = False. Where fraud is concerned, OK; but for science in historical perspective it is nonsense.

  11. What about adding links (extra “references”, perhaps) to subsequent papers that demonstrate errors in the article in question?

    It is no longer the 1950s, when this proposal was technologically impossible (or very severely impractical).

    1. Databases which list citations provide this functionality, although obviously they don’t pass judgement on whether the citing papers proved or disproved or simply commented. I think it would be a massive mistake to try to take that judgement out of the hands of individual scientists. Can you think of anyone you’d actually trust to make that judgement on behalf of the scientific community?

      1. Huh? Who’s making judgements? I’m asking for links to not just papers that refer to the one in question, but specifically the ones which claim something is in error; so that I may be more time-efficient about coming to my own judgement.

  12. I think retraction should be reserved for cases of plagiarism, duplicate publication, and fraudulent data. Retraction should be a punishment for intentional wrongdoing, a means of denying credit to those who attempt to get their publication credit by underhanded methods. Being honestly wrong should not be a punishable offense.

    But retraction should not mean expungement from the record. We should preserve the record so people can use the record as evidence for later investigations. Otherwise, we are living in a 1984-like world in which truth is always being rewritten to suit the folks who are currently in power. How would an investigator develop a body of evidence showing that someone was a serial plagiarist if the papers were always removed from the web as soon as the plagiarism was discovered? (Hmm. There would be a niche for a company to take snapshots of the entire web every second of every day, to preserve evidence that might later be removed.)

    Retraction should mean the red-letter stamp across the on-line version to notify readers that the item fails the test of good-faith inquiry honestly submitted and honestly accepted according to the rules. I wonder whether even whoopses like accidental publication should be expunged from the record. Perhaps a red “whoops” and a short explanation would be sufficient.

    A more ethical and rigorous review system should be developed to prevent the publication of papers that get into print only because of an author’s reputation or that get into print despite detectable defects (fraudulent data, plagiarism) that should have been cause for rejection. Retraction should be safe, legal, and rare.

  13. As someone new to research and publishing scientific articles, I’m appalled by the notion that an article/hypothesis deemed, at some point in the future, incorrect might be subject to retraction. When (ethical) scientists conduct a study and then draft a manuscript, they are reporting what they did, what they found, and what they think it means. Simple as that. As long as they report those three things accurately and honestly, that manuscript (assuming it’s accepted at a journal) should be immortal. If future studies result in the consensus that the manuscript was wrong in “what they think it means,” well, so be it. Over time, scientific attitudes, biases, paradigms, and trends change (just ask an ecologist). But how does being wrong in hindsight warrant a measure that puts a manuscript on the same shelf as studies retracted for falsifying data? Those two scenarios are night-and-day different.

    Here’s the advice I would give a budding scientist–which, frankly, I still am: If you’ve discovered something that you think is interesting and novel, put it out there. Put it out there accurately, honestly, and completely. Go ahead and make whatever conclusions you want based on your data. As long as your results are accurate, honest, and complete, I’ll make up my own mind about whether your conclusions are supported.

    As a side but related note, where do Pauling’s claims about vitamin C stand? Based on a read of the Linus Pauling Institute website, it seems to still be a very open question as to whether vitamin C is as beneficial as Pauling claimed. In other words, there are conflicting studies – so which of those studies should get retracted (because they are wrong) and which should be allowed to stand? How should we decide which are wrong and which are right? Is it too soon to tell? Should we wait until future studies consistently show only one result or the other?

  14. Papers get retracted for:
    a) Deliberate misinformation.
    b) Discovery of honest error that invalidates the data.

    In an online world, what I’ve often wished for is that when a paper has been published, credible critiques/refutations get attached to it, so that one need not hunt all over the place for them. Of course, it’s not good enough just to set up a blog-like structure; there has to be some editorial selection.

  15. It is often necessary to assess the totality of evidence, even when it is pointing in different directions. Let’s say hypothetically that a meta-analysis is performed of 25 clinical trials. Three of them are clearly contradicted by the result of the meta-analysis. Retracting those and removing them from the records would make it impossible to conduct a new meta-analysis incorporating all reported studies. Far better to leave them in the record!

    I agree with those who have already argued that scientists can, and must, use their own judgement to determine which papers should be cited.

  16. Who is supposed to take the responsibility for making such a weighty determination? I am drawn to journals that allow published comments or discussions of their published papers. Let another individual author take a crack at explaining what is wrong, and get personal credit for his or her own contribution to collective understanding. Such a journal identifies the discussion with a title of “comment on …” or by placing it in a regular “discussions” section, and recommends it as a related paper to readers who seek the original on its website.

  17. I’m appalled at the notion of retracting “papers that are proven wrong” (sic). This premise reflects a profound misconception about the nature of science and its object, scientific knowledge. Truth in science is always temporary, and it is never absolute.

  18. The correct answer (i.e. #1) is rather soured by the way it’s qualified (whoever set up the options didn’t seem to think it was necessary to qualify the other options!).

    “Wrong” papers should be left, not because “well-informed scientists will know and not cite them”, but because anyone (scientist or “layman”) with the most minimal interest in the subject of the paper will surely make the effort to seek out the current scientific status of the field. Every scientific field has several recent reviews that cover the defining science of the subject. “Wrong” papers have valuable historical interest, and in any case may turn out not to be so “wrong” after all.

    Papers should be retracted as a result of identification of “bad faith” activity on the part of the authors (and editors); i.e. fraud, plagiarism, or because the authors themselves have subsequently identified fatal problems with the work.

    Hunting down and retracting old papers that are subsequently shown to be wrong has a rather creepy air of Inquisition about it. One can imagine committees tasked with this seedy job, and the delightful opportunities for political interference in the scientific record as papers that don’t conform to some particular agenda are tagged for oblivion… no thanks!

  19. My recommendation: Teach people to be informed consumers of research, and then the problem resolves itself. They will know how to find the latest, best evidence and then we can leave the scientific literature intact.

  20. I think such a paper should not be retracted, because it is a model, a hypothesis, which will subsequently be tested and could be proven not to fit experimental data; that is how science is supposed to work. If anything needs to be done, it could be to provide links to papers that refute the model when it is clear-cut enough, as in Pauling’s case, that the model does not match the experimental data.

    1. “If anything needs to be done it could be to provide links to papers that refute the model when it is clear cut enough”

      I’ve seen a number of posters suggest this. They’re called citations. You’re all suggesting a system that would be exactly equivalent to the current citation system, with maybe a few tweaks. Two parallel kinds of citations will do nothing but cause confusion.

      I can already see the confusion: “we only touch on their work, so do we use a traditional citation, or do we cite their work as a new-fangled-lightweight citation?”

      And the politics: “we don’t like that group much, so we’ll use the new-fangled-lightweight citation that doesn’t get counted in the impact/influence scores, even though we pretty much agree with them.”

      Any system that provides “links to papers” is a citation. If you have experiments that refute a work, write a paper or a review, and then cite the original work. In on-going research, there are **very few** cases when things are “clear cut enough” to warrant some kind of disclaimer, in big, red letters. We also have a mechanism for that: editorial expressions of concern.

      Look. If the paper is important to you, you **must** look at the work that cites it, and you must make your own evaluation of the paper and its citing work. It’s called _scholarship_. It’s what we’re paid for.

      1. But some papers are not designed for scientists in the field, but to be promoted to the general public in the short term. Sometimes the body of the paper is complex, but wrong. Sometimes the body is OK, of minor import, but unsupported messages get stuck into the introduction and conclusion, and those get quoted.

        1) They may be so silly that nobody in the field will bother writing a refutation.
        2) Even if they do, it may be that no credible journal will publish it.
        3) If they do publish it, it may be fairly difficult for the general public to find, and if they do find it, it may be paywalled.

        I don’t think most disciplines face this, but it does happen in some. It certainly happens in climate science. See for example Pals, but there are various other examples, such as papers by some of the authors mentioned in RC. For instance, the paper by Beck was especially bad, but it still gets referenced by people who like its message. It’s been used in books (not for scientists or students, but for the general public).

        Again, all this is *not* normal science, but it would certainly be nice if refutations were quickly and easily findable without extensive efforts.

  21. John Mashey – I am not sure anybody else here is advocating your Truth Committee, with its hint of conspiracy theorizing. The question is what to do with papers that are wrong, not with papers that some people believe are wrong.

    And yes, even if science cannot find what’s true, science can find what is not true, such as a triple helix for DNA.

    1. Is there an objectively definable difference between “papers that are wrong” and “papers that some people believe are wrong”?

    2. No, it’s often straightforward to determine that a paper is absolutely wrong. John Mashey’s example of Beck’s paper is a good example; a rather nasty piece of pseudoscience prepared and accepted in bad faith apparently in order to misrepresent a particular scientific field.

      Should it be retracted? Well, it should never have been published, but since it serves a particular purpose neither the author nor the editor is likely to accept a retraction.

      As usual, consideration of “good and bad faith” is a helpful guide for addressing the particular modern problem that John Mashey describes, which is quite different from the normal considerations of science publishing, and of what to do with “wrong” papers (answer: nothing [*] other than to ensure that the contemporary record continues to be robust).

      So anyone with a “good faith” interest in understanding late 19th and 20th century atmospheric CO2 levels who happens across Beck’s paper will recognise quite quickly that it’s a ludicrous crock and disregard it in favour of the abundant science that pertains to this subject. Those who address this aspect of the science in “bad faith” are likely to find it useful in supporting a particular non-science agenda. That’s simply a sad fact of modern life… there are quite a few papers out there for which it’s similarly difficult to escape the conclusion of bad faith in support of non-science agendas (we could list them!).

      [*] In my opinion no piece of work published in good faith should be retracted unless the authors themselves choose to do so. If it turns out to be wrong, this will become abundantly clear to anyone with a genuine (good faith) interest in the subject.

      1. Be very careful when you make categorical claims like “it’s often straightforward to determine that a paper is absolutely wrong.”

        There are plenty of historical examples where people thought a work was “absolutely wrong” when it was not. Boltzmann and Heaviside both had problems publishing work that was ultimately shown to be correct. _The Structure of Scientific Revolutions_ details several cases where science was pushed backward by social factors.

      2. I agree, Beck’s CO2 work was laughable nonsense, presumably published with the deliberate intent to mislead the weak-minded. Nonetheless I think he’s already in a grey area because I believe his data were actually reasonably reliable local measurements of atmospheric CO2. He dishonestly passed them off as global measurements but the data themselves might nonetheless be useful, to someone. So, it might not be reasonable to describe the paper as “absolutely wrong”, though its conclusions certainly were.

        Retracting papers for being fraudulent, unethical, based on objective reproducible errors, or for being fundamentally unreproducible seems to me a good thing and relatively easy to do transparently. Retracting papers for being wrong just seems like a minefield of epic proportions, impossible to do objectively, fairly, transparently or reliably.

      3. I couldn’t agree more, sfs. I am referring to the rather unsavoury practice of publishing knowingly wrong material in support of non-science agendas.

        In the example that John Mashey referred to, and with which I’m familiar, a paper was published that compiled various measurements of CO2 taken in various cities (Paris, London, Belfast, Poona, Vienna, Copenhagen, Giessen, etc.) in the late 19th and early-to-mid 20th century, where, not surprisingly, some high values of CO2 in the air were measured. The author of the paper (Ernst Beck) asserted that these measurements showed that atmospheric CO2 levels underwent massive swings up and down in the early 20th century (by amounts equivalent to the release and reabsorption of around 1/3 of the terrestrial biomass in the space of a few years!), that these studies had been ignored by climate scientists, and that the contemporary rise in CO2 is of little significance in the context of natural variation in the 20th century.

        It’s a nonsense paper in support of the misrepresentation of this aspect of climate science. It’s particularly nasty because it insinuates malpractice on the part of scientists who attempt to make robust assessments of atmospheric CO2 levels, and because it misrepresents the understanding of the scientists who made these measurements, who knew quite well that their very high [CO2] readings and massive fluctuations were a result of their locations near industrial centres.

        So the paper is wrong, and it’s difficult to escape the conclusion that both the author and the editor of the journal knew this. That’s the class of paper I’m referring to as categorically wrong. Sad to say, there are a number of papers like this in the scientific literature.

      4. Note that I said:
        “Again, all this is *not* normal science, but it would certainly be nice if refutations were quickly and easily findable without extensive efforts.”

        I didn’t suggest retracting Beck. I just wished that if someone referred to the original, a reader (who was not an expert) could go there, read Beck, and then also see credible comments/refutations attached, rather than scattered around blogs amidst citations that accepted it uncritically. In print media this function has long been filled by publishing notes later, but that still disconnects them from the original. This seems unnecessary in an online world.

  22. A more interesting point for me: if the retracted paper is the main work of a PhD thesis, should the PhD be retracted as well?

    1. Not if, hypothetically speaking, the Dean of Science had been alerted to a practice of falsifying data before the PhD student submitted their thesis and took no action.

      Many Universities have an unofficial policy that scientific falsification is acceptable provided that sufficient records are kept to make the fraud undetectable. A PhD student can only adapt to the prevailing scientific culture of an institution.

      1. “Many Universities have an unofficial policy that scientific falsification is acceptable provided that sufficient records are kept to make the fraud undetectable. ”


      2. Well, only in my experience, Dr Mashey. Of course I was not so fortunate as to work in a field with such a stellar track record of successful modelling as climate science. I imagine that the dynamics of experimental science are somewhat different from those of fields that are more observational in nature.

        Retractions, to my mind, represent the weaker animals, the stragglers rounded up by the large carnivores. The more intelligent and systematic cheats don’t get caught and will never get caught – unless a system of anonymous double-blind replication were instituted.

        I have had the good fortune to study with some close attention the reactions of academics to whistle-blowing, and my observation has been that the higher-ranked the university, or the higher the rank of the academic within a university, the more hostile they will be to whistle-blowing. It’s only an anecdotal observation, but I think it is sound. The more junior an academic, or the more obscure the university, the more likely they are to sympathise with whistle-blowing.

        I don’t think PhD students should have to carry the can for following the dominant culture of their institution. Especially when the penalties that can be imposed for not cheating can be severe.

  23. RW

    As I said, only in my experience.

    I am sure you have successfully carried through a number of misconduct complaints in American universities – and my experiences are simply the exceptions that prove the rule.

    1. “Many Universities have an unofficial policy that scientific falsification is acceptable provided that sufficient records are kept to make the fraud undetectable.”

      Sorry, I should have been more explicit. I wasn’t commenting on the claim, just trying to understand the meaning, specifically “sufficient records are kept to make the fraud undetectable.”

      That just seemed odd wording, if you meant:
      1) A completely consistent set of falsified records was provided.
      2) All the originals were destroyed so there would be no contradictions.

      since both parts seem required, and if anything, 2) would seem more important.

      1. Dr Mashey,
        What works varies from field to field. So in biological/medical research this would be a fairly crude example:

        Suppose you were testing a compound that you believed had anti-cancer, i.e. anti-proliferative, activity. Now let’s say you had some promising initial results, or otherwise some investment in getting a positive outcome, but your core technique, growing cells in the presence of the agent and measuring cell number using a colorimetric assay, was showing no difference from the controls.

        All you need do is seed the wells which will be treated with the anti-cancer agent with fewer cells than those of the controls and then your colorimetric assay read-out will look perfect. You take the print-outs and paste them in your lab-book. The fraud is undetectable.

        Even if someone were later to repeat your experiments and find no effect, that in itself would not even come close to the level of proof required to demonstrate scientific misconduct.

        So neither 1) nor 2) is necessary in completely controlled experimental settings; you just have to undermine your own experiment.

        That is a pretty crude example; more often, subtler psychological mechanisms are at work, like avoiding the correct controls or eliding some factor that wouldn’t be obvious to a reviewer.
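        The arithmetic behind the seeding trick above can be sketched in a few lines. This is a hypothetical illustration with made-up numbers (the `readout` function, seeding counts, and growth factor are all assumptions), which simply assumes the colorimetric read-out is proportional to final cell number:

        ```python
        # Hypothetical model of the well-seeding trick described above.
        # Assumption: the colorimetric read-out is proportional to cell number.

        def readout(cells_seeded, growth_factor=4.0, signal_per_cell=1e-4):
            """Absorbance-style read-out proportional to the final cell count."""
            return cells_seeded * growth_factor * signal_per_cell

        # Honest experiment: the compound truly does nothing, both wells are
        # seeded equally, so the recorded read-outs match -> "no effect".
        control = readout(10_000)
        treated = readout(10_000)
        assert treated == control

        # Fraudulent version: seed the treated well with half as many cells.
        # The pasted print-outs alone now show an apparent 50% inhibition,
        # and nothing in the recorded read-outs reveals the unequal seeding.
        treated_faked = readout(5_000)
        apparent_inhibition = 1 - treated_faked / control  # 0.5
        ```

        The point of the sketch is that the lab-book record (the read-outs) is internally consistent; the manipulation lives entirely in a step that leaves no trace.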

      2. lgr:
        In the case you describe, this would seem to be purposefully-skewed experimental design, so that no records ever existed. That indeed would be very hard to prove.
        So, was your comment really an assertion that many universities were encouraging people to cheat, but only if they did so well enough that their tracks were covered? Or did you mean something else?

        It is nontrivial to ascertain widespread *actual* behavior regarding misconduct issues, because the outcomes of such cases are rarely public, with relatively few exceptions, as I found when hunting for examples while writing Strange Falsifications …. University results are hard to find, except for occasional high-profile cases like that of Ward Churchill.

        ORI of course publishes cases, in detail, naming names, but of course, they (correctly) do not publish alleged misconduct that was not upheld, so one sees clear cases, but the dividing line is less clear.

        NSF OIG publishes reports, such as 2012 or 2011, which are less specific, but still fairly useful.

        Unsurprisingly, from my experience, {universities, journals, publishers, editors} vary strongly in their handling of alleged problems, ranging from prompt, decisive action (where they point at a written process and say how long they expect it to take) to the other extreme: not even replying to repeated, well-documented complaints.

        Like other large organizations, behavior inside a university can also vary widely.

      3. Dr Mashey
        “So, was your comment really an assertion that many universities were encouraging people to cheat, but only if they did so well enough that their tracks were covered”

        Had I meant that many universities encourage people to cheat I would have said many universities encourage people to cheat. What I said was this:
        “Many Universities have an unofficial policy that scientific falsification is acceptable provided that sufficient records are kept to make the fraud undetectable. A PhD student can only adapt to the prevailing scientific culture of an institution.”

        You could call it skewing experimental design, but it is a rather ad hoc process that takes place in the scientist’s head; so while it is theoretically experimental design, that is a little grandiose a term. Another practice I have seen is spiking the compound you have purchased commercially as a positive control into experimental samples to generate false positives – again undetectable in lab records.

        Most, but not all, people here come from a biology/medical background, so the dynamics of this issue are different from what you are used to with climate science. Climate science is relatively new and expanding, so there are probably jobs and grants for everyone who wants them; the criticisms come from outsiders, generally with ideological motivations. Bioscience is still in a growth phase but is also large and mature, which means competition is fiercer. A climate scientist isn’t really all that dependent on the reliability of other people’s findings to conduct their research (comparatively speaking). Whereas in bioscience you will commonly base research strategies or grant proposals on findings in the literature, so that relying on an irreproducible publication can torpedo a whole year of a research project. If you like, climate scientists are operating in a happy collective hallucination, while bioscientists are suspiciously looking over their shoulders wondering if someone is stealing a march on them.

        As a point of comparison, I remember an interview with an African campaigner for greater transparency in African governance and finance. The interviewer asked him why corruption was so rampant there in comparison to Europe: was it something integral to African society? The interviewee just laughed and said that the thing that separated the two, the only thing, was that Europeans were too afraid of being caught. Ethics in finance, he said, is totally dependent on robust compliance and supervision mechanisms. Self-regulation doesn’t work in finance, and it won’t work in science either. And even in finance, every time we have a credit crunch, a number of shonks who managed to operate undetected in the boom times get shaken out.

        Bioscience has been in a bull market for all our lifetimes, and so a level of cheating, although annoying, can be accepted. Presumably the bull market will continue for a long time yet, but it is possible for a scientific discipline to meander off into irrelevancy, as some branches of physics appear to have done today, and then the importance of integrity in science might become more significant.

      4. Just to be clear: I’m not a climate scientist; my early academic background was physics. While I’m also a member of AAAS, AGU, and APS, I’ve spent most of my career as a computer scientist/engineer/manager/executive. In computing, the main cheating used to be on performance benchmarks, which is why a few of us started SPEC 20+ years ago.

        Of course, as a 10-year Bell Labs guy and later a Silicon Graphics Chief Scientist, I’ve naturally intersected various fields of science, and they all have their own issues. At least in physics, truly outrageous results tend to violate laws of physics pretty quickly, and people even publish surprising results while explicitly asking if anyone sees something wrong (like the faster-than-light neutrinos: a lot of people *hoped* the result was true, because it would have been interesting, but expected it was wrong).

        Biosciences seem much tougher.

  24. Let’s look at an example:

    BCL-2 is an inner mitochondrial-membrane protein that blocks programmed cell-death Nature 348: 334-336 (1990). Cited 3165 times.

    It was shown by three independent groups in 1992 (e.g. Ultrastructural localization of bcl-2 protein. J Histochem Cytochem 40: 1819-1825) that Bcl-2 is not an inner mitochondrial-membrane protein (the only original finding in the Nature paper), and most experts in the field know that it is not an inner mitochondrial protein, but the Nature paper has been cited more than 2,500 times after it was shown to be incorrect.

    By not linking a correction to this paper, Nature does a disservice to its readership.

    1. Errr… there’s Gotow et al. in Cell Death and Differentiation (7, 2000, 666-674), who also find it in the inner mitochondrial membrane, and so do Motoyama et al. (Biochem Biophys Res Comm 249, 1998, 628-636). So, where is it?

  25. The Pauling paper is part of a historical sequence of trial and error until the true (or a better) description of reality was found. The fact that Pauling was “vain” and wanted the glory of being first just points up that he was as human as the rest of us.

    Best to leave the written record alone and document the search for the structure of DNA in the books and material we use to train our students, showing how haste and vanity can lead to significant errors.
