Should science put up with sloppiness?

That’s the question we pose in our newest column in LabTimes, based on some recent cases we’ve covered:

The implication seems to be that as long as researchers can pass off their mistakes as sloppiness, rather than intentional misconduct, they should be forgiven and allowed to carry on with their work. We’re with that logic, to a point; after all, we’ve argued before that due process is much too important, no matter how apparently damning the evidence is. And as long as corrections and retraction notices are detailed, telling the whole story, science and the public are served.

But we know that’s not the case for many notices and corrections, and tolerating sloppiness isn’t such a good idea. We’ve seen “mega-corrections” that make us wonder why particular papers weren’t just retracted – just as some of the “it was just sloppiness” explanations make us wonder exactly where scientists draw the distinction between clumsiness and misconduct.

We welcome your thoughts.

18 thoughts on “Should science put up with sloppiness?”

  1. Without more selection pressure post-publication, this may get worse. PubPeer now has comments on over 1,000 papers (according to a recent PubPeer tweet), and some of these may relate to sloppiness. However, at present there are no negative consequences. The same goes in universities, where poor supervision may not be dealt with effectively.

    There is also, as you note, no demarcation between sloppiness and misconduct. Sloppiness/mistake/error are used as euphemisms for misconduct, so until we start to be precise in our use of language, this will remain very muddy water.

    1. I agree totally. Phrases like “errors were inadvertently introduced during figure assembly” appear often in correction notices—almost as frequently as the phrase “these errors do not alter the main conclusions of the paper”. I believe that it is too easy for scientists to hide behind this excuse when fraudulent image manipulation is detected. As I’ve said before, sloppiness should be fairly evenly distributed between those mistakes that strengthen the story, those that are neutral, and those that decrease the impact of the work. Only fraudulent actions will universally help a paper make its point. As humans, we all have the capacity to make mistakes, so we need to be flexible in how we deal with issues of sloppiness. In my opinion, that in turn must be balanced by a willingness to be less tolerant of fraud. I think we also need to specifically mentor students (and lab heads!) on the fact that it is just as important to present your data in a rigorous and honest manner as it is to design your experiments and collect your data in the first place.

      1. “As I’ve said before, sloppiness should be fairly evenly distributed between those mistakes that strengthen the story, those that are neutral, and those that decrease the impact of the work.”

        Initially yes, but I disagree this would be true of published papers. Mistakes that decrease the impact of the work will tend to reduce the chance for the work to be published (or even submitted in some cases). Therefore one would expect the distribution of errors in published works to be biased towards those errors that strengthen the story.

  2. Why would sloppiness be allowed in any research papers? In civil and criminal law, cases are thrown out for such things: transposed dates (e.g., 1998 that should have read 1989), names (“John Doeis” that should have read “John Doe is”), an address that reads “6904” but got typed “6940” – the list could go on. In civil law, if a contract for a loan is written with the numbers typed wrong – e.g., a loan for “54,981 dollars” typed as “54,198 dollars” – guess who makes up the difference (that is why the number is now also written out as “fifty-four thousand nine hundred and eighty-one”). Why not in the world of research?

  3. I think the priority has to be getting the scientific record right. Reprimands and forgiveness are separate issues for institutions to deal with.

    COPE says:

    Journal editors should consider retracting a publication if:
    • they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)

    Journal editors should consider issuing a correction if:
    • a small portion of an otherwise reliable publication proves to be misleading (especially because of honest error)

    In other words, it is the scientific impact of the problems that should determine whether a paper is corrected or retracted, not the mechanism by which they occurred (or by which they can be proven to have occurred – it is often impossible to be sure in such cases).

    Nature’s own commentary from 2010 stated ‘… if most of the figures are problematic, we will strongly urge the authors to retract the paper, even if they were cleared of misconduct and even if the paper’s main conclusions have been verified independently by other labs. The logic is that the published paper did not accurately reflect the data as they were collected.’

    In reality there is often a massive fudge. The recent McGill case was a classic example; they managed to produce corrections despite two figures in the Nature paper being “intentionally contrived and falsified.” One of those figures was duplicated in a PNAS paper, which also contained an image in which some proteins were incorrectly labeled.

    Another less publicised example involved a Panel Report reading as follows:

    ‘Given that the Panel does not believe that the allegation of research misconduct has been proved…, it is felt that retraction of the Paper would be unwarranted. However, the Panel does believe that there is reason to doubt the reliability of the Method, and consequently to doubt the validity of some of the results presented in the paper, and researchers ought to be aware of these concerns.’

    End result – another megacorrection in a Nature family paper likely to be widely ignored. Nature, sadly, don’t practise what they preach and hence share some of the responsibility for the terribly low standard that exists for other journals to follow. Bottom line – the journals hate to produce retractions – it makes the journal look bad in the short-term. But in the long-term their reputation suffers the more this practice is allowed.

    1. Perhaps I am way off base, but I am going to offer a view more in line with Scott Allen’s post and suggest that the scientific community should consider some type of action in these types of cases. Sure, anyone can make a mistake here or there, and God knows I have made a few (albeit minor ones) in my own research. But when a lab starts to show a pattern of sloppiness, perhaps authors who follow that lab’s research should consider treating the situation the way we might treat an automobile repairman who showed an analogous pattern of sloppiness in repairs made to our own automobiles. In the latter case, wouldn’t we eventually change mechanics and caution our friends about the poor quality of that individual’s work? Perhaps we can do something similar in science, such as cautioning our readers about a lab’s reliability without going out of our way to do so – doing it in a manner that simply shows the facts. Thus, if there is a pattern of sloppiness in a particular lab whose findings are relevant to our own work, we could alert our readers as follows:

      “W’s research shows that x causes y. However, those results were later corrected to indicate an effect weaker than originally reported (citation). In addition, similar problems from that lab have been reported twice, in two different sets of experiments (citation and citation).”

      I believe that an argument can even be made that, if we were to become aware of a major problem of sloppiness in a lab, we have a professional obligation to make this matter known to those in the relevant community. A mechanic that tends to make mistakes when repairing/replacing the brakes of an auto is conceivably putting people’s lives at risk. So are scientists and engineers who work on analogous research areas.

      Caveat emptor!

      1. How best can science fraud be dealt with, we ask ourselves… What would make science-fraud hunters happy?

        As Conan the Barbarian once said, quoting Temujin Khan, of what is best in life “…..crush your enemies, to see them fall at your feet — to take their horses and goods and hear the lamentation of their women. That is best….”

        I jest, of course.

        The problem is not just one of the science-fraudsters – it is one of those who aid in covering up science fraud. If they are dealt with, the fraud disappears very quickly.

        1. “The problem is not just one of the science-fraudsters – it is one of those who aid in covering up science fraud. If they are dealt with, the fraud disappears very quickly.”
          YES, YES, YES! Couldn’t agree more!
          If, in cases of misconduct, the Editors/Institutions/Publishers/COPE were doing what they declare they’ll do (in their nice-looking Frameworks/Guidelines/etc.), the fraud would disappear. However, in most cases of misconduct, what these parties do is not merely ignore the problem but very actively cover it up. Therefore, they should be (and one day they will be) held accountable. A Transparency Index, which shows whether these parties Do_the_Right_Thing in cases of misconduct, could fix the mess. A good example of Doing_the_Right_Thing is the case of Milena Penkowa and the University of Copenhagen, where, according to Marco (October 6 @ 3:08 pm), “The university paid back around 2.1 million DKK (about 380,000 dollar), Penkowa herself returned 250,000 DKK.”
          Brilliant example for other institutions/countries to follow!

      2. Here’s a question in answer to your question: Should hospitals put up with sloppy surgeons? Same potential consequences.

      3. Miguel Roig said it better than I could have. I have long said that making a mistake is not unethical, but there is a point when anyone who makes mistakes over and over again – especially the same kind of mistakes – is incompetent at best and should not be involved in that kind of research. The difficult part is determining when that point is reached and how hard colleagues should work to bring up the level of competence before giving up.

  4. As long as major decisions are made on the basis of uncensored research findings and scientists’ recommendations, there should be no place for sloppiness in science.

  5. Sloppiness apparently is par for the course for many researchers. Problems intensify, however, when researchers devote themselves to pretending that obvious errors do not exist, and refuse to correct the scientific record. I’m talking about serious and obvious factual errors.

    Such an episode becomes all the more unsettling when such misrepresentations turn out to be supportive of unusual business revenues, including stamping particular brands of sugar and sugary products as Healthy, while at the same time promoting obviously false information in an important public debate, in this case on the origins of obesity – together with type 2 diabetes, the greatest public-health challenge of our times.

    Readers, I am arguing near and far for the correction or retraction of the University of Sydney’s extraordinarily faulty Australian Paradox paper, (self-)published in the MDPI journal Nutrients while the world-famous lead author was serving as the “Guest Editor”.

    Please be very critical of me if you spot any serious factual errors in my analysis (you won’t). In particular, let me know if the data in any one of Figures 1, 2, 3, 4, 4a, or 5 (all cut-and-paste reproductions of the authors’ own published charts) trend down, not up, as the authors claim!

  6. There are all shades of grey between honest mistakes, sloppiness and misconduct. Stephen Lisberger made some very valid points in this context recently –
    “Finally, we should talk about misconduct more often and more deeply. Subconscious and conscious misconduct needs to be discussed in lab meetings, faculty meetings, ethics courses, and national meetings. By putting fraud under the light and developing a strong structure for its detection, we can reduce it dramatically, even if we will never be able to eliminate it altogether. And we need to remember that although fraud may be more prevalent than we think, most scientists conduct their research irreproachably. As always, we need to be careful not to assume fraud has occurred just because there’s been an accusation. Investigation often reveals that an error, a misunderstanding, or nothing at all has occurred.”

    1. “There are all shades of grey between honest mistakes, sloppiness and misconduct. ”

      Speaking of shades of grey, I have on occasion stuck different gels and blots together when preparing group or department talks, generally because one of the lanes fell over and I didn’t have time to redo the experiment before Monday morning. When I did, it always stuck out a mile. Virtually all the photoshopping I have seen here has been artfully done to match backgrounds between different exposures.

      This is not sloppiness, this is art.

  7. I don’t know how to answer this except “I know it when I see it.” At some point, you have to conclude that, even if the problems with the manuscript were legitimately the result of sloppiness, you can’t trust the correction. If they are that sloppy in preparing figures, how do you know the replacement figures are from the right experiments, or that the experiments were even carried out as claimed?

    1. StrongDreams,

      In terms of corrections, my favourite was when the underperforming authors in the slowly inflating Australian Paradox scandal claimed that my correct critique of their awesomely faulty paper was incorrect… incorrect, they said, because cars, not humans, had been consuming a big chunk of the missing sugar via ethanol production. And that would have been a rather strong rebuttal. But, awkwardly, sugar is not used in ethanol production in Australia.

      Keeping score, I claimed four serious errors, and the unreliable authors stepped up with a fifth! Yes, these are “scientists”! Indeed, one is a world-famous scientist. So my question is: when does persistent sloppiness become simple incompetence or worse?

      Some of the University of Sydney’s shenanigans were documented at the time by widely respected Australian journalist Michael Pascoe:

      Readers, if you operated a business that exists in part to charge food companies up to $6000 a pop to stamp particular brands of sugar and sugary products as Healthy, would you feel comfortable self-publishing and then defending as flawless a spectacularly faulty paper that seeks to (falsely) exonerate sugar as a key driver of obesity?

  8. Re: Hotsy Potsy
    I actually think there is an association between sloppy / poor science and fraud. Producing fraudulent conclusions is much easier if you have a poor system with a dreadful signal:noise ratio. That makes it much easier to drop a few observations here (those experiments didn’t work…) and ignore a few others there (well, something funny happened in that tube, I’m sure…) and hey presto, you have the dose-response relationship you want and a high-impact paper that should propel you to your next grant.

    Robust systems with good signal:noise, accompanied by fixed criteria and controls for assessing the reliability of experiments, make it much harder to start on the slippery road to outright fraud.
