Does focusing on wrongdoing in research feed mistrust of science?

There have been a number of thoughtful stories and opinion pieces on scientific fraud recently. There was Brian Deer in the Sunday Times of London last month. Paul Jump, at the Times Higher Education, later that month looked at the lessons of one particular case. Alok Jha, of the Guardian, took on the issue last week.

Yesterday, in a Knight Science Journalism Tracker post on a symposium on communicating science with integrity, Deborah Blum suggested science writers “need to be increasingly aware – and wary – of these issues in academic publishing,” while noting that they are a “minority report.” (And thanks to all of those pieces for the mentions and kind words about Retraction Watch.)

Some scientists, of course, agree, if the steady deluge of tips we get from working researchers is any indication. “We must be open about our mistakes,” wrote Jim Woodgett, of the Lunenfeld Research Institute in Toronto, in Nature earlier this month.

But not everyone thinks highlighting misconduct is such a good idea, and they’ve let us know publicly and privately. As we note in our most recent LabTimes column, which went live today:

The argument goes something like this: Science is self-correcting, so it takes care of its own mistakes. And scientific fraud is rare, so focusing on misconduct gives a distorted picture of research that will only give ammunition to critics, who want to cast doubt on subjects such as climate change and vaccine safety.

We have some answers to that argument, and a sense of what we think is really fueling mistrust of science, under the title “Stop Shooting the Messenger.” We hope you’ll give the column a read, and come back to let us know what you think.

28 thoughts on “Does focusing on wrongdoing in research feed mistrust of science?”

  1. It is certainly a catch-22. No one wants competitive federal grant dollars going to fraudulent researchers, and while rare, they do exist. And yet it is true that the more press the topic receives, the more the public distrusts science and the tighter the research dollars get.

  2. Ivan, such reporting can be important if it’s a) reputable and b) designed to promote positive change where change is needed. In fact, I’d venture a number of academic researchers would feel the same way. After all, keeping/promoting strict research standards is beneficial to researchers upholding those very standards. But conversely, some articles that border on sensational can be dangerous and prey on existing fears (that are at times unfounded) around the clinical trials process.

    1. In reply to Justin September 17, 2012 at 3:07 pm

      “But conversely, some articles that border on sensational can be dangerous and prey on existing fears (that are at times unfounded) around the clinical trials process”

      Do you have any particular articles in mind? Otherwise it is a bit of a broad brush.

    2. It is not only the “process” of the clinical trial that is the problem. Many clinical trials are set up to demonstrate a point (as in, “this antidepressant is better than placebo, so the FDA will approve it”) and are thus conducted so as to obtain a positive outcome. Didn’t get one? Don’t fret about it! Throw the trial away, and another one will come along that will be useful. Industry in its various expressions (from manufacturing toys to determining parameters of metal fatigue) forces results that will put whatever device it manufactures onto the market. But in clinical trials you engage a lot of professionals who are supposed to be the safeguards of public health, yet who in many instances distort the data (“torture the data,” as they like to say) to the point of fraud. The product makes it to market and reaps the benefits, and a few years later the company pays a fraction of those benefits as a penalty for fraudulently promoting an unsafe product. And the academic professionals who conducted the trials have been handsomely paid, and then turn around and teach the residents without a hint of shame.

  3. Uncovering and reporting on scientific fraud can only increase society’s trust in science. The problem is society’s poor scientific education, and the poor job scientists do of educating society about science. The denial of global warming is just one of the many ways in which science is distorted to the point of fraudulent behavior. Also, fraud in science is not produced only by individual psychopaths or anti-socials. There is the fraudulent suppression of negative data by the pharmaceutical industry and the manufacturers of medical devices. There are large groups of professionals who are paid by industry to lie about the products on the market, producing a fraudulent pseudo-science. In my professional life I have seen every instance of fraud. I firmly believe that the task of scientists is to save science, denouncing fraud at every turn and expanding the concept of fraud beyond the obvious distortion of data.

  4. If we lived in a scientifically literate society, there would be no problem with airing the dirty laundry in public. As it is, we don’t live in such a society, and it is much easier for the public to read about retractions than to understand the actual science involved, so a distorted picture of science in the minds of many is the inevitable result.

    The solution, obviously, is not to stop reporting on misconduct but to convert society from a scientifically illiterate one into a scientifically literate one. But until that happens, the impact will be negative.

    It is worth recalling why scientists cheat – because competition for positions is cutthroat and their career survival is at stake. That’s even more of a problem in times of flat budgets, tight funding and reduced job opportunities. Successfully casting them as cheaters in the mind of the public is much more likely to result in further diminished support for science, even less funding and as a consequence even more cutthroat competition and more cheating.

    1. Allow me to add another reason to explain why some scientists cheat: they are cheaters. You have honest humans and you have dishonest humans in all walks of life. These cheaters would cheat regardless of whether they get a lot of funding or no funding. They’d cheat if they worked in another field. They are cheaters, period. More money will not reduce their cheating, and might even attract the cheaters.

    2. I couldn’t agree more! I hope this is achievable, but I feel that existing societal and political constraints impede this at every turn. How can we be most effective in this endeavor? Outreach?

  5. According to an NIH survey, 35% of researchers self-reported fraud. If one ran a store with a 35% theft rate, it wouldn’t be around for long. The basic problem is that, from grant writing to publication, the whole system is based upon trust. So when someone is found to be untrustworthy, they should be punished severely. Remember that China executed its equivalent of the head of the FDA. We don’t need that extreme, but we are too soft on all white-collar crime, and we all pay the price!

    1. Are you talking about this study? http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1483900/
      The 35% figure is not self-reporting of fraud, but self-reporting of a broader category of misbehaviors, of which the vast majority are:
      * “Circumventing certain minor aspects of human-subjects requirements (e.g. related to informed consent, confidentiality, etc.)”
      * “Overlooking others’ use of flawed data or questionable interpretation of data”
      * “Changing the design, methodology or results of a study in response to pressure from a funding source”

      The last in particular is dominant, yet it is unclear what it even means, given that funding sources often legitimately change their requests in response to changing context, e.g.: “Don’t use method X, group Y just showed it’s unreliable” or “I think you should also test for Z, because we’re worried about Z now also.”

      Actual data falsification is self-reported at a maximum of 0.5%.

      Your basic point still stands (0.5% is still far too many for my taste!), but please don’t fudge the data when you’re trying to make your case.

  6. I do not see the negative side of reporting on cheaters (because that is what people who do fraudulent work are). If people know that fraud of any sort will not be tolerated in science, then they can and will feel much more confident and secure in what we tell them. People expect scientists to be smart, creative, and honest. When we show that we expect the same of ourselves and our colleagues, again they are more likely to feel positive about our work and our profession, and also support removing tenure from cheaters, something we should all support. When we let cheaters slide, we do not deserve to be trusted, since we are aiding and abetting bad behavior.

  7. I think RetractionWatch performs a very useful function.

    In my experience, journals and universities show a wide range of credibility in the face of plausible misconduct allegations. The issue is not really that scientists cheat – science has a deserved reputation for being (mostly) self-policing, especially compared to many other entities that routinely propagate untruths.

    At the extremes are behaviors that either build confidence in institutions or wreck it. In the middle are some of the wishy-washy retraction notices.

    GOOD
    Problem reported to (a) journal (or (b) university); they follow their rules and do an inquiry and, as appropriate, an investigation. If need be, the paper is retracted with a clear notice, and if it is a misconduct case, the inquiry/investigation are handled properly and, if appropriate, even made public.

    BAD
    Well-documented problem reported to a journal, which (c) essentially ignores it, without even acknowledging receipt, or which (d) lets authors revise papers with no mention of plagiarism, even when the authors were actually the editors of the journal. (e) A university manages to ignore about 80 pages of well-documented plagiarism and then (f) denigrates one of the complainants.

    See the linked PDF, which includes examples of all of these:

    (a) p.19: Elsevier did fine, even over resistance from its Editor-in-Chief and really strong resistance from senior author, who wanted to just revise a plagiarized paper. The retraction notice was clear.

    (b) p.68 UC Boulder Ward Churchill case (complaint upheld, under complex circumstances)
    p.69 Penn State Michael Mann case: inchoate complaint carefully evaluated, rejected after due process
    p.9 Rice U inquiry on David Scott: cleared in 9 days.
    I’ve been involved with several other examples (not public) where other universities responded quickly and appropriately to complaints, even if the results were not made public (and I agreed with those decisions).

    (c) p.30 Springer and editors, ignored

    (d) pp.30-34 Wiley. The editors did disappear from the masthead, but with no public comment by Wiley, and the comments to complainants were basically part of a stonewall.

    (e) pp.16-32 George Mason University. Everything was ignored or declared not a problem, except the paper Elsevier had already retracted, and it took almost 2 years to reach a conclusion on a few pages of text.

    (f) pp.35-40 GMU treatment of complainant Ray Bradley might even rise to retaliation of sorts by the university itself. That is not exactly an encouragement for internal whistleblowers.

    As it is in corporations or government, the real issue is how *institutions* deal with the problems that arise. The individuals come and go, but the institutions remain.

  8. The only reason for circling the wagons is if there’s something that has to be kept hidden in the middle of the circle. Unfortunately, we live in the world so eerily foretold by President Eisenhower:

    […] The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present — and is gravely to be regarded.

    Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.[…]

  9. It’s ironic that some scientists don’t realize what a valuable service Retraction Watch and others are performing for them. The best way to lose public trust is to bury relatively small sins until there is a scandal so big that it cannot be suppressed and rivets the public imagination.

    A second important service that Retraction Watch and other investigative science journalists are performing is providing continuous examples of how good and intelligent people can veer outside of ethical and proper science practices. These specific examples are much clearer and more effective than an occasional abstract presentation on science ethics at a yearly conference.

    Support and offer constructive criticism and information to your watchdogs; don’t ask them to be quiet.

  10. Oh yes, I think making scientific fraud public does increase mistrust in science, and I think that is all the more positive. Science is not going to stop existing, so it has to earn public trust through feedback, and this can only make it better: the frauds were there anyway, but new countermeasures would never be applied if they were not exposed.

  11. Science fraud is rare? It seems to me that some rather ‘excellent’ examples have been reported here, from major laboratories reporting major discoveries (Soto on prions, the postdoc of Amy Wagers on stem cells, to name just two). And these have been found only because the fraud was done in a sloppy manner. Really, reusing the same figure or duplicating Western blot bands is as careless as dropping your ID at the scene of a robbery.

    If nothing else, these events suggest that a lot more fraud might be happening, which remains undetected because the perpetrators are just a little more careful and hide their tracks well.

    I don’t see how sweeping all of this under the carpet helps anyone. It would not help honest scientists, who are left behind struggling to obtain and publish solid data. Or worse, they might start projects based on published data that is just plain crap (it happened to me once, and I wasted six months on a paper that then turned out to be made up). And it wouldn’t help the public, whose money goes to waste on made-up science.

    Please continue reporting. This website is a great resource.

  12. “Does focusing on wrongdoing in research feed mistrust of science?”
    Does handing out speeding tickets feed the idea that people cannot drive?
    Does reporting rape or child molestation feed the idea that an area is dangerous?

    The answer to those questions is yes, but we must report those things for all the same reasons.

  13. “Science is self-correcting, so it takes care of its own mistakes. And scientific fraud is rare, so focusing on misconduct gives a distorted picture of research that will only give ammunition to critics”
    In other words they say:
    Leave us alone, trust us for everything we do: research, publishing, using/spending your money, investigating the very few bad apples, and continue to give us more public money.
    And, please, stop showing our dirty laundry to the public, as this erodes our credibility and affects negatively our claims for more public money!

    To this I say that:
    They (academia) have all the information/knowledge
    They make all decisions
    They get all resources
    They are self-regulated
    They are self-investigated
    They insist on NO transparency
    They are NOT ACCOUNTABLE to anyone for anything
    And they want infinite public trust
    Even communist dictators did not enjoy this!

    To the “good guys” in academia (still majority, we assume):
    If you want to be respected and trusted, please, get rid of the bad apples in your lines.
    Closing your eyes/ears (i.e. tolerance) or (even worse) cover up in cases of obvious and straightforward misconduct will only erode the public trust in all of you.

    Therefore, IT’S TIME FOR CHANGE and RW is the positive alternative!

  14. I think more scientists should start reporting on journalists making up news!
    Oh wait… Most scientists are actually too busy trying to make sense of reality.

    1. I think that denigrating one profession to purportedly save another is the wrong way to go. Honest journalists (and the majority fit that description) are also trying to “make sense of reality.” One can see science not only as a way of “making sense of reality” but also as a constant educational battle against “common sense” – a “sense” that “explains” reality in simple terms that end up denying evolution and supporting faith (faith as defined by the Bible: “the belief in things unseen”). The battle that scientists should wage, besides making sense of what appears to be nonsense (contradicting “common sense”), is for honesty in science and against the fraud perpetrated by Big Pharma on society at large – not against journalists.

  15. Well, I don’t know much about the “hard sciences,” but I suspect there is a lot of fraud going on in the social science disciplines. Strong (and interesting) hypothesis; cute design; strong results; and said results supported by 3 different experiments conducted by the authors (probably at the request of reviewers)?

    When I first realized this (that what I am reading may be based on modified, if not totally made-up, data), I was very angry (very much as a member of the general public would be). But the more I realized how widespread this problem is, AND MOST IMPORTANTLY, what is causing researchers to do this (e.g., No (strong) results, No publication; Publish or Perish; No good pilot study results, No grant approval), the more sympathetic I have become with the fraudsters.

    Come on, they are just playing the game, against unhealthy expectations for research.

    So sometimes when I hear people say: “Umm, this study is interesting, but the results are not very strong… Or, we would like to see another experiment conducted that would corroborate the results”, I just want to scream “Shut the fuck up! Why the fuck do you want STRONG results? Do you know what such stupid expectations can lead to?”

    So, in a nutshell, my current position is that I am largely sympathetic with the “academic fraudsters”. What does need to change are unrealistic expectations.

    1. “I am largely sympathetic with the “academic fraudsters”. What does need to change are unrealistic expectations”

      Are you sympathetic with the rapists, because the victims had “unrealistic expectations” about it?!?
      SHAME ON YOU!!!

      1. littlegreyrabbit: either you’re trying to make a joke and failing horribly, or you’re just a horrible human being.

      2. To littlegreyrabbit and Reviewer #3: Joke, what joke? Horrible human being? What human being? Mr. Reviewer #3, my humble opinion is that you should not waste time commenting on the idiocies uttered by rabbits posing as human beings. They are incapable of any thought, intelligent or otherwise. Their IQ fluctuates with the room temperature. Have you ever heard of poikilothermy? Poikilo=Varied, Thermos=Temperature. Toads are poikilothermic; their temperature varies with the exterior temperature. Well, greyrabbits display poikilognosia: their IQ varies with room temperature but can’t go above 69; that is, never better than “moron.”

  16. In reply to omnologos, September 24, 2012 at 8:32 am

    RE: Crime

    Data fabrication is NO different than money fabrication.
    Publication duplication is NO different than money duplication.
    Plagiarism is NO different than robbery.
    All of the above cases are about personal gain by deception.
    The only difference is that the former are REPEATED and PERMANENT offences until papers are retracted.

    While the latter are universally acknowledged as crimes (blue-collar crime, white-collar crime), WHY aren’t the former?

    Herewith I suggest a new term.
    What about “Silk-collar” crime for academic fraud?
