“Research misconduct accounts for a small percentage of total funding”: Study

How much money does scientific fraud waste?

That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed.

Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang looked at cases in which the Office of Research Integrity (ORI) had determined there was misconduct in particular papers. In their study, published today in eLife:

We found that papers retracted due to misconduct accounted for approximately $58 million in direct funding by the NIH between 1992 and 2012, less than 1% of the NIH budget over this period. Each of these articles accounted for a mean of $392,582 in direct costs (SD $423,256). Researchers experienced a median 91.8% decrease in publication output and large declines in funding after censure by the ORI.
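Those headline figures can be sanity-checked with a quick back-of-the-envelope calculation (ours, not the study's), assuming the roughly $58 million total and the $392,582 per-article mean describe the same set of papers:

```python
# Back-of-the-envelope check (ours, not the study's): if the ~$58M total
# and the $392,582 per-article mean cover the same set of papers, the
# implied number of costed articles follows directly.

total_direct_costs = 58_000_000    # USD, NIH direct funding, 1992-2012 (approximate)
mean_cost_per_article = 392_582    # USD, mean direct cost per retracted article

implied_articles = total_direct_costs / mean_cost_per_article
print(f"Implied articles with attributable costs: {implied_articles:.0f}")
# -> roughly 148; an SD ($423,256) larger than the mean suggests a
#    right-skewed distribution, i.e., a few expensive projects drive
#    much of the total.
```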

One percent is not far off the 2% of scientists who have admitted to committing misconduct at least once, although that may be comparing apples to oranges, since how often offenders commit misconduct would matter a great deal. Still, Fang tells Retraction Watch the figure “turned out to be a smaller sum than we expected”:

As discussed in the paper, there are a number of caveats, and one can certainly come up with reasons that the figure might be either an over- or an underestimation.  But when one compares the total with the estimated $31-60 billion out of a total of $206 billion spent on contracts and grants to support U.S. operations in Iraq and Afghanistan that were lost to fraud and waste in FY2011 alone, the funds wasted on fraudulent research seem modest relative to some other government agencies and programs.

That’s the kind of comparison we hear from a lot of scientists who seem to want to argue fraud is very limited, so that members of Congress won’t find excuses to cut the NIH budget. But it’s a mistake to minimize the impact of fraud, says Fang:

One could say that any dollar spent on fraudulent research is a dollar too many. And the direct costs are arguably among the least significant costs of research misconduct. In some cases patients have been harmed by flawed or fraudulent research – think of Wakefield, Boldt and Potti. The damage to the careers of researchers (some of whom have done nothing wrong themselves), the institutional costs of investigating suspected fraud, the misdirection of public policy, and the misleading of other scientists who pursue false leads are other important costs. It is hard to place a specific monetary value on these costs, just as it is hard to quantify the damage to public confidence in science that results from research fraud, but these costs are nevertheless important.

For example, one paper found that a single investigation cost an institution more than $500,000, exceeding the direct costs of the research under investigation. Another found that an ORI case could cost as much as $2 million. Fang again:

And what of the research that didn’t get done because a fraudster got the funds instead? I think of this in the context of the Dong-Pyou Han case. Millions of dollars that went to his lab might have done a lot more good if they had been spent elsewhere. We have to acknowledge that dishonest research is very costly, and the numbers that Andrew, Grant, Arturo and I calculated in this study are just one small part of those costs.

The researchers also looked at the effects of ORI findings on individual researchers. The findings were mostly, but not completely, what you might expect:

Censure by the ORI usually results in a severe decrease in productivity, in many cases causing a permanent cessation of publication. However, the exceptions are instructive. Of 35 faculty ORI cases analyzed in the Web of Knowledge, five actually published more articles per year after an ORI report than before: Raphael Stricker (22), Gerald Leisman (23), Oscar Rosales (24), Ruth Lupu (25), and Alan Landay (26).
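For readers curious how a decline figure like the 91.8% median might be computed, here is a minimal sketch, assuming a simple comparison of mean annual publication counts in matched windows before and after an ORI report; the study’s actual windows and Web of Knowledge data handling may differ, and the function and numbers below are illustrative only:

```python
def percent_decline(pre_counts, post_counts):
    """Percent decline in mean annual publications between matched
    windows before and after an ORI report."""
    pre_rate = sum(pre_counts) / len(pre_counts)
    post_rate = sum(post_counts) / len(post_counts)
    if pre_rate == 0:
        return 0.0  # no pre-report output to decline from
    return 100 * (pre_rate - post_rate) / pre_rate

# Hypothetical investigator: 5, 7, 6 papers in the three years before
# the report and 1, 1, 0 in the three years after.
print(round(percent_decline([5, 7, 6], [1, 1, 0]), 1))  # 88.9
```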

We asked Daniele Fanelli, who has studied misconduct, for his take:

I was maybe a little surprised, but I think that the authors of the study were more surprised than me. They rather openly expected costs to be higher than what they found. The fact that these scientists, showing integrity, reported a somewhat “disappointing” result makes the finding all the more believable.

The question of collateral damage, by which I mean the added costs caused by other research being misled, is controversial. It still has to be conclusively shown, in other words, that much research actually goes to waste directly because of fabricated findings. Waste is everywhere in science, but the role played by fraud in generating it is far from established and is likely to be minor.

These results send an important message: we must be careful not to over-dramatise the issue of misconduct. It remains an important problem to study, monitor, popularize and, certainly, to try to prevent. But we should not get too carried away by the “hotness” of this topic, or let it absorb too much of the attention of policymakers or educators.

19 thoughts on ““Research misconduct accounts for a small percentage of total funding”: Study”

  1. And a related issue, as I have mentioned before: the amount of a grant that was directly involved in the research misconduct (the respondent’s salary, supplies, publication costs, etc.) is a very small fraction of the total grant. That is why, when NIH tries to recover grant funds from institutions, it has to consult with ORI on the scope of the misconduct and decide for itself whether the cost to taxpayers and the NIH budget of pursuing the recovery would be worth the possible recovery.

    I recall that in follow-up to one ORI case, the institution found (even though the respondent had cited an NIH grant in the paper’s acknowledgments) that there actually had been no expenditure of NIH funds on that project. So no recovery could have been pursued.

    1. In principle this seems like a good idea, until you do true-cost economics when determining whether recovery of funds is “worth it”. Recovering funds sends a stronger message that misconduct will not be tolerated, and so has a deterrent effect that multiplies the value of the money actually recovered. Conversely, not recovering funds sends the message that misconduct is not punished hard, so more fraud occurs as a result. Much as I hate Rudy Giuliani, broken windows probably applies here – people will do what they think they can get away with.

      As Dave Fernig articulates in his comment here (and as I’ve heard independently from people close to ORI), the ORI only goes after really solid cases that involve NIH-funded work and are easy to bring to a conclusion. Add in the collateral costs to the scientific community, and a multiple of at least 10 on the true costs seems like a more reasonable number.

      The comparison with war spending is an interesting one, and yes, unfortunately, waste in the military is rampant. The problem is that the Pentagon is like a third rail in politics, and it has far more big-business lobbying on its side. I don’t see the likes of VWR, BioRad and ThermoFisher spending money like Raytheon or Halliburton to protect their pet government-funded customers. The Senate and Congress will go after the low-hanging fruit when it comes to saving money, and unfortunately the big defense contractors long ago helped the military wasters climb very high up the tree, out of harm’s way.

  2. Almost certainly, those caught red-handed in misconduct are but a fraction of all who commit it. But more importantly, such analysis doesn’t even look at sheer sloppiness or poorly done (or poorly interpreted) studies… those that don’t reach the level of outright “misconduct,” but do reach a level where the worth of the study is hugely compromised.

  3. Is this a serious issue? Absence of evidence is not evidence of absence.

    What of the many grants containing honest, not-so-sexy data that were turned down when sexier grants (containing science fraud) were approved?

    “These results send an important message: we must be careful not to over-dramatise the issue of misconduct.” I have been in several laboratories, and every one of them condoned fiddling with datasets in some way or other – by students, their supervisors, doctors, professors, heads of departments and deans. Several complaints, no action. Unseen and unreported “misconduct” is integral to the academic environment. There are a few good eggs, but they are few and far between.

    It is a real, ever-growing, increasingly serious problem.

    “We found that papers retracted due to misconduct … less than 1% of the NIH budget” – hmmmm. What about the now all-too-common “corrections”?

    It would be most interesting to account for all papers with “corrections” and the grant income behind them.

  4. We should note:

    (1) The very small fraction of misconduct in the US that goes to the ORI.

    (2) Misconduct that is beyond the time limit of ORI.

    (3) Misconduct elsewhere on the planet.

    In September 2013 I calculated that the number of retractions made for the “right” reason (under the category here of ‘doing the right thing’) was less than 5% of retractions.
    http://ferniglab.wordpress.com/2013/09/16/getting-science-right-side-up/

    At the time, Retraction Watch had covered some 1,200 retractions. I think the costs calculated by Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang can safely be multiplied by 10, before we even begin to count the cost to the rest of the community trying to replicate the work, the human cost of misconduct to scientists in training in the relevant labs, and, for clinically related work, the cost to patients.

    1. So, essentially, we are observing what could be loosely described as the ‘too big to fail’ aspect of the research world?

      It appears that prominent researchers who build careers upon fraudulent data will then proceed to mentor younger hopefuls whose degrees are awarded based on this research, thus entrenching them in the cover-up cycle, and so forth. The phenomenon then becomes contagious as the papers spread to other labs, with the potential to become an epidemic.

      Further, companies, especially pharmaceutical firms, that invest large amounts of money in potential life-extending or life-enhancing drugs based on these unsound numbers from NIH research will then expect a return on investment (i.e., positive results). Through questionable clinical trials and possibly drug releases, this ends up hurting the very taxpayers who gave their money toward NIH funding in the first place.

      If you pull out one block from the tower, the whole structure may come tumbling down.

      The power of prevention and maintenance should never be overlooked.

  5. There is an infographic on the iThenticate website that is worth looking at. It estimates the cost of a single misconduct case in the US at $525,000 and the total cost of misconduct investigations in the US in 2010 at $110,000,000. In my own case, I paid over $200,000 out of pocket for a qui tam case that I lost. The role played by the ORI was an embarrassment, and the courts are not interested in wrongdoing, only in applying the letter of the law. See my website (www.helenezhill.com) and the article in Nature (http://www.nature.com/news/research-ethics-3-ways-to-blow-the-whistle-1.14226).

  6. Is it any surprise that the ten or so cases of misconduct identified by ORI each year correspond to a tiny fraction of NIH funding? The NIH funds thousands of grants.

    1. People disagree with me? They would have expected this to be a significant fraction of NIH funding? I’m puzzled.

  7. Assessing impact of scientific fraud based on successful ORI complaints is like trying to measure impact of sexual violence based on successful convictions.

    In both cases most incidents are not reported, and only a small portion of those reported result in successful prosecutions. Pangloss in, Pangloss out.

    1. Sexual violence and other crimes are measured in the United States through the National Crime Victimization Survey (NCVS). My colleague Douglas Adams at the University of Arkansas and I have published an article on what research could learn from criminology when it comes to measuring and reducing research misconduct and other bad behaviors:

      Research Misconduct and Crime: Lessons from Criminal Science on Preventing Misconduct and Promoting Integrity
      Douglas Adams, Kenneth D. Pimple
      Accountability in Research
      Vol. 12, Iss. 3, 2005
      http://www.tandfonline.com/doi/full/10.1080/08989620500217495 (paywall)

      We also responded to ORI’s “Request for Information to solicit Suggested Research Topics for Future Initiatives for the Research on Research Integrity Program” in 2011. This one is free to everyone: https://nationalethicscenter.org/groups/trewde/blog/2012/01/our-suggestions-for-the-research-on-research-integrity-program—pimple-and-adams

      Ken

      P.S. How do you embed a link in these comments?

  8. I’d be more curious to see what the publication rate is 2 or 3 years later for people censured by ORI – many (if not most) NIH-funded researchers have several papers ‘in the pipe’ at any one time, and given the usual delays between submission and publication, you’d expect there to be a handful of papers still mid-process at the point of ORI censure. But three years later, any impact of censure on productivity would have had time to take effect and show in the publication rate.

    1. Thank you for this question. We compared the publication output, during the 3-year (figure 2B) and 6-year (figure 2D) intervals before and after an ORI report, of PIs who were found to have committed misconduct. A 69.5% decline in publications was found during the 3-year post-report period, and a 74.6% decline during the 6-year post-report period. Although some have argued that ORI sanctions are too lenient, we found that a misconduct finding often ends a research career. The median annual number of publications per investigator during the 3- and 6-year intervals after a misconduct finding was zero.
