That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed.
Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang examined cases in which the Office of Research Integrity (ORI) had found misconduct in particular papers. In their study, published today in eLife:
We found that papers retracted due to misconduct accounted for approximately $58 million in direct funding by the NIH between 1992 and 2012, less than 1% of the NIH budget over this period. Each of these articles accounted for a mean of $392,582 in direct costs (SD $423,256). Researchers experienced a median 91.8% decrease in publication output and large declines in funding after censure by the ORI.
One percent is not far off the 2% of scientists who have admitted to committing misconduct at least once, although that may be comparing apples to oranges, since how often each offender commits fraud would matter a great deal. Still, Fang tells Retraction Watch the figures "turned out to be a smaller sum than we expected":
As discussed in the paper, there are a number of caveats, and one can certainly come up with reasons that the figure might be either an over- or an underestimation. But when one compares the total with the estimated $31-60 billion out of a total of $206 billion spent on contracts and grants to support U.S. operations in Iraq and Afghanistan that were lost to fraud and waste in FY2011 alone, the funds wasted on fraudulent research seem modest relative to some other government agencies and programs.
That's the kind of comparison we often hear from scientists who want to argue that fraud is very limited, lest members of Congress find excuses to cut the NIH budget. But it's a mistake to minimize the impact of fraud, says Fang:
One could say that any dollar spent on fraudulent research is a dollar too many. And the direct costs are arguably among the least significant costs of research misconduct. In some cases patients have been harmed by flawed or fraudulent research; think of Wakefield, Boldt and Potti. The damage to researchers' careers (who in some cases have done nothing wrong themselves), the institutional costs of investigating suspected fraud, the misdirection of public policy, and the misleading of other scientists who pursue false leads are other important costs. It is hard to place a specific monetary value on these costs, just as it is hard to quantify the damage to public confidence in science that results from research fraud, but these costs are nevertheless important.
For example, one paper found that a single investigation cost an institution more than $500,000 — exceeding the direct costs of the research itself. And another found that ORI cases could cost as much as $2 million each. Fang again:
And what of the research that didn't get done because a fraudster got the funds instead? I think of this in the context of the Dong-Pyou Han case. The millions of dollars that went to his lab might have done a lot more good if they had been spent elsewhere. We have to acknowledge that dishonest research is very costly, and the numbers that Andrew, Grant, Arturo and I calculated in this study are just one small part of those costs.
The researchers also looked at the effects of ORI findings on individual researchers. The findings were mostly, but not completely, what you might expect:
Censure by the ORI usually results in a severe decrease in productivity, in many cases causing a permanent cessation of publication. However, the exceptions are instructive. Of 35 faculty ORI cases analyzed in the Web of Knowledge, five actually published more articles per year after an ORI report than before: Raphael Stricker (22), Gerald Leisman (23), Oscar Rosales (24), Ruth Lupu (25), and Alan Landay (26).
We asked Daniele Fanelli, who has studied misconduct, for his take:
I was maybe a little surprised, but I think the authors of the study were more surprised than I was. They rather openly expected costs to be higher than what they found. The fact that these scientists, showing integrity, reported a somewhat "disappointing" result makes the finding all the more believable.
The question of collateral damage, by which I mean the added costs caused by other research being misled, is controversial. It still has to be conclusively shown, in other words, that much research is actually wasted directly because of fabricated findings. Waste is everywhere in science, but the role played by fraud in generating it is far from established and is likely to be minor.
These results send an important message: we must be careful not to over-dramatise the issue of misconduct. It remains an important problem to study, monitor, popularize and, certainly, to try to prevent. But we should not get too carried away by the "hotness" of this topic, and let it absorb too much of the attention of policymakers or educators.