Last year, an audit by the U.S. Government Accountability Office found “a potential for unnecessary duplication” among the billions of dollars in research grants funded by national agencies. Some researchers, it seemed, could be winning more than one grant to do the same research.
Prompted by that report, Virginia Tech’s Skip Garner and his colleagues used eTBLAST, which Garner invented, to review more than 630,000 grant applications submitted to the NIH, NSF, Department of Defense, Department of Energy, and Susan G. Komen for the Cure, “the largest charitable funder of breast cancer research in the United States.” The approach was not unlike those used by publishers to identify potential article duplications.
In a Comment published today in Nature, they report that they found 1,300 pairs of applications above a “similarity score” cutoff of 0.8 for federal agencies, and 0.65 for Komen documents — “with 1 indicating identical text in two same-length documents, and more than 1 representing identical text in one piece that is longer than the other.”
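eTBLAST’s actual scoring algorithm isn’t described in the Comment, but a toy ratio with the properties quoted above — 1.0 for identical same-length documents, above 1.0 when the matching text in the longer document exceeds the length of the shorter one — could be sketched like this (a hypothetical illustration, not eTBLAST’s method):

```python
def toy_similarity(doc_a: str, doc_b: str) -> float:
    """Toy similarity score: counts every word occurrence in the longer
    document that also appears in the shorter one, normalized by the
    shorter document's length. Identical same-length documents score 1.0;
    a longer document that repeats the shorter one's text scores above 1.0.
    NOT eTBLAST's actual algorithm -- an illustrative sketch only."""
    words_a, words_b = doc_a.lower().split(), doc_b.lower().split()
    shorter, longer = sorted([words_a, words_b], key=len)
    vocab = set(shorter)
    matched = sum(1 for word in longer if word in vocab)
    return matched / len(shorter)
```

For example, a document that simply repeats another document’s text twice would score 2.0 under this toy metric, which is the kind of behavior the “more than 1” description suggests.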
When they manually reviewed those 1,300:
We found that 11% of the pairs with a similarity of more than 0.8 (or 0.65) had overlapping aims, hypotheses or goals. For these 167 pairs the total money involved was around $200 million (including both grants of the pair) over the entire time records are available. The average size of the first award was 1.9 times that of the potentially overlapping one, so an estimated $69 million of possible overlap funds were found.
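The $69 million figure follows directly from the quoted numbers; a quick back-of-the-envelope check (assuming the 1.9:1 average ratio applies uniformly across the pairs):

```python
# Back-of-the-envelope check of the Comment's ~$69 million estimate.
# Assumption: the 1.9:1 average award ratio applies uniformly to the
# $200 million combined total across the 167 pairs.
total_both_grants = 200e6   # combined value of both grants in each pair
ratio = 1.9                 # first award averages 1.9x the overlapping one
overlap_funds = total_both_grants / (1 + ratio)
print(round(overlap_funds / 1e6), "million dollars")  # roughly 69
```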
The authors note a number of limitations of their analysis, notably that they did not have access to entire grants, which would have required Freedom of Information Act (FOIA) requests — and that data from some agencies was incomplete. (A Nature news story based on FOIA requests for about 22 of the potential duplications Garner’s team uncovered describes several cases in detail.) Still, the authors write:
Even if $200 million in duplicated grants represents the full extent of the problem, then some may argue that less than 0.1% of funding since 1985 is too small an amount to warrant concern. But that is research money that cannot be used to fund the next scientific breakthrough.
If not the next breakthrough — given how rarely those actually occur — then the money could at least fund projects that fell below current funding lines. The authors call for a “central database of grant information from all agencies” that could be searched for potential duplicates.
It is not clear from this story, or the Nature story, whether the researchers looked at all applications or only funded applications, which would make a big difference.
Early in the Nature story (not in the detailed descriptions) the authors state “… we systematically compared more than 850,000 funded grant and contract summaries submitted to five of the largest US funders of biomedical research.” So it seems they were looking at funded proposals only.
As they mention later on, their analysis wouldn’t have picked up cases where the PI had informed both funders of the overlap and the agencies had made appropriate adjustments; those would be false positives. There is also, of course, plenty of room for false negatives.
Also, people talk a lot about the importance of replicating results in science, yet there is no funding for replication work. Perhaps some of these duplicate applications were actually attempts to replicate results by running very similar studies more than once.
They need to be more sophisticated with their definitions. What are the sizes of the Komen grants? Are they supplemental, or can they actually achieve funding of the stated aims? Many smaller foundation grants can be for similar big-picture objectives that are part of larger government project grants, and some grants are seeing significant loss of purchasing power or cut budgets due to funding issues. It is not clear they are performing the analysis in a meaningful way.
I was wondering the same. Here in Denmark we have several private foundations that quite often only give partial funding. That means you have to apply with the same project at different foundations in order to get all the money you need.
What do you mean by partial funding? That only small projects (with a maximum budget) would be funded, thus allowing only sub-goals to be achieved, or that only one aspect (such as human or material resources, or travel) is funded?
Maybe what I mean is best shown using examples. Example 1: One of my colleagues just got about half the money she needs to buy an instrument, which of course is useless unless we find the other half elsewhere. It’s not because they have an upper limit; it’s just their habit to cut the requested funding and tell you to find the rest somewhere else.
Example 2: I have a PhD student who will for certain use more money on various resources than the fixed amount of expenses that is part of her funding. We’ll have to apply for money for those resources, but the project will be exactly the same as the application for her PhD project.
Isn’t this a total non-issue?
Obviously 0.1% of total research funds isn’t significant. Not worth any time to correct, even if it were a problem.
If you get funding from multiple sources you merely hire more people, buy more stuff, and do your work that much faster. As far as I know, it is not possible, anywhere in the R&D community, to get two identical grants, do the work and hire the people for only one, and pocket the difference. It would be malfeasance if you could pocket the money, but you can’t.
Also, I think it is near universal for grant applications to require a statement of other funding you receive or have applied for. So to the degree that a solution is needed, it is already in place.
The purists don’t get that science, like the arts, needs a certain amount of dirt, chaos and randomness to thrive. It’s like simulated annealing in learning algorithms: you need noise to get the system out of local minima.
This reminds me: has anyone ever run across plagiarism (of others’ text, not duplication of one’s own) in grant proposals?
This one was even worse…
“Leo A. Paquette, Ph.D., Ohio State University. An investigation conducted by the University found that Dr. Paquette had submitted a grant application to the National Institutes of Health in which sections of the research design were plagiarized from an unfunded grant application written by another scientist. Dr. Paquette had received the other scientist’s application in confidence as a peer reviewer for the NIH.”
To expand on chirality’s example this linked ORI newsletter from 1993 gives some further examples including the Paquette one (scroll down to “Case Summaries” on page 7).
http://ori.dhhs.gov/documents/newsletters/vol1_no3.pdf
A couple of things stand out:
(1) this stuff has been going on for quite a while
(2) some of the excuses are amusing/pathetic (e.g. see the “explanations” of Freisheim and Paquette for their grant application plagiarism). It doesn’t give much confidence for useful self-explanations of scientific fraud of the sort that Dr. Grant Steen is soliciting on the other recent thread.
This was a very strange case at the time because Leo Paquette was one of the best synthetic organic chemists on the planet. He was under no external pressure to get yet another project funded as his position at OSU was rock solid.
Yes, worse, thanks.
I wonder if duplication of grants alone is sufficient to make one feel bad. When I see duplication of data, with only incremental increases in information, being published all around, I shudder to think of the intellectual man-hours lost!!! Can we even quantify this?
I simply don’t believe those stories about double dipping. There may be one or two anecdotes (like the one highlighted above of stealing somebody’s proposal), but I simply cannot believe the laborious process of judging grants allows for this. At most I can see the same research being co-funded by different entities, which their methodology would certainly flag as double dipping.
This double dipping finding almost always fizzles after added scrutiny. Yet it will make headlines long enough to be a good talking point when Congress looks to cut this or that funding agency. It’s a shame Retraction Watch gets taken in by that circus.
One could probably make the case that the methodology is not capable of supporting a statement about double dipping. In other words, it shouldn’t pass peer review, and if it did, there is potential for a corrigendum or a retraction in the end.
I think many of the comments miss the point. PIs are desperate to obtain more funding. Therefore, it is very common to write multiple grants (and have them all get funded if you are lucky) along the lines of:
The role of molecule X and receptor Y in a cell-line model of disease A
The role of molecule X and receptor Y in a cell-line model of disease B
Most of the text, experiments, etc. are the same in the two grants, but they will be reviewed by different agencies, and the reviewers do not know that there is another near-identical grant out there somewhere (since there is no database of under-review grants or papers). For NIH, a PI’s current funding is only required on a just-in-time basis AFTER the reviewers have reviewed the grant and given it a good score. Even then, it is a simple matter to write a well-crafted e-mail to the SRO explaining how there is truly no overlap between the two proposals. This is not really misleading or scientific misconduct, since the experiments are not 100% identical. The real question is: would the money be better spent by giving it to another investigator who is looking at something different (e.g. the role of molecule P and receptor Q) and does not have any grants? This is related to the controversy about limiting the number of R01s that any given PI could hold, since many of these tend to be variations on a theme.
Of 631,337 grants screened, they flagged 167 pairs of “suspected duplicate grant applications,” with the caveat that they don’t actually have access to the proposals to check whether there is actual duplication.
The sky is falling. Repeat, the sky is falling!
Ivan, where you say the approach wasn’t unlike those used by publishers to identify potential article duplications, it could be noted that Skip Garner’s group originated the dejavu database at http://dejavu.vbi.vt.edu/dejavu/, which has been used to find duplications and has led to retractions.
Those saying that this doesn’t matter because it is only a small proportion of total research funding are missing, I think, the sheer difficulty of first flagging any kind of oversight gap or integrity problem in science. The denominator of 850,000 grant applications isn’t telling you the true incidence of the problem because there are false negatives. (There are false positives too, but of the 22 examples we followed up independently of Garner (I’m a correspondent for Nature and co-authored the news story linked above), we were troubled by around half, which seems quite a low rate of false positives.) The bigger picture is that this group is able, within a short timeframe, to run an automated search over a very large number of applications and point to a hundred-odd examples that are particularly troubling to them. Too often, oversight or integrity gaps only come to light one at a time, through extensive work by individual whistleblowers.
Sorry, I disagree. It is basically a first pass, with little effort to get a sense of how accurately it describes the phenomenon. If a student presented me with these data, I would send them back to analyze it properly before I let them present it internally, much less publish it.
Eugenie,
Unless I am mistaken, the study doesn’t address the issue of co-funding, or does it?
“…a hundred odd examples that are particularly troubling to them.”
But what, specifically, is troubling about these presumed duplicate proposals?
There is no ethical prohibition against getting more than one source of funding for a project. As far as I know, they have found no evidence of violation of any rules relating to disclosure of other funding. No one has been accused of misappropriating grant money.
You’ll need to come up with some examples of actual wrongdoing. Are there any examples where duplicate grant money was successfully stolen by the PI? Has any organization double-billed for the same purchases or labor? Does any grant-giving organization actually feel they have been defrauded?
All you have is the assumption that there might have been malfeasance by someone somewhere.
How much additional money should be directed away from research in order to track down this tiny fraction of the 0.1% total?