A new study from a group of Boston-area economists sheds some light on whether retractions have downstream effects on related fields, particularly when it comes to funding. From the abstract of the working paper, called simply “Retractions,” by Pierre Azoulay, Jeffrey L. Furman, Joshua L. Krieger, and Fiona E. Murray:
We find that scientific misconduct stifles scientists’ pursuit of specific research lines, as we would anticipate if retraction events provide new signals of the fidelity of scientific knowledge. More centrally, our findings show that scientific misconduct and mistakes, as signaled to the scientific community through retractions, cause a relative decline in the vitality of neighboring intellectual fields. These spillovers in intellectual space are significant in magnitude and persistent over time. In other words, there is clear evidence of negative spillovers from instances of “false science” to broader swaths of the intellectual field in which they take place.
To do their analysis, the authors took a cue from Isaac Newton’s “standing on the shoulders of giants,” classifying more than 1,100 retractions as “Strong Shoulders,” “Shaky Shoulders,” and “Absent Shoulders”:
Strong Shoulders means that the retraction does not cast doubt on the validity of the paper’s underlying claims. A publisher mistakenly printing an article twice, an author plagiarizing someone else’s description of a phenomenon, or an institutional dispute about the ownership of samples are all examples where the content of the retracted paper is not in question. Shaky Shoulders means that the validity of claims is uncertain or that only a portion of the results are invalidated by the retraction. Absent Shoulders is the appropriate code in fraud cases, as well as in instances where the main conclusions of the paper are compromised by an error.
The authors also code the retractions by intent to deceive, using Retraction Watch posts as one of their sources, as Fang et al. did earlier this month. All of their data are available online. Once they run the analysis, they conclude:
One view holds that adjacent fields atrophy post-retraction because the shoulders they offer to follow-on researchers have been proven to be shaky or absent. An alternative view holds that scientists avoid the “infected” fields lest their own status suffers through mere association. Two pieces of evidence are consistent with the latter view. First, for-profit citers are much less responsive to the retraction event than are academic citers. Second, the penalty suffered by related articles is much more severe when the associated retracted article includes fraud or misconduct, relative to cases where the retraction occurred because of honest mistakes.
Those findings have important implications, one of which, it seems to us, is that it’s even more important for journals to detail why papers were retracted, given that many readers assume fraud when they read an opaque notice. If notices make it clear there was no misconduct involved, the field may not take as big a hit. This is the sort of nuance that is often lost in the discussion of whether highlighting misconduct promotes mistrust in science, a reaction we suggest amounts to shooting the messenger.
The results of the new paper may not be surprising, but confirming them still might feel a bit chilling to those working in fields that have seen a lot of retractions:
Our results indicate that following retraction and relative to carefully selected controls, related articles experience a lasting five to ten percent decline in the rate at which they are cited.
That decline is smaller than the average decrease in citations to a retracted paper itself, which the authors concluded was about 65% in an earlier study.
Perhaps more concerning for scientists working in related fields, funding takes a hit.
…these results help explain why we observe downward movement in the citations received by related articles highlighted earlier: there are fewer papers being published in these fields and also less funding available to write such papers.
(We asked Furman for some details on just how much, but Hurricane Sandy is creating other priorities for people in the Northeast, so we’ll update when we hear back.)
Put another way: Even if it’s impossible to prove cause-effect, it’s pretty clear that retractions — or at least the circumstances that lead to them — matter.
In Brazil there are practically no effects. For instance, after a long series of retractions and allegations of fraud in Forensic Entomology, everyone keeps on pretending that nothing happened and that the claims were political exaggerations, and all involved keep getting as much funding as before. No penalties, no credibility losses, nothing.
Data?
I used the Forensic Entomology example because it produced several posts on this blog. The 11-paper case in Chemistry is also illustrative. I think the comments and links given in those posts pretty much show the panorama. This is the way it is with all retractions in my country. The exposed authors are pretty well-off and were never officially punished. One of them, Leonardo Gomes, was even elected principal of his campus at the tender age of 33, just after the retraction scandal.
Reblogged this on Åse Fixes Science and commented:
From Retraction Watch. They discuss a new working paper that has looked at the impact of retraction on nearby papers. I have not read the paper yet, but the abstract and RW summary suggest that retractions, especially those due to fraud (weak shoulders), have an effect on adjacent papers. Kind of a “behavioral immune response” reaction. Fewer citations and less work in the field, because one does not want to be associated with the bad retracted work.
So, really, regardless of all those incentives, focus on doing solid work, not fast and flashy work.
The sad thing is that it’s the innocent researchers who maintain their integrity despite being close to misconduct who will suffer from declining funding.
Absolutely right, Neuroskeptic. I haven’t read the above paper, but it is mentioned that retractions will have an influence on funding. I have actually seen that even after retractions, some are successful in getting funding. Look at the Steven Leadon case: http://ori.hhs.gov/content/case-summary-leadon-steven . I think the other authors on those papers are still doing well, especially Prescilla Cooper and Frank Rauscher…
Does not hold true for established “big shots”.
I realize that the present publication is somewhat new, but I was wondering if anyone has seen data, or otherwise has a sense of, how other types of authorial controversy might affect a paper, lab, or field. Is there a significant “guilt by association” effect for other types of bad publicity? For example, might high-profile criminal or civil cases affect the citation rate for a paper, or otherwise negatively affect publication/funding for co-authors and trainees?
I’m asking for reasons I can’t explain in detail, except to say that I’m considering retracting a very recently accepted (but unpublished) manuscript due to a potentially ugly civil case that is pending between two of the authors (one is the first author). I feel awful, since I want this data to see the light of day, but I figure that I only have one reputation…
My intuition is that as long as there are more than three authors and you’re neither the first nor the last, you’re probably in the clear, because very few people remember or care about the ‘middle authors’…
Unfortunately, I’m the corresponding author. I can’t believe I just typed those words.
Bail out. Admittedly, this reaction is based on a suspicion about which work you’re referring to. Assuming, hypothetically, that (i) the litigation deals directly with the validity of part of the study and (ii) it’s likely to reach trial or settlement in the next year, you might want to put the brakes on. State clearly that the pending litigation is the reason for the hold-up. Two things will happen if you don’t. First, all authors will face the manuscript in cross-examination, forcing each one to endorse every word of it or look like a liar or partisan. Second, during settlement negotiations, it is likely that the parties can agree to wording which will spare your paper from post-hoc attack and sour grapes. There are a bunch of nuances that I’m not getting into, but this is as far as I can go without two paragraphs of mandatory disclaimers, citation to State Bar rules, etc.
Thanks Toby. Though the case details aren’t exactly along those lines, I think your intuition is correct. After 9 months of reviews and revisions, the editors and publisher are getting pretty annoyed at the delay, and my attorney charges $550/hr.
Getting back on topic: as the private funding source, I can say with certainty that I will be less likely to collaborate with academia in the future. CROs are less of a hassle.
Perhaps there is another issue with respect to “post retraction” funding. Innovation is an important criterion for funding to be awarded. Prior to retraction, a substantial amount of the work already on the library shelf is deemed “solid” and so not worthy of further funding. After retraction, I suspect that this view probably doesn’t change easily at panel level, despite the evidence. This may reflect cultural “conservatism” and a feeling that there is no need to “correct detail”. So retractions (due to fraud and/or sloppiness) may actually be more damaging, since they may actively inhibit funding in areas where there is indeed really interesting work to be done.
So bad science is, in my view, very damaging. It consumes valuable resources and leads to a mindset that certain areas do not require funding, when they may in fact be the real bottleneck.