Retraction Watch

Tracking retractions as a window into the scientific process

Authors who retract for honest error say they aren’t penalized as a result



Are there two types of retractions? One that results from a form of misconduct, such as plagiarism or manipulating figures, and another that results from “honest errors,” or genuine mistakes the authors have owned up to? More and more research is suggesting that the community views each type very differently, and doesn’t shun researchers who make mistakes and try to correct the record. In yet another piece of evidence, Daniele Fanelli and his colleagues recently published the results of their interviews with 14 scientists who retracted papers for honest errors between 2010 and 2015. Although much of what the scientists said affirmed what Fanelli – based at METRICS (the Meta-Research Innovation Center) at Stanford University – has long argued about retractions due to honest error, some of their answers surprised him.

Retraction Watch: We’ve seen the community reward scientists who retract papers for honest error, including a 2013 paper that showed no citation penalty for researchers who self-retract. Yet the interviewees said they were surprised to realize there weren’t any negative consequences to their self-retractions (some even got kudos for doing it). Why do you think people don’t realize how the community will view honest error?

Daniele Fanelli: Well, part of the reason is probably that, as scientists, we rarely hear stories about retractions in positive terms. We don’t hear them much from the media, for a start. Retraction Watch is a notable and laudable exception, with its “doing the right thing” thread. But I suspect that, even on your website, the stories that are most widely read and reported by the mass media will be the ones about massive frauds or perhaps particularly ridiculous errors. This is partially inevitable, since such cases make damn good stories.

However, we also don’t hear them much in training programs for researchers. Although I have not investigated the matter closely, I believe that most instructional modules on research integrity might be biased towards illustrating all the rules and how things may go wrong, and towards giving examples of how badly scientists might behave. This might be part of the reason why these courses appear to do little to change trainees’ attitudes and, at least according to some studies, make them more — not less — inclined to engage in questionable research practices. By giving negative examples, we might be unwittingly showing young researchers how much they can get away with.

Therefore, we would perhaps do a better job at promoting research integrity if we shared more examples of scientists who benefited professionally by doing the right thing. We grow up with an ideal about how science should operate. Evidence that it actually does work that way should be popularized as much as possible.

RW: Despite the fact that all of the papers were considered to contain “honest errors,” thus representing an example of doing the right thing, most of the authors you interviewed said they wanted to correct the paper, and the journal decided on the retraction. Did that surprise you?

DF: Very much. So much that, as we say even in the paper, this finding effectively undermined the main premise of the study. Of course, the fact that it was so unexpected and surprising made this finding the most interesting and informative, at least to me. It made me realize how much of an aversion scientists might have to the idea of retraction, and how badly reform in this area is needed. This finding was one of the inspirations that led me to suggest marking out nominally and bibliometrically what we (should) call “self-retraction.”

RW: Your paper does seem to support your previous idea that there should be a separate system for self-retracting papers affected by honest error, not misconduct, to avoid the stigma. What are some other ways to de-stigmatize retractions?

DF: That, of course, is not a coincidence. It took quite a while to publish this study, but we knew about these findings over a year ago. Interestingly, in more or less the same period, other academics and organizations had been making proposals to innovate retractions, which suggests that the time is really ripe for change in this area. So, here at METRICS we convened a workshop last December on this topic, the results of which will hopefully be published soon. But other proposals are on the table and important changes are actually occurring as we speak. For example, the International Committee of Medical Journal Editors issued, in January, an update of their guidelines which now includes the possibility of partial retraction, a new concept with which a few journals like JAMA and The Lancet have been experimenting recently.

I tend to be skeptical about one-size-fits-all solutions, and I think that journal editors should just experiment with new formats of amendment and see what works best in their field.

RW: You note that authors experienced roadblocks in retracting their papers. What were those, and how do you think journals and researchers could address them to make the process go more smoothly?

DF: Well, we must emphasize that our sample was small and heterogeneous, so we need to be careful not to over-generalize. But there appeared to be commonalities in the experiences reported. The most important one is perhaps that scientists as well as journal editors have a poor understanding of these matters. So, in addition to de-stigmatizing self-retractions, there is more good and important work to be done to ensure that policies and guidelines are known and understood. Many of the other problems reported are likely a consequence of that lack of knowledge and experience, as well as of current limitations in journal policies, which are not equipped to handle cases of self-retraction efficiently.

RW: You note that the paper represents a certain swath of scientists — all had alerted the journals independently of the problems with their papers, and agreed to talk with you about the experience. Given that other scientists who were less forthcoming about the problems in their research might have different responses to some of your questions, how representative do you believe your findings are?

DF: It is really hard to tell. At the same time, not all knowledge needs to be generalized to be useful. Indeed, not all useful knowledge is generalizable to begin with. Most of my work has been quantitative, which in essence means drawing an average over a population. This is the kind of research that excites me the most, but even I have to admit that there are ample pockets of precious information that quantitative studies cannot reach. Sometimes, the best way to learn something new is just to sit down and talk to people. We did that, and got some real surprises.


Written by Alison McCook

March 27th, 2017 at 9:30 am

Comments
  • TL March 27, 2017 at 9:48 am

    There should be a caveat that the reasons for the retraction must be laid out for public scrutiny. Authors caught in plagiarism or data manipulation should not be allowed to simply yank their dishonest work from the literature with the journal providing “article withdrawn by the authors” as the only explanation.

  • Kevin Lehmann March 27, 2017 at 10:56 am

    I am a bit confused about the difference between an Erratum (which is fairly common and long standing practice) and a retraction. Usually, an Erratum corrects an error or errors that do not totally invalidate the original paper, but I do not believe that is essential.
