The retraction process needs work. Is there a better way?

Daniele Fanelli

Retractions take too long, carry too much of a stigma, and often provide too little information about what went wrong. Many people agree there’s a problem, but often can’t concur on how to address it. In one attempt, a group of experts — including our co-founder Ivan Oransky — convened at Stanford University in December 2016 to discuss better ways to address problems in the scientific record. Specifically, they explored which formats journals should adopt when publishing article amendments — such as corrections or retractions. Although the group didn’t come to a unanimous consensus (what group does?), workshop leader Daniele Fanelli (now at the London School of Economics) and two co-authors (John Ioannidis and Steven Goodman at Stanford) published a new proposal for how to classify different types of retractions. We spoke to Fanelli about the new “taxonomy,” and why not everyone is on board.

Retraction Watch: What do you think are the biggest issues in how the publishing industry deals with article amendments?

Daniele Fanelli: The issues are fundamentally three, and they are closely interconnected. First, most journals issue too few formats of amendment, often only two: “corrections” and “retractions.” Second, the information conveyed by these amendments is very limited. Not only are editors often reluctant, as RW has highlighted many times in the past, to accurately portray the causes underlying an amendment, but more generally I think that the format of a short notice of correction or retraction often impedes effective communication of the nature of errors that can have important repercussions for a broader literature. Third, scientists have little incentive to “do the right thing” and promptly amend any scientific or ethical flaws in their work. Without the active participation of authors, amendments are rarer and harder to produce than we would like. To be fair, much progress has been made on all these fronts, but more, and more concerted, innovation is needed.

The three issues are connected to each other by the common thread of information. Scientists would be encouraged to step up and do the right thing if they could count on their actions being recognized and praised accordingly. This, we argue, would occur if more information concerning an amendment were conveyed, upfront, by the type and the format of the amendment itself.

RW: You present a new “taxonomy” of amendments, including five new proposals. Can you briefly describe them?

DF: Perhaps it would be more accurate to say that our taxonomy re-arranges what are mostly already existing categories, lumping some together, and making others stand out more. By and large, we tried to take stock of ideas that have been tried out or at least proposed. The latter include:

Withdrawal: This is a long retraction notice, in which the authors of a paper previously published in that journal explain in depth why that paper, and maybe others, can no longer be trusted. It is peer-reviewed, and it is the “self-correction” equivalent of what some journals call “matters arising”. The latter usually convey debates or criticisms of other people’s work, whereas in a withdrawal it is the authors themselves who inform the community.

Retired: An article is retired when it offered guidelines or made practical recommendations that are now deemed outdated and cannot be updated.

Cancelled: This is a full retraction of a paper caused by an editorial or production mistake, for which the authors bear no responsibility. This category stands to ordinary retractions the way “errata” stand to “corrections”, at least in some journals.

Self-retraction: This is a full retraction of a paper, presented as a short notice, distinguished by the fact that it was first solicited jointly by the authors themselves, and is therefore signed by them.

Removal: This is an “emergency”, and possibly temporary, withdrawal of an article, due to exceptional circumstances that connect its contents to a significant public risk.

Notice that the common theme of all the categories above is that the authors are either not responsible for the problem or are actively owning it. By distinguishing these cases upfront, we hope to make the system fairer, more fluid, and more supportive of amendments. Authors either pay no reputational cost or, in some cases, gain in reputation and gain a new publication in exchange for retracting one.

RW: One of the categories you propose is a “self-retraction,” designated for when researchers ask to retract their papers because of an honest error (a separate designation you’ve suggested before). How would something like this be adjudicated, so readers could be sure the problems were due to honest mistakes?

DF: Well, the idea is that the adjudication is built into the process itself. As mentioned above, authors can only publish a self-retraction if, before anyone else notices any problem, they jointly agree that the paper is flawed and contact the journal to communicate it. The most common criticism of this idea is that the system might allow some authors to remove fabricated data from the literature by pretending it was an error. I don’t believe this is a significant problem, for various reasons. First, misconduct is typically perpetrated by single individuals, unbeknownst to other team members. To get an undeserved self-retraction, therefore, all team members would have to collude, or the fabricator would have to lie to them. One might perhaps get away with removing a single paper in which data was fabricated (in which case we all benefit anyway), but authors who generated a stream of self-retractions would arouse obvious suspicion and in any case earn a reputation for unreliability. Finally, note that self-retracted papers, just like ordinary papers, remain available for inspection. Therefore, misconduct could still be proven after the amendment, and the self-retraction could be turned into an ordinary retraction.

RW: You note that the new taxonomy may seem complicated, but argue it actually simplifies the current system. Can you say why?

DF: At the moment there is no standard, which creates much confusion. One of the most widely used taxonomies at the moment, developed by CrossMark, includes 12 categories (i.e. addendum, clarification, correction, corrigendum, erratum, expression of concern, new edition, new version, partial retraction, removal, retraction, withdrawal). This taxonomy is only marginally smaller than the one we propose, and makes distinctions that may be unnecessary – for example, between correction and corrigendum, or between addendum and clarification.

In our case, each condition requiring an amendment is associated with a single type of amendment, which can be determined based on factual information, as we show in a tree diagram.

RW: How have the publishing community and authors reacted to your proposal so far? Which aspects of your proposal do you expect will be most debated/controversial?

DF: The very idea of differentiating editorial categories, and some of the categories themselves, might not convince everyone, for some of the reasons I have mentioned above. Many of the criticisms we are likely to receive were discussed and examined in a workshop that we organized with the METRICS team at Stanford, about a year ago. I take this opportunity to express our gratitude to all of the participants, whose names I would like to acknowledge: Patricia Baskin (Council of Science Editors), Philip Campbell (Nature), Catriona Fennell (Elsevier), Jennifer Lin (Crossref), Emilie Marcus (Cell), Ana Marusic (European Association of Science Editors), Ivan Oransky (Retraction Watch), Kathy Partin (US Office of Research Integrity), Iratxe Puebla (PLoS ONE), Bernd Pulverer (EMBO), Jason Rollins (Thomson Reuters), Elizabeth Moylan (BioMed Central), Hilda Bastian (National Library of Medicine), Ijsbrand Jan Aalbersberg (Elsevier), Annette Flanagin (JAMA), Virginia Barbour (COPE).

The original objective of the workshop was to produce a consensus document. That project didn’t quite work out, but the input of so many outstanding experts has helped forge a core set of ideas which we think are simple and general enough to be considered and experimented with.

RW: You note that your proposal differs from another recently suggested by experts, which proposed adding material to an article and marking the issues as either “insubstantial,” “substantial,” or “complete.” Why do you think your system is a better way?

DF: Because it meets the same objective, and a few more. Simplifying and streamlining amendment categories would unquestionably make the process of amendment more fluid, which is one of the issues that we agree needs addressing. However, such simplification would do little to solve the other issues I listed in my first answer, because it removes a lot of relevant information that amendments could convey. If instead of lumping and simplifying amendment categories we made a few more distinctions among them, we would still make most amendments easier and faster, because authors would be willing to cooperate.

RW: It’s one thing to propose a new way of doing things, and it’s another to get people to adopt it on a widespread basis. What are your plans in that area?

DF: Primarily, we hope to encourage more innovation and experimentation in this area. The specific taxonomy we propose could have various uses, including the online tagging of amendments, the retrospective classification of cases, and as a blueprint for journal amendment policies. However, policies should ideally be informed by evidence. Therefore, we hope that a few journal editors will adopt one or more of the categories proposed on an experimental basis and share their experiences.


6 thoughts on “The retraction process needs work. Is there a better way?”

  1. If the author of an article finds plagiarism in his article before publication and contacts the journal, asking in one email to refrain from publishing and in another to make changes to the article, but the journal does not agree to these requests, what is the responsibility of the authors? Will the authors be punished if the problem is discovered?
    Please guide.
    Thanks

    1. Dear Al-saied, my best advice would be to contact COPE to discuss such a case, although it is not very clear to me what the journal’s position is.
      If the journal refused to issue any correction/retraction, then its behaviour ought to be criticized and reported. But if the journal simply refused to alter an article that was already page-proofed, because at that stage it requires the author to publish a separate correction or retraction instead, then that might reflect a legitimate policy that many journals have.
      Under our taxonomy, that would clearly be a mistake of the author, communicated by the author, and therefore would qualify as either a correction or a self-retraction.

  2. Good idea, but it sounds slightly complicated: five categories are harder to remember than two. At present there is some attempt to distinguish between retractions due to error and those due to misconduct, and perhaps more attention could be paid to this. When retractions occur because of misconduct, considerable effort should be made by investigators to determine who committed the misconduct, to avoid punishing the innocent. We should not forget that the driving force in correcting the literature is public criticism, and that requires work. There is no substitute for work. https://peerj.com/articles/313/
