Clarivate to stop counting citations to retracted articles in journals’ impact factors

Clarivate will no longer include citations to and from retracted papers when calculating journal impact factors, the company announced today.

The change comes after years of questions over whether citations to retracted papers should count toward a journal’s impact factor, a controversial yet closely watched metric of how often others cite a journal’s papers. For many institutions, impact factors have become a proxy for the importance of their faculty’s research.

Retractions are relatively rare, representing only 0.04% of papers indexed in Clarivate’s Web of Science, according to the announcement. But the retraction rate has risen recently, to about 0.2%, which, along with a decrease in the time it takes to retract papers, motivated the policy change. Nandita Quaderi, the editor-in-chief of Web of Science, said in the announcement that the policy would “pre-emptively guard against any such time that citations to and from retracted content could contribute to widespread distortions in the [journal impact factor].”

Clarivate publishes impact factors annually in its Journal Citation Reports. The impact factor represents the number of citations in a given year to works published in a journal in the previous two years, divided by the total number of citable items published in those previous two years. 
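For a concrete sense of that arithmetic, here is a minimal sketch in Python; every figure is invented for illustration, not drawn from Clarivate’s data:

```python
# Hypothetical journal: all numbers below are made up for illustration.
citations_in_2024 = 250      # 2024 citations to items the journal published in 2022-23
citable_items_2022_23 = 100  # articles and reviews published in 2022 and 2023

impact_factor_2024 = citations_in_2024 / citable_items_2022_23
print(impact_factor_2024)    # 2.5
```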

Starting with the 2025 Journal Citation Reports, Clarivate will exclude citations to and from retracted articles from the numerator, “ensuring that citations from retracted articles do not contribute to the numerical value of” the impact factor, the announcement stated. Retracted articles will remain in the article count for the denominator, “maintaining transparency and accountability.”
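Read literally, the adjustment changes only the numerator. Here is a minimal sketch of the new counting rule, assuming a simple citation record; the names `Citation` and `adjusted_numerator` are ours for illustration, not Clarivate’s:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    citing_is_retracted: bool  # the article making the citation has been retracted
    cited_is_retracted: bool   # the article being cited has been retracted

def adjusted_numerator(citations: list[Citation]) -> int:
    """Count only citations where neither side is retracted.

    Retracted articles still count toward the denominator (the number
    of citable items), per the announcement.
    """
    return sum(
        1 for c in citations
        if not c.citing_is_retracted and not c.cited_is_retracted
    )
```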

“This decision makes intuitive sense but could incentivize against retraction,” bibliometrics expert Reese Richardson said. By keeping retracted items in the denominator of the equation, “this deepens the impact that any given retraction will have on a journal’s [impact factor],” he told us. He said he also wonders “how many journals will actually see a substantial reduction” in impact factor as a result of the change. 

Quaderi told us Clarivate would stop counting citations once a paper is retracted, but would keep counting those made before the retraction. The company will continue to use the Retraction Watch Database to flag retracted papers in indexed journals, which it has done since 2022.
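That timing rule means retraction status alone isn’t enough to decide whether a citation counts; the citation’s date matters too. A sketch of the cut-off as we understand it (the field and function names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Paper:
    retracted_on: Optional[date] = None  # None if the paper was never retracted

def citation_counts(citing: Paper, cited: Paper, citation_date: date) -> bool:
    """A citation stops counting once either paper involved is retracted;
    citations made before the retraction still count."""
    for paper in (citing, cited):
        if paper.retracted_on is not None and citation_date >= paper.retracted_on:
            return False
    return True

retracted = Paper(retracted_on=date(2024, 6, 1))
print(citation_counts(Paper(), retracted, date(2024, 1, 15)))  # True: cited before retraction
print(citation_counts(Paper(), retracted, date(2024, 9, 1)))   # False: cited after retraction
```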

Clarivate typically releases the annual Journal Citation Reports in late June. The reports use impact factors, along with other information, to assess the overall standing of indexed journals. The company also suppresses impact factors for journals with abnormal citation behavior.

Quaderi told us this change would not affect a researcher’s h-index, another metric that measures citation behavior and productivity. In other words, when Clarivate calculates the h-index, it won’t remove retracted papers, or citations to those papers, from the calculation.
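For reference, the h-index is the largest number h such that a researcher has h papers with at least h citations each. A short Python illustration; the point is simply that, per Quaderi, retracted papers’ citation counts go into the calculation unchanged:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations.

    Per Quaderi, retracted papers and citations to them are NOT removed,
    so every paper's citation count is included as-is.
    """
    h = 0
    for rank, count in enumerate(sorted(citation_counts, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```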

In 2011, Arturo Casadevall and Ferric Fang, who is now a member of our parent nonprofit’s board of directors, used a measure they called the “retraction index” to show that journals with higher impact factors tended to have more retractions, for reasons that remain unclear.



9 thoughts on “Clarivate to stop counting citations to retracted articles in journals’ impact factors”

  1. On the point about incentivisation: Journals indexed in Web of Science Core Collection are subject to periodic re-evaluation and those that no longer meet our 24 quality criteria are delisted. A journal risks being delisted if it does not retract compromised content.

    Our priority is to ensure that we provide the research community with trusted content. For us to consider an exception to our standard de-listing and/or embargo policies, publishers need to provide compelling evidence that all content of concern has been investigated, and appropriate editorial expressions of concern or retractions have been issued. Journals are not penalized for issuing retractions, which we recognize as a normal and necessary part of correcting the scholarly record, but we do need to reflect the current status of that published material, as citations to and from retracted content must be treated with caution.

    1. RE: “A journal risks being delisted if it does not retract compromised content.”

      For context: In 2024, Clarivate removed just 56 journals from the Master Journal List (MJL) for quality-control reasons (“Editorial de-listing”). Over the same period it accepted 591 new titles into the MJL, and the 2024 Journal Citation Reports (JCR) covers 21,848 journals.

      It would appear that de-listing is quite infrequent, and thus the “risk” a journal faces of being delisted is quite small.

    2. > Journals indexed in Web of Science Core Collection are subject to periodic re-evaluation and those that no longer meet our 24 quality criteria are delisted
      Clarivate seems to be the only entity with the power to change things on the publishers’ side, for example by making editors and publishers scramble to save their listing by finally retracting some of the published fraud.
      When I started sleuthing I hoped COPE could do some good. But not really. The people are nice, but otherwise COPE is just powerless flowchart bureaucracy and ethics window dressing for publishers. I see COPE being plainly ignored by big publishers, and COPE just giving up after asking for over two years about the progress of an investigation into, e.g., a plagiarized piece or a paper-mill series.
      Could Clarivate empower COPE by taking a more active role in publishing ethics? For example, knowing what COPE complaints come in, and how or whether journals handle them, would certainly add a number of important quality indicators.
      It may also be good to add another indicator: require publishers to have a functioning complaint-management system that provides receipt confirmations, complaint IDs, and a portal with tracking information. All of that already exists on the submission side of the publishing enterprise, but when one reports fraud there is nothing like it. Receiving no answer at all is common. Or a Springer auto-reply without any ID or follow-up. Or a one-on-one email conversation with an editor at his university email address. This evades any form of quality tracking, and I fear it may be organized this way precisely to achieve that.

  2. I really welcome this change and hope other indexing systems follow suit. I am curious about the decision not to reflect the change in h-indices, though: the h-index is just as much a currency as the impact factor, so I would imagine doing so could act as a deterrent for researchers tempted to engage in misconduct.

    1. This step might actually lead publishers to retract even less, since retracted papers will no longer contribute to the IF. So I fear this might be a double-edged sword.

  3. Just because the overall retraction rate is low doesn’t mean that the retraction rate at a given journal is low. This is a good choice to avoid corruption of the journal ranking statistics.

  4. Great decision. I hope the act of keeping the pre-retraction citations does not result in journals delaying retractions to gain more “valid” citations.

  5. Take this further: dock authors 20 points on their h-index for every retraction they have. Maybe throw in a mulligan for the first one, then 20 points off for each subsequent case.

  6. The variety of reforms that could help is intriguing, but until people and institutions stop using quantitative metrics as a shortcut to measure the more important qualitative contributions of a paper/researcher/journal/institution, none of these discussions will matter very much. Everyone has an incentive to game the metrics.
    I have had some success in getting reluctant researchers/journals/institutions to address problematic images in their papers, partly by (distastefully) naming names on social media and ridiculing them. On those platforms, anyone can stand on a box and yell “The king is a fink!”, which encouragingly has had a real impact on getting some problems fixed.
    It seems to me (an outsider) that the polite, behind-closed-doors conversations have often had less success than my crude online remarks.
