Why don’t journalists circle back to cover retractions? A conversation with Malgorzata Iwaniec-Thompson


In a paper published last month in the Journal of Documentation, a team of researchers in journalism, social science and data science explores how and why journalists report – or don’t report – on scientific retractions. 

The investigators analyzed news coverage of “high-attention retracted articles” identified from the Retraction Watch database and other sources, and interviewed journalists in the U.K. and Finland to gain a cross-cultural perspective. 

The lead author of the paper, Malgorzata Iwaniec-Thompson, of the University of Sheffield’s School of Information, Journalism and Communication, took questions from us about the work.

Retraction Watch: In your report, you write, “Journalists frequently cover initial research findings but rarely follow up when these claims are later disconfirmed.” Why do you think that is the case?

Malgorzata Iwaniec-Thompson: Journalists rarely follow up on scientific retractions due to a combination of structural, economic, and professional barriers. First, there is no systematic process for tracking retractions; journalists often rely on luck, social media, or informal networks to discover them. 


The second issue is that reporters operate under intense time pressures and resource constraints, often juggling multiple stories with as little as 15 minutes for verification. Sadly, checking for retractions is an uncompensated task: freelance writers in particular are not paid to perform retrospective checks on old stories. The nature of journalism demands constantly moving to the next hot topic, which discourages looking back at “old news.” 

Lastly, journalists often view their archived work as “best efforts at the time” rather than part of a permanent academic library that requires continuous curation. Not all see a need to publish updates when a study they once covered has been retracted.

RW: A key finding of your study is “that retraction coverage often functions less as corrective science communication and more as a space where credibility, authority and the legitimacy of expertise are actively contested.” You note, “news stories can turn examples of successful detection and correction into a suggestion of widespread corruption and conspiracies.” Why is this trend occurring? How might coverage of retractions increase rather than decrease public trust in science and in journalism?

Iwaniec-Thompson: The trend of framing retractions as evidence of systemic corruption or conspiracies occurs because news stories – and also blogs – often focus on individual fraud and data integrity to create human interest narratives that attract more engagement. This results in retractions being reframed as a space where credibility, authority and the legitimacy of expertise are actively contested rather than as a normal part of scientific self-correction. 

It’s hard to increase trust in today’s society; however, highlighting retractions as evidence of effective self-correction in science can actually boost institutional credibility. In our interviews, journalists talked about signalling uncertainty with phrases like “early results suggest” that can maintain reader trust if findings are later disconfirmed. 

Above all, treating a retraction as a “continuation of the news story” – providing plain-language updates that explain how the original claims have changed – can bridge the gap between scientific correction and public understanding, but that would require journalists to write new stories when a retraction occurs.

RW: Science journalism is not the same in every culture, and your paper confirms that. What were some of the key cultural differences you discovered when you compared coverage of retractions in the U.K. and Finland? 

Iwaniec-Thompson: In general, most of the findings aligned, pointing, for example, to a universal absence of a systematic monitoring process for scientific retractions. Unlike their counterparts in Finland, U.K.-based journalists spoke about social media negatively affecting trust in the media, which has implications for their science reporting. Finnish journalists, by comparison, enjoy high levels of public trust in the news media and in science. Most of the Finnish journalists we interviewed thought that the general cultural atmosphere and social media have not significantly influenced their reporting about science. Many stated that it is their responsibility not to let it affect their reporting. 

Some of the journalists in the U.K. told us about their reliance on the Science Media Centre (SMC) for real-time peer review to assess research quality before publishing reports on the science. The primary structural pressure in the U.K. is economic, driven by financial constraints and payment-per-story models. Notably, Finnish science journalists reported having more time and better academic networks to verify research compared to general-interest journalists in Finland.

RW: How can journalists and news outlets ensure they know when scientific work they’ve covered has been retracted? What can scientific publishers do in this regard?

Iwaniec-Thompson: There is a strong desire among journalists for automated tools that can monitor the retraction status of papers previously cited in news articles and alert reporters of any “expressions of concern” or removals. Publishers should adopt systems to automatically and proactively notify journalists, registered users, and researchers who previously accessed or downloaded a specific article when it is retracted.

Although this was not mentioned by the journalists we interviewed, and to some extent it is already happening on the edges, the wider literature suggests the need for a robust DOI resolution system that links the original paper, the retraction notice, and any republication into a single, inseparable set for the user. A major problem is that retractions are often marked on publisher platforms but not on content aggregators like PubMed or in personalized libraries like Mendeley. Publishers should not make the full text of retracted articles freely available without a clear “RETRACTED” watermark on every page, to prevent misunderstanding about a paper’s status. Again, some of these practices are already visible to an extent. 

RW: Your study shows reporters and news outlets prefer to tell stories of bad behaviors by individuals rather than stories of systemic failures of, say, peer review or academic reward systems. What impact does that have? How might it change?

Iwaniec-Thompson: Our study found that the media prefers stories of individual misconduct (fraud/fraudulent scholars) because they align with a human-interest detective work narrative that attracts more clicks. This focus obscures actual shifting trends where retractions increasingly stem from systemic challenges (e.g., fake peer reviews or procedural errors). It limits the public’s understanding of structural vulnerabilities in academic publishing and the “publish or perish” strain on researchers. Change requires the news media to engage more deeply with the procedural and structural aspects of retraction, moving beyond simple accusations to investigative reporting that scrutinises institutional practices. 

RW: There’s a central irony documented in your study: journalists suffer from the same problem as scientists – “pressures of time, attention and resources” – such that the same sorts of reward systems that lead to sloppiness in science also lead to sloppiness in science journalism. What can we do about this?

Iwaniec-Thompson: Newsrooms could build in protected verification time and strengthen editorial support systems to safeguard accuracy. Unfortunately, the rise of “social journalism” (e.g., on Instagram and TikTok) prioritizes community engagement and “what audiences want” over traditional news values, potentially further eroding the time allowed for retrospective verification. 

Academic institutions should support the development of tools to help journalists monitor the scientific record more efficiently. Many journalists in our study expressed enthusiasm for the development of such tools. 

Perhaps a code of conduct for journalism and newsrooms should be more explicit and should strictly enforce the principle to gather, update and correct information throughout the life of a news story. 

RW: You write that the journalists you interviewed “believed that the public considers retractions to be boring, except for high-profile cases.” Do you agree? Is there a reason to report low-profile retractions?

Iwaniec-Thompson: While journalists believe the public finds non-controversial retractions “boring,” our study suggests a clear ethical reason to report them. Ignoring routine cases allows misleading scientific information to persist in the public record. Our paper argues that scientific retraction should be seen as a mandated continuation of the original reporting to bridge the gap between academic self-correction and media accountability.

RW: In your sample, journalists “rarely covered retractions” of work they had previously reported. Retraction Watch recently announced the Ctrl-Z Award to recognize and celebrate “scientists who discover substantial errors in their published work and take meaningful steps to correct the scientific record.” Should there be a Ctrl-Z Award for journalists and, if so, what should it incentivize and reward?

Iwaniec-Thompson: This would be a great idea! Such an award could celebrate journalists who prioritize transparency by publishing follow-up pieces stating, as one journalist we interviewed said, “We were wrong, and here’s what changed.” 

