To catch a fraudster: Publisher’s image screening cuts down errata, “repeat offenders”

Christina Bennett

When a publisher rolls out image screening across its journals over an eight-year period, some surprising things happen. For one, researchers whose papers were flagged are less likely to make the same mistake again. That’s according to new findings presented by the American Physiological Society (APS), which over that period steadily expanded its checks of images in accepted papers for splicing and other manipulations before publication. (APS is not the only publisher to institute such checks.) At the recent International Congress on Peer Review and Scientific Publication, the APS presented findings from seven journals, spanning 2009, when very few articles were checked, through 2016, when all seven journals screened images before publishing them. We spoke with APS associate publisher for ethics and policy Christina Bennett about the data — which also showed that, over time, fewer papers were flagged for image concerns, and those that were flagged were addressed prior to publication (which reduced the number of corrigenda published to correct image errors). What’s more, the percentage of papers with questionable images has fallen by 0.7 percentage points each year since 2013.

Retraction Watch: What prompted APS journals to start doing image checks?

Christina N. Bennett: APS has been doing image checks on all accepted articles prior to publication for several years. When we first began, we thought the checks would serve a dual role. First, we would better ensure that the digital images published in our 13 research and review journals were free from poor presentation practices and that any major digital modifications were fully described and declared. Second, we thought it would help educate our authors about APS standards for digital image presentation. Our standards are similar to the image presentation guidelines recommended by The Journal of Cell Biology.

RW: Can you tell us more about the forensic tools you use?

CB: We use forensic tools that were designed by the HHS Office of Research Integrity for use in Photoshop. The tools provide “fingerprints” of how the images were constructed. If we determine that an image has been modified in a way that was not disclosed to the reviewers and, ultimately, the readership, then we reach out to the author and ask for clarification, original captures, and corrections, if appropriate.
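For a flavor of what such a screening pass can expose, here is a minimal sketch. It is not one of the ORI tools themselves (those are distributed as Photoshop actions); it simply applies the kind of aggressive contrast stretch forensic examiners use to make faint seams from splicing or erasing visible. Python with Pillow and NumPy is assumed, and the function and file names are illustrative.

```python
# Illustrative sketch only: an aggressive contrast stretch, similar in
# spirit to the "levels" adjustments used in forensic image review.
# This is NOT the ORI Photoshop toolset; names and the input file are
# invented for the example.
import numpy as np
from PIL import Image

def exaggerate_contrast(path, low_pct=2, high_pct=98):
    """Stretch the central band of gray levels across the full 0-255
    range so that subtle discontinuities (a pasted-in gel lane, an
    erased band) become visible to the eye."""
    # Collapse the image to grayscale intensities.
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Gray levels bounding the central mass of the histogram.
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    # Remap [lo, hi] onto [0, 255], clipping the tails.
    stretched = np.clip((gray - lo) / max(hi - lo, 1e-9) * 255.0, 0, 255)
    return Image.fromarray(stretched.astype(np.uint8))

# Hypothetical usage: write a stretched copy next to the original panel.
exaggerate_contrast("figure_panel.png").save("figure_panel_stretched.png")
```

Real forensic screening goes well beyond this (overlays of suspect panels, noise-pattern comparisons), but a single exaggerated-contrast view is often enough to flag an image for the kind of follow-up with authors that Bennett describes.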

RW: Which of the results stood out to you the most? The marked decrease in the number of manuscripts flagged for image concerns? The reduction in the number of corrigenda published to correct image concerns (from 22 corrigenda issued for 25 flagged papers in 2009 to none for the 71 papers flagged in 2016, since those issues were identified before the papers were published)? The fact that the vast majority of corresponding authors who were flagged before did not have issues in subsequent papers?

CB: The decrease in requests for image corrections year on year was the most interesting finding of our analysis. While we knew that the caseload was decreasing each year, we had not done a detailed evaluation of what was changing. It was important for us to evaluate how our process had evolved and how the types of cases and outcomes had changed over time. In 2009, this type of digital image assessment was new to both authors and staff. Now, it is part of the publishing workflow at a number of publishers, and authors are more aware that their work is being evaluated for both science and ethics (conflicts of interest, plagiarism, image presentation).

RW: Why do you think there was such a decrease in problems from authors who’d had image issues in the past? Specifically, you found that, among the 190 corresponding authors who had been flagged for problems in prior submissions and submitted another manuscript, only eight (about 4%) were flagged again.

CB: I would like to think that we do not see many “repeat offenders” because they learned from the initial query and now take additional steps during the manuscript preparation process to confirm that they have the original captures for all digital images used in the paper and that the images presented in the paper are consistent with the original captures. And to be sure, we ask those questions during submission.

RW: What’s the current protocol for screening images at APS journals?

CB: Currently, we screen the digital images in the 13 research and review journals published by the APS prior to publication. Images in the seven American Journal of Physiology (AJP) titles are screened prior to early view publication. Images in the non-AJP titles are screened after early view publication but prior to final publication. We do it this way because the articles published in the AJP titles include more digital images than those in our other titles.


5 thoughts on “To catch a fraudster: Publisher’s image screening cuts down errata, “repeat offenders””

  1. We all have John Krueger to thank for the ORI forensic tools. He left the research integrity community a powerful tool. Thank you, John!

1. Absolutely right on, Ann.

John Krueger, with a background as a Ph.D. biophysical physiologist in cardiac imaging, joined us in the early years of ORI and, on his own initiative, created and refined the ORI forensic image analysis tools over two decades; they have been used countrywide and worldwide to provide proof that research images were falsified or fabricated.

  2. While it is nice that image screening leads to fewer manuscripts submitted with problems, this does not mean that the quality of the published science has improved. There are three possible explanations for the decrease in image problems from “repeat offenders” following the use of image screening tools by publishers: (1) fundamentally honest scientists are being more careful in manuscript preparation; (2) fundamentally dishonest scientists are avoiding practices, such as duplicating images, that are caught by the image screening software; (3) dishonest scientists were persuaded by the use of image screening software to give up fraud and to do honest science.
    In the first case, the reproducibility of the work from the “repeat offenders” should have been high to begin with and would remain so after image screening was instituted. In the second case, the reproducibility should have been low and would remain so. In the third case, the reproducibility of published work should have been low but would increase after the institution of image screening.
    Unfortunately, the APS has no data on the effect of image screening on data reproducibility. Thus, all that they can say about image screening is that it reduces grist for the mills of sites such as PubPeer that revel in uncovering improper image manipulation. They can say nothing about effects on the quality of science, which fundamentally requires measuring reproducibility.
    While it would be nice if possibilities 1 and 3 were predominant, I am skeptical and remain concerned that the predominant effect of image screening is to force dishonest scientists to become more careful about how they commit their crimes against science.

  3. A fourth possibility is that dishonest scientists avoid journals with extensive checking, especially after a first warning. That’s good from the journals’ point of view but doesn’t entirely address the bigger problem of fraud.
