Sleuthing out scientific fraud, pixel by pixel

Lars Koppers

When it comes to detecting image manipulation, the more tools you have at your disposal, the better. In a recent issue of Science and Engineering Ethics, Lars Koppers at TU Dortmund University in Germany and his colleagues present a new way to scan images. Specifically, they created open-source software that compares pixels within or between images, looking for similarities that can signify portions of an image have been duplicated or deleted. Koppers spoke with us about the program described in “Towards a Systematic Screening Tool for Quality Assurance and Semiautomatic Fraud Detection for Images in the Life Sciences,” and how it can be used by others to sleuth out fraud.

Retraction Watch: Can you briefly describe how your screening system works?

Lars Koppers: We wrote a set of functions for the programming language R that compare pixels, or neighborhoods of pixels, between different images or between different areas of a single image. In any image, many pixels are identical simply by chance. But if two regions contain identical pixels in the same relative positions to one another, that can be a sign of a duplicated area. To find such duplicated areas, we shift two images against each other, or one image against itself, and count the overlapping identical pixels. Shifts that contain more identical pixel pairs than expected can be selected for further examination, because they may be a sign of deleted data.
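
For readers who want to see how such a count might be implemented, here is a minimal sketch in R. This is an illustration only, not code from the authors' FraudDetTools package, and it assumes each image has been read in as a numeric matrix of grayscale pixel values:

```r
# Sketch: count how many pixels coincide when img2 is shifted by (dx, dy)
# relative to img1. Both images are assumed to be numeric matrices of the
# same size, and |dx|, |dy| are assumed smaller than the image dimensions.
count_identical_shift <- function(img1, img2, dx, dy) {
  nr <- nrow(img1)
  nc <- ncol(img1)
  rows <- max(1, 1 + dy):min(nr, nr + dy)   # rows where the two grids overlap
  cols <- max(1, 1 + dx):min(nc, nc + dx)   # columns where they overlap
  sum(img1[rows, cols] == img2[rows - dy, cols - dx])
}
```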

RW: You ask a pertinent question – how can you detect when something has been deleted from an image?
LK: If I want to delete existing data in an image, I have to replace it with something. Replacing it with a monochrome area (e.g., black or white) is too noticeable. An easy way to hide a deletion is to copy and paste some background from another part of the image. Now this background pattern exists twice: the original and the pasted version, which hides the unwanted data. If we want to find signs of deleted data, we have to look for duplicated background. With this procedure we cannot restore the deleted data, but we can make the manipulated areas visible.
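
The same counting idea, applied to an image shifted against itself, gives a map of within-image repetition, which is where copied-and-pasted background would show up. A rough sketch building on the function above, again an illustration rather than the published package (the zero shift is skipped because an image trivially matches itself there):

```r
# Sketch: identical-pixel counts for every shift of an image against itself.
# Rows of the result index the vertical shift dy, columns the horizontal
# shift dx. Assumes count_identical_shift() from the previous sketch.
self_shift_counts <- function(img, max_shift = 25) {
  shifts <- -max_shift:max_shift
  counts <- matrix(0L, nrow = length(shifts), ncol = length(shifts),
                   dimnames = list(as.character(shifts), as.character(shifts)))
  for (i in seq_along(shifts)) {        # i indexes dy
    for (j in seq_along(shifts)) {      # j indexes dx
      if (shifts[i] == 0 && shifts[j] == 0) next   # trivial self-match
      counts[i, j] <- count_identical_shift(img, img, shifts[j], shifts[i])
    }
  }
  counts
}
```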

RW: How did you determine whether your system is effective at rooting out true manipulation, without false positives?
LK: Our algorithms find outlier shifts, i.e., shifts that contain “more than usual” identical pixels. Unusual values are not necessarily the result of manipulation. A next step could be visual evaluation by an expert. In a (semi-)automated system, the challenge would be to find a threshold with high sensitivity that does not produce too many false positives. The important thing is that the algorithm's findings are not a final judgment on whether an image has been manipulated.
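
One hypothetical way to pick such outlier shifts out of the count matrix is to flag everything above a threshold. In the sketch below the threshold is a crude placeholder (a multiple of the median count); choosing it well is exactly the open problem Koppers describes, and anything flagged would still go to an expert:

```r
# Sketch: flag shifts whose identical-pixel count is unusually high.
# Assumes a count matrix labelled with the shifts, as in the sketch above.
# The multiple-of-the-median threshold is only a placeholder.
flag_outlier_shifts <- function(counts, k = 5) {
  threshold <- k * median(counts[counts > 0])
  hits <- which(counts > threshold, arr.ind = TRUE)
  data.frame(dy = as.integer(rownames(counts))[hits[, 1]],
             dx = as.integer(colnames(counts))[hits[, 2]],
             identical_pixels = counts[hits])
}
```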

RW: Other researchers and journals are already screening images for signs of manipulation – how is your approach different from the rest?
LK: We do not offer a complete software solution or consultation. Our algorithms are open source, so everyone can use them or implement them in their own scanning routine. Our aim was to produce results that can be fed into an automated routine. That’s why our output is not a processed image but a matrix containing the number of identical pixels for every possible shift. In principle, this allows an automated scanning routine that filters out only those images containing shifts with a suspicious number of identical pixels. The challenge in an automated process would be choosing the right threshold, because every positive match has to be examined by an expert.
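
A screening routine built on that matrix output might then look roughly like the following sketch. It reuses the helper functions above and, purely as an assumption for illustration, uses the png package to read images into numeric matrices; any reader that yields a matrix of pixel values would do:

```r
# Sketch of an automated screening pass over a folder of PNG images.
# png::readPNG() returns values in [0, 1]; rounding to 8-bit levels keeps the
# exact-equality comparison meaningful. Assumes self_shift_counts() above.
library(png)

screen_folder <- function(dir, max_shift = 25) {
  files <- list.files(dir, pattern = "\\.png$", full.names = TRUE)
  results <- lapply(files, function(f) {
    img <- readPNG(f)
    if (length(dim(img)) == 3) img <- img[, , 1]   # keep a single channel
    img <- round(img * 255)                        # back to discrete levels
    counts <- self_shift_counts(img, max_shift)
    data.frame(file = f, max_identical = max(counts))
  })
  do.call(rbind, results)   # images with a large max_identical go to an expert
}
```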

RW: How can other researchers and journals adopt your technique?
LK: Our code is open source, so everyone can implement and enhance it in their own scanning routine. Pixelwise comparisons only cover simple copy-and-paste manipulations; they do not yet work on rescaled images. These algorithms are our contribution to a possible toolbox of automated scanning algorithms. Only a variety of different algorithms ensures that all kinds of image manipulation can be found. Developing tools to find manipulated images faces the same problem cryptographers face: every tool can also be used to optimize for “the other side.” For instance, those who manipulate images can use each algorithm to make sure their manipulation cannot be found by that algorithm – so having as many tools as possible at your disposal gives you the best chance of catching manipulations.


12 thoughts on “Sleuthing out scientific fraud, pixel by pixel”

  1. Bravo! More of this sort of thing please!

    Of course, having a tool and actually getting journals to use it are two completely different challenges. As I’ve said before, the existence of free plagiarism-detection software for several years does not seem to have persuaded the bulk of the publishing industry to take advantage of this tool.

    But here’s a problem… the code is “open source” but I don’t see a link. Also the paper itself doesn’t appear to contain a URL for github or another code repository. Koppers’ website also doesn’t seem to contain a link (but I don’t do German very well and maybe I missed it). So, how exactly does one go about downloading and running this code? It seems disingenuous to talk all about a new tool that’s open source, without actually providing people with sufficient information to take advantage of it.

    1. From the linked paper: “The algorithms are all part of a newly developed R […] software package FraudDetTools, which is available from the authors.”

  2. One novel approach Koppers et al. introduced in their paper was the use of “test images.” I had created some sample “teaser” case images long ago for the ORI website, with the fantasy then being that there ought to be a standardized image repository for rationally testing the effectiveness of any new forensic approach to examining questioned research data. (The idea sprang from the first TV test patterns I watched, waiting for daytime television to start ;-)). Koppers et al. are the first to apply one for its intended purpose. ORI, or perhaps NLM, NIST, the ImageJ website, PubPeer . . . who knows, ought to host such an expanded repository. The test images ought to range from continuous-tone blots to discrete features such as FACS scattergrams. The biggest new challenge, based on recent experience, will be assessing colors in immunofluorescence data. Brookes is right about the need for access.

    1. Indeed, test images are a great idea. In particular, I wonder how well the software can still identify pixel identities after images have suffered degradation due to jpeg image compression (which is all too common in scientific literature).

      For a PubPeer discussion, I had created an image in which a single gel lane was reproduced 5 times, but jpeg image compression introduced subtle pixel differences among the copies. The point of the exercise was to prove that subtle non-identities between images do NOT demonstrate independent origin, and that instead one should apply Krueger’s criterion of whether they are “too similar to be different” when assessing duplications. Related to the current discussion, I wonder if LK’s software would have difficulty finding duplications after compression-induced degradation or if it is robust to such errors. So, perhaps such purposefully-made examples should be considered for inclusion among test images.

      Here’s a link to the above-referenced test image, and to the PubPeer discussion:
      http://i.imgur.com/HxiR85F.jpg
      https://pubpeer.com/publications/43D229CE50CAC900509F635F611EBA#fb43507

      -George

      1. Your nice example (jpeg compression producing a ‘false negative’) raises a very important ‘flip-side’ question that is rarely asked: Why don’t (more) journals adopt some minimal image standards, some criteria for the quality of the data that they accept for publication and then necessarily ‘modify’ via their printing process? As a first try, a simple standard might be: can the reader see ‘features’ that would, to the ordinary scientist, be identifiably unique? Some journals do publish quality images, others don’t. Images reveal a journal’s standards. (I thought CBE had some advice on that?)

  3. “If we want to find signs of deleted data, we have to look for duplicated background. With this procedure, we cannot restore the deleted data, but we make the manipulated areas visible.”

    Okay, so if you want to paste some background: start with an original image that is sufficiently large, crop it to 50% (which is what gets submitted), and paste in background taken from the part that was cropped out, so there is no duplication within the submitted image. Is it still possible to detect the ‘break’ in the pixel pattern that this will cause?

    1. A break in the pixel pattern is detectable in principle. We chose another way to avoid problems with jpeg compression, which also produces pixel breaks.

    2. It depends on the constraints: in the Science publication in ORI’s Deb finding, the basis for Figures 3G, H, and I was falsely created in part by moving blastocysts away from the center of the embryo towards its periphery. On the originals, autocorrelation in ImageJ (or with Photoshop & Reindeer Software) of the empty patches identified other areas of storm that should never have been the same.

      One interesting application remains when features are thought to be missing. An interesting test might be whether you could detect signs of a missing band (specifically, one that had been eliminated using “Content Aware Fill”). I could do so, but only by gut labor, in one sample image created by (and given to me by) Eric Pesanelli (FASEB), but I thought that surely there should be a way to automate that?

  4. Somehow I fail to understand where exactly the novelty of the approach lies. The literature and the web are full of image-manipulation detection tools, some of them based on autocorrelation, which should be sufficiently similar to the described algorithm. Though a clever implementation might be interesting. However, if you are looking for a tool to test some teaser images or PubPeer examples, there are plenty of solutions out there. (As prior posters pointed out: there is no software linked on Koppers’ homepage, and I do understand German.)

    Getting publishers to finally use it is a different challenge, as outlined above.

    1. A positive result for a “one-off,” taken from an arbitrary source not created with standardization in mind, is not a very scientific way of establishing the validity of a forensic approach. Would it not be better to have some standard test images, designed for that purpose, not unlike the old video test patterns?
