Lifted figure prompts retraction of Oncogene paper by Roman-Gomez

The journal Oncogene has retracted a 2005 paper from a group led by Jose Roman-Gomez, a Spanish researcher who appears to be a serial image manipulator/misappropriator.

The article, “Promoter hypomethylation of the LINE-1 retrotransposable elements activates sense/antisense transcription and marks the progression of chronic myeloid leukemia,” was published online in September 2005 and has been cited 106 times, according to Thomson Scientific’s Web of Knowledge.

Now comes this:

The authors wish to retract the November 3, 2005, paper cited above. Figure 6a in the published manuscript was inappropriately reproduced from figure 1 in a previously published paper from Rosas SL, Koch W, da Costa Carvalho MG, Wu L, Califano J, Westra W, Jen J, Sidransky D. Promoter hypermethylation patterns of p16, O6-methylguanine-DNA-methyltransferase, and death-associated protein kinase in tumors and saliva of head and neck cancer patients. Cancer Research 61, 939–942, 1 February 2001. The manipulation of the figure was performed by Dr Roman-Gomez. None of the other authors were involved in or were aware of these events. The authors apologize to the readers, reviewers, and editors of Oncogene for publishing this erroneous image. All the authors agree to the retraction except Dr Roman-Gomez.

In two previous cases, the Journal of Clinical Oncology issued expressions of concern but allowed Roman-Gomez’s group to keep its papers despite the questionable figures. Oncogene evidently felt the best solution in this case was retraction, a decision with which we’re inclined to agree given the history here.

We tried to reach Oncogene’s editor for comment and will update this post if we learn more.

We spoke with Sidransky, who was surprised to hear about the purloined figure, although it wasn’t the first time the Hopkins researcher had been on the receiving end of such a misappropriation.

Perhaps, we wondered, he might consider making his own figures a little less attractive…

Thoughts on “Lifted figure prompts retraction of Oncogene paper by Roman-Gomez”

  1. I’ve seen re-use of figures from one’s own lab, but stealing something from another group is a new one for me.

  2. Yes – it happens – you could call it ‘plagiofabrication’ or simply ‘poaching’. Roman-Gomez has taken entire images from the papers of at least 3 different groups. It’s virtually impossible to detect (note that this paper was published 8 years ago) as the images look perfect – unless you were the one who produced the originals…

    For sure this paper requires retraction, but one could ask wider questions – was ANY of the work in this paper actually undertaken? Do the patients described exist? Put another way, is Roman-Gomez the fall guy for institutional fraud? When did the fraud stop? Surely other unretracted papers must still exist. As no institutional committee is involved, how can anyone know what happened?

    Without answers to these questions, the community should simply view everything from this group, past and present, as unreliable.

  3. It is interesting to read his answer in a recent interview after receiving a research prize two years ago.
    Q What would you say to your MD colleagues to encourage them to do research?
    JR-G The important thing is to get started in research. Research is almost an ethical matter: you bring the experience of what you do and know to the benefit of other colleagues and patients. It is a wonderful way to improve clinical practice and a great antidote to burnout at work.
    (Probably getting the figures from other papers was not very exhausting.)

  4. This got me musing – what if the thief inadvertently stole falsified data from another paper? I’d pretty much define irony that way.

    Reminds me of the transgenic sorghum that (eventually) never was, but that somehow generated strong positive results in subsequent papers.

    1. My previous comment absolutely in no way refers to the situation reported in this blog post, by the way. As I said, just musing on a Friday afternoon.

  5. There are automated tools to identify near-duplicate text; why isn’t there something like this for figures?

    1. Great idea.

      I did try the ‘search using image’ option on Google Images and I did get one hit with these Roman-Gomez poaching examples.

      But generally the matches are only crudely similar, e.g. DNA gels pick up black & white photographs of almost anything.

    2. There isn’t. Yale has a database of 1.2 million images, but it didn’t find the SW Lee reuse of images. You have to do as the departmental secretary said: “look at all the papers”. There isn’t an “app” for it. Sometimes the images are altered: “re-sized” like French fries, stretched, or with things added to distract your eye, so a computer may have a hard time. It is not the best of all worlds.

      1. I agree – most cases of image fraud will pass through peer review – how often do we hear the phrase ‘eagle-eyed’ to describe those who find problems?

        Perhaps we shouldn’t be too depressed about the lack of such a tool – if for the sake of argument it existed, and it were applied by journals at the review stage, what would happen? The journal could only point out to authors that their images appeared to be the same as unrelated images. In some cases this might have a positive effect, alerting authors until then unaware of the fraud so that the deeper issue of why images were being duplicated could be dealt with. But I suspect in most cases the authors would simply find another image and resubmit (perhaps to another journal) and be more careful with the fraud next time.

        A relevant additional argument is that most fraud doesn’t involve images but falsification/fabrication of numerical data, which is nearly impossible to spot (although error analysis can help in theory; a toy illustration follows this sub-thread). I imagine, but have no evidence to back this up, that data fraud dwarfs image fraud in frequency.

        Continuing this argument, image fraud is actually a specific marker of a fundamentally flawed approach to science. Put another way, the sections of a paper referring to images aren’t the real problem. For example, the figures that Roman-Gomez poaches are often claimed to be ‘representative’, e.g. 10 patient samples out of 500. What’s more important to consider is that if the representative figure is stolen, there can’t be any figures for the other 490 patients, i.e. the whole thing is a fiction. So image fraud is a marker of deeper problems. It also completely undermines the review process. Unfortunately, many readers, institutions and journal editors misunderstand this point (perhaps deliberately) and allow authors simply to replace individual falsified images without further ado (as noted above, Roman-Gomez has actually managed to do this on a couple of occasions with other instances of image fraud – incredibly, there is still only an erratum for Roman-Gomez et al. J Clin Oncol. 2005 Oct 1;23(28):7043-9).

        If I had time (and no fear for my career) it would be relatively easy to examine a large number of cases of clear image fraud (I imagine there are at least 20-30 stories, with probably over a hundred papers, on the Science Fraud website) and describe how journals handle the problem i.e. do they insist on retraction or do they allow correction? One could write it in non-judgmental language without using the word ‘fraud’ and compare responses in different journals and fields; from what I know JBC would score ‘fastidious’ as they now insist on retraction while Nature would probably come out ‘easygoing’ (I think there have been at least 5 cases where they allowed megacorrections of false images in the last couple of years – who remembers the one about the problem being due to poor record-keeping?).

        If journals and institutions are mishandling and misunderstanding image fraud, which we can at least see as outsiders, you can be sure they are doing the same and worse for data fraud, which I am sure we never hear about. To me that is why image fraud remains the clearest window we have into understanding scientific misconduct.

        1. “image fraud is actually a specific marker of a fundamentally flawed approach to science”.

          “So image fraud is a marker of deeper problems. It also completely undermines the review process.”

          “image fraud remains the clearest window we have into understanding scientific misconduct.”

          Couldn’t have said it better.
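        As a toy illustration of the ‘error analysis’ mentioned in this sub-thread, here is a minimal first-digit (Benford’s law) screen in Python. Everything in it is an invented example rather than an established fraud-detection tool, and many legitimate datasets do not follow Benford’s law, so a large deviation is a prompt for closer scrutiny, never proof of fabrication.

        ```python
        # Toy "error analysis" screen: compare first-digit frequencies in a
        # set of reported numbers against Benford's law. Heuristic only.
        import math
        from collections import Counter

        def first_digit(x: float) -> int:
            """Return the leading nonzero digit of a number."""
            return int(f"{abs(x):e}"[0])  # scientific notation, e.g. '3.27e+04'

        def benford_deviation(values) -> float:
            """Chi-square-style distance between observed first digits and Benford."""
            digits = [first_digit(v) for v in values if v != 0]
            if not digits:
                return 0.0
            n = len(digits)
            counts = Counter(digits)
            deviation = 0.0
            for d in range(1, 10):
                expected = n * math.log10(1 + 1 / d)  # Benford's expected count
                observed = counts.get(d, 0)
                deviation += (observed - expected) ** 2 / expected
            return deviation

        # Hypothetical usage on numbers scraped from a suspect table:
        # print(benford_deviation(values_from_table))
        ```

        In practice such a screen could at best prioritize tables for manual checking, not identify fraud on its own.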

    3. There is at least one Photoshop plug-in to investigate manipulation within individual images (splices, erasures, etc.). Unfortunately the one I found is very expensive. Finding manipulations across publications would require a large database of identified images. I suspect the plagiarism detection sites built up their databases via volunteer submissions (you submit text, it becomes part of their database for the next person). I don’t know how one would make a comprehensive database of scientific images, though the comparison step itself could be prototyped, as sketched below.
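      To make the pairwise-comparison step concrete, here is a minimal sketch using perceptual hashing. It assumes the open-source Pillow and ImageHash Python libraries and a hypothetical folder of figure panels already extracted from papers; as noted above, resized, stretched, or spliced images would often defeat so simple a hash.

      ```python
      # Minimal sketch of the comparison step: flag near-duplicate figures by
      # perceptual hash. Assumes the open-source Pillow and ImageHash libraries
      # (pip install Pillow ImageHash); folder and file names are hypothetical.
      from pathlib import Path

      import imagehash
      from PIL import Image

      def hash_figures(folder: str) -> dict:
          """Compute a perceptual hash for each PNG figure in a folder."""
          return {p.name: imagehash.phash(Image.open(p))
                  for p in Path(folder).glob("*.png")}

      def find_near_duplicates(hashes: dict, max_distance: int = 5):
          """Yield pairs of figures whose hashes are within a Hamming distance."""
          names = sorted(hashes)
          for i, a in enumerate(names):
              for b in names[i + 1:]:
                  distance = hashes[a] - hashes[b]  # Hamming distance
                  if distance <= max_distance:
                      yield a, b, distance

      if __name__ == "__main__":
          figures = hash_figures("extracted_figures")  # hypothetical folder
          for a, b, d in find_near_duplicates(figures):
              print(f"Possible reuse: {a} vs {b} (distance {d})")
      ```

      The harder problem, as this comment points out, is assembling the comprehensive database of published figures that such a comparison would run against.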

  6. I couldn’t agree more with amw. Terms like “image manipulation” or “misappropriation” can easily serve to hide fraud and fabrication of data. The distinction between mere plagiarism (presenting someone else’s ideas or words as your own) and copying data or images (which involves stating that you did experiments you didn’t do) is enormous. Retraction notices must start making this distinction. Authors for whom the latter category of misconduct is proven should simply be suspended from science (publication, grants, conferences, etc.) until a thorough investigation of all their work, and that of their lab, has been undertaken.
