Retraction Watch

Tracking retractions as a window into the scientific process

Don’t trust an image in a scientific paper? Manipulation detective’s company wants to help.


Mike Rossner. Source: S. Peterson

Mike Rossner has made a name for himself in academic publishing as something of a “manipulation detective.” As the editor of The Journal of Cell Biology, in 2002 he initiated a policy of screening all images in accepted manuscripts, causing the journal to reject roughly 1% of papers that had already passed peer review. Other journals now screen images, but finding manipulation is only the first step – handling it responsibly is another matter. Rossner has started his own company to help journals and institutions manage this tricky process. We talked to him about his new venture.

Retraction Watch: What are the primary services offered by your company?

Mike Rossner: Image Data Integrity (IDI) provides consultation for concerns related to image data manipulation in biomedical research.  Clients can include journal editors/publishers, institutions, funding agencies, or legal counselors who want an opinion about whether images (blots or micrographs) have been manipulated and whether the manipulation affects the interpretation of the data.

I have long advocated that journal editors should take on the responsibility of screening images for evidence of manipulation before publishing articles in their journals, and there are now vendors who provide screening as a service.  IDI does not offer systematic screening at the journal scale but offers consultation in cases of suspected manipulation.  This can include examination of the data in question and/or advice on how to proceed with an investigation, such as when and how to request original data, and when and how to contact a journal, institution, or funding agency.  IDI is willing to undertake that communication on behalf of a client if they so choose.

Labs, departments, or institutions might be interested in IDI’s services in the context of quality analysis of images before submitting manuscripts to journals.  I would hope that this type of screening would not be construed as mistrust of the researchers who did the work and prepared the figures, but instead as an opportunity to educate them about what is acceptable when presenting image data.  To quote from Alice Meadows of ORCID in a recent post in The Scholarly Kitchen, “…a culture of responsibility is not the same as a culture of blame.”  My hope is that IDI can help to foster a culture of responsibility.

RW: What led you to believe biomedicine needed an independent consultant to look into suspected cases of image manipulation?

MR: Although there are others already in this space (such as Alan Price and Jana Christopher), I think there is still an unmet need for expertise in this area.  With more journals screening images before publication and more allegations coming to journals either directly or from post-publication peer-review sites like PubPeer, journal editors may be overwhelmed by the volume of cases they have to handle.  In addition, the in-house editors at some journals may not have the scientific expertise to evaluate the merits of a question raised either through routine image screening or by a third party.

At the institutional level, research integrity officers and investigative committees may look for independent evaluation of allegations they receive.  I have also seen institutional clients follow up an allegation by proactively investigating a whole body of work to detect problems before they might be brought to their attention by an outside party.  Few institutions have staff with the expertise to do this sort of analysis.

RW: Only a few journals systematically screen images. How effective is that screening? And would you like to see more of it?

MR: Since I initiated a policy of screening all images in all manuscripts accepted for publication at The Journal of Cell Biology, a number of journals have instituted systematic screening of images before publication, although some screen only a certain fraction of articles.  I am not aware of any institution that routinely screens manuscripts before submission for evidence of image manipulation, but I hope to find a client who will initiate such a process and begin this trend.

Regarding the effectiveness of routine screening, it is impossible to quantify, because you can’t know how much you are missing.  During my association with image screening at the JCB from 2002 to 2013, we revoked acceptance of 1% of papers because we detected manipulation that affected the interpretation of some piece of data within the paper.  That number remained consistent throughout the years.  I am very glad that those papers were not published in JCB, but, as noted in the pages of Retraction Watch, the system was not foolproof.

RW: Can you briefly summarize the techniques you use to check for image manipulation?

MR: I use the same techniques that we developed at Rockefeller University Press when I was the Managing Editor of JCB.  These involve visual inspection of image files in Photoshop while applying simple adjustments of brightness and contrast.  I described my techniques in an article that I wrote for The Scientist.
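The adjustment Rossner describes – pushing brightness and contrast to make near-background differences visible – can be sketched in a few lines of code. This is a minimal illustration using the Pillow library (my assumption; the JCB screening was done by eye in Photoshop), not a reconstruction of his actual workflow. Hard contrast exaggeration can make background discontinuities, a telltale sign of splicing or selective erasure, visible to the eye.

```python
# Hypothetical sketch of brightness/contrast exaggeration for blot images.
# Assumes the Pillow library; function and parameter names are illustrative.
from PIL import Image, ImageEnhance


def exaggerate(img: Image.Image, contrast: float = 4.0,
               brightness: float = 1.5) -> Image.Image:
    """Return a copy with contrast and brightness pushed hard, so that
    near-background pixel differences (possible splice seams) stand out."""
    gray = img.convert("L")  # most blots are effectively grayscale
    gray = ImageEnhance.Contrast(gray).enhance(contrast)
    gray = ImageEnhance.Brightness(gray).enhance(brightness)
    return gray


# Typical use: exaggerate(Image.open("figure1_blot.png")).show()
# then inspect for abrupt background seams or rectangular patches.
```

This only makes subtle differences perceptible; as Rossner notes, judging whether a visible anomaly is actually manipulation still requires a human with the original data.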

When I have original files or films at hand, I compare them directly to the published images to see if the published version accurately represents the image in the file or on the film.

RW: Do you have any clients yet?

MR: Yes.  IDI has had clients who asked for an opinion about specific figures and clients who asked for screening of a whole body of work, in which they suspected there might be manipulated images.

RW: Who do you expect will become your primary clients? Individual whistleblowers, journals, institutions?

MR: I expect all three will be clients of IDI, although I hope it will be mostly institutions and journals that avail themselves of IDI’s services in a pre-emptive manner to prevent inappropriately manipulated images from getting into the scholarly literature in the first place.  I believe that the long-term benefit of such efforts to enhance the reliability of the literature for the biomedical sciences community would be well worth the initial investment.

RW: Budgets are tight. Which departments within those institutions do you expect to have the funds available for these services?

MR: I hope to hear from any department – cell biology, molecular biology, biochemistry, biology, etc. – that generates the type of image data that I can analyze.  Regarding budgets, in the long term, I hope that funding agencies like the NIH will put some money into grants to back up their calls for improved reproducibility.  In the short term, I hope that institutions will appreciate the value of taking measures to reduce the chance of publishing questionable research.  I don’t necessarily think that screening images before submission will reduce the number of cases an institution will have to investigate, but potential value arguments for pre-screening include:

1. Ease of investigation.  It may be much easier for institutions to obtain the original data supporting a publication at the time of submission – when a student or postdoc is more likely to still be present in the lab – rather than after publication.

2. Reduction in the number of papers involved in an investigation.  If a potential repeat offender is caught during submission of his/her first paper, the investigation will be much easier than if numerous published papers are involved.

3. Protection of the reputation of the institution if questionable work is never published.  An institution with a publicly besmirched reputation may attract less talented staff, and thus perhaps less grant funding.

It should be possible to quantify all of these values.  I hope that the growing awareness of the issue of research integrity (due, in part, to sites like Retraction Watch) will inspire institutional officials to address it upstream of publication with their existing research integrity budgets and begin to quantify the benefits of doing so.


Written by Alison McCook

February 24th, 2016 at 11:30 am

Comments
  • John Krueger February 24, 2016 at 2:30 pm

    The remarkable transparency afforded by PubPeer, which in effect is crowd-sourcing of image screening, underscores the fact that most questioned images are, by definition, obvious. Indeed, the most common question asked at ORI when discussing image cases around the table was “Where were the coauthors or reviewers?” The rise in image screening (I decline to call it forensics) raises two broader issues (if not more).

    I. Scientific images appear in many forms, all of which have features that identify them as unique and that can reveal signs of “inauthenticity.” Once those signs are made visible, they can be perceived. When that happens, it does not take a lot of specialized expertise to determine whether the data are not fully what is claimed (Krueger, What Do Retractions Tell Us?, https://ori.hhs.gov/images/ddblock/dec_vol21_no1.pdf). Most scientists have that expertise, which is one reason why the forensic tools were developed at ORI. The Tools and Read Me files explaining the principles that I developed are available at https://ori.hhs.gov/advanced-forensic-actions.

    Consider the following general definition: an image in science is simply 1) a visual-graphical representation of data; 2) from the results of an experiment (as raw or primary data); 3) which is reproducibly recorded by an instrument or device; and 4) has intrinsic features that can reveal its uniqueness and/or lack of authenticity (Krueger, AAAS-ABA presentation, 2013).

    Thus, any self-respecting scientist who professes to believe in data ought to be able to weigh its merit, whether it is a blot, a FACS plot, or a confocal image! Is it a sign of our times that some institutions simply don’t want to take the time or the responsibility for expressing what is simply a scientific opinion?

    II. The real issue for the integrity of science is what is done about whatever is found by private consultants. At a time when the Journal of Cell Biology (JCB) reported it had uncovered between 28 and 33 examples of serious image manipulation, I recall that ORI had only one case involving JCB reported to it by institutions. Mike Rossner himself reported finding that a problem image caught by JCB was subsequently published by another journal. That remarkable level of under-representation may have been due to JCB’s [then] announced practice of notifying only the corresponding author. Evidently, that practice served the interest of the journal but not that of the research community. (I think JCB has subsequently been enlightened.)

    Whatever the image consultant finds is legitimately the property of the employing entity, and it is rightfully the latter’s prerogative as to what to do with the results. My concern is whether the rise of consultants who do image screening (as opposed to detailed forensics and a review of evidence in the research record) will allow scientists to abrogate their responsibility to make these tough judgments.

    John Krueger

  • Mike Rossner February 24, 2016 at 3:37 pm

    Hi John,

    Thanks for your comments. For the record, the policy when I was at JCB was to inform the institution if the corresponding author did not take responsibility for the problem and indicate that steps had been taken to prevent it from happening in the future. I do not know what the current policy is. It would be interesting to hear from journal editors on this point.

    Regarding the responsibility of scientists, I think scientists on an investigative committee are being more responsible by engaging the services of a consultant rather than abrogating responsibility. It is likely that the consultant, who has experience using the (albeit simple and universally available) analysis tools, will detect more problems than a committee member with less experience and less time. The consultant can make a recommendation on how to proceed if problems are detected, but the scientists on the committee will have to make the final judgment based on all of the evidence available to them.
