Mike Rossner has made a name for himself in academic publishing as something of a “manipulation detective.” As editor of The Journal of Cell Biology, in 2002 he initiated a policy of screening all images in accepted manuscripts, which led the journal to revoke acceptance of roughly 1% of papers that had already passed peer review. Other journals now screen images, but finding manipulation is only the first step – handling it responsibly is another matter. Rossner has started his own company to help journals and institutions manage this tricky process. We talked to him about his new venture.
Retraction Watch: What are the primary services offered by your company?
Mike Rossner: Image Data Integrity (IDI) provides consultation on concerns related to image data manipulation in biomedical research. Clients can include journal editors/publishers, institutions, funding agencies, or legal counsel who want an opinion about whether images (blots or micrographs) have been manipulated and whether the manipulation affects the interpretation of the data.
I have long advocated that journal editors should take on the responsibility of screening images for evidence of manipulation before publishing articles in their journals, and there are now vendors who provide screening as a service. IDI does not offer systematic screening at the journal scale but offers consultation in cases of suspected manipulation. This can include examination of the data in question and/or advice on how to proceed with an investigation, such as when and how to request original data, and when and how to contact a journal, institution, or funding agency. IDI is willing to undertake that communication on behalf of a client if they so choose.
Labs, departments, or institutions might be interested in IDI’s services as a quality-control check on images before submitting manuscripts to journals. I would hope that this type of screening would not be construed as mistrust of the researchers who did the work and prepared the figures, but instead as an opportunity to educate them about what is acceptable when presenting image data. To quote from Alice Meadows of ORCID in a recent post in The Scholarly Kitchen, “…a culture of responsibility is not the same as a culture of blame.” My hope is that IDI can help to foster a culture of responsibility.
RW: What led you to believe biomedicine needed an independent consultant to look into suspected cases of image manipulation?
MR: Although there are others already in this space (such as Alan Price and Jana Christopher), I think there is still an unmet need for expertise in this area. With more journals screening images before publication and more allegations coming to journals either directly or from post-publication peer-review sites like PubPeer, journal editors may be overwhelmed by the volume of cases they have to handle. In addition, the in-house editors at some journals may not have the scientific expertise to evaluate the merits of a question raised either through routine image screening or by a third party.
At the institutional level, research integrity officers and investigative committees may look for independent evaluation of allegations they receive. I have also seen institutional clients follow up an allegation by proactively investigating a whole body of work to detect problems before they might be brought to their attention by an outside party. Few institutions have staff with the expertise to do this sort of analysis.
RW: Only a few journals systematically screen images. How effective is that screening? And would you like to see more of it?
MR: Since I initiated a policy of screening all images in all manuscripts accepted for publication at The Journal of Cell Biology, a number of journals have instituted systematic screening of images before publication, although some screen only a certain fraction of articles. I am not aware of any institution that routinely screens manuscripts before submission for evidence of image manipulation, but I hope to find a client who will initiate such a process and begin this trend.
Regarding the effectiveness of routine screening, it is impossible to quantify, because you can’t know how much you are missing. During my association with image screening at the JCB from 2002 to 2013, we revoked acceptance of 1% of papers because we detected manipulation that affected the interpretation of some piece of data within the paper. That number remained consistent throughout the years. I am very glad that those papers were not published in JCB, but, as noted in the pages of Retraction Watch, the system was not foolproof.
RW: Can you briefly summarize the techniques you use to check for image manipulation?
MR: I use the same techniques that we developed at Rockefeller University Press when I was the Managing Editor of JCB. These involve visual inspection of image files in Photoshop while applying simple adjustments of brightness and contrast. I described my techniques in an article that I wrote for The Scientist.
When I have original files or films at hand, I compare them directly to the published images to see if the published version accurately represents the image in the file or on the film.
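The screening approach described above relies on pushing brightness and contrast to extremes so that subtle background differences – a common telltale of a spliced-in band or an erased region – become visible to the eye. As a rough illustration only (not Rossner’s actual workflow, and with all names and values invented), a simple contrast stretch over a narrow background range can be sketched like this:

```python
def stretch_contrast(pixels, low, high):
    """Map intensity values in [low, high] onto [0, 255], clipping
    anything outside that range. This mimics an aggressive
    brightness/contrast adjustment in an image editor."""
    out = []
    for p in pixels:
        if p <= low:
            out.append(0)
        elif p >= high:
            out.append(255)
        else:
            out.append(round(255 * (p - low) / (high - low)))
    return out

# A toy 1-D "scanline" from a blot background: noise around an
# intensity of 240, except one region around 244-245 where a band
# might have been pasted in from another exposure.
scanline = [240, 241, 239, 240, 244, 245, 244, 244, 240, 239, 241, 240]

# At normal display levels the ~4-unit difference is invisible, but
# stretching the narrow background range 238-246 across the full
# 0-255 display range makes the pasted region stand out sharply.
stretched = stretch_contrast(scanline, 238, 246)
print(stretched)
```

After the stretch, the suspect region maps to values near white while the surrounding background stays dark, so a seam that was imperceptible at normal levels becomes obvious. Real screening works on 2-D images in an editor such as Photoshop, but the underlying principle is the same.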
RW: Do you have any clients yet?
MR: Yes. IDI has had clients who asked for an opinion about specific figures and clients who asked for screening of a whole body of work, in which they suspected there might be manipulated images.
RW: Who do you expect will become your primary clients? Individual whistleblowers, journals, institutions?
MR: I expect all three will be clients of IDI, although I hope that institutions and journals in particular will avail themselves of IDI’s services pre-emptively, to prevent inappropriately manipulated images from entering the scholarly literature in the first place. I believe that the long-term benefit of such efforts – a more reliable literature for the biomedical sciences community – would be well worth the initial investment.
RW: Budgets are tight. Which departments within those institutions do you expect to have the funds available for these services?
MR: I hope to hear from any department – cell biology, molecular biology, biochemistry, biology, etc. – that generates the type of image data that I can analyze. Regarding budgets, in the long term, I hope that funding agencies like the NIH will put some money into grants to back up their calls for improved reproducibility. In the short term, I hope that institutions will appreciate the value of taking measures to reduce the chance of publishing questionable research. I don’t necessarily think that screening images before submission will reduce the number of cases an institution will have to investigate, but potential value arguments for pre-screening include:
1. Ease of investigation. It may be much easier for institutions to obtain the original data supporting a publication at the time of submission – when a student or postdoc is more likely to still be present in the lab – rather than after publication.
2. Reduction in the number of papers involved in an investigation. If a potential repeat offender is caught during submission of his/her first paper, the investigation will be much easier than if numerous published papers are involved.
3. Protection of the institution’s reputation if questionable work is never published. An institution whose reputation has been publicly besmirched may not attract staff of the same caliber, and thus perhaps not as much grant funding.
It should be possible to quantify all of these values. I hope that the growing awareness of the issue of research integrity (due, in part, to sites like Retraction Watch) will inspire institutional officials to address it upstream of publication with their existing research integrity budgets and begin to quantify the benefits of doing so.