Retraction Watch

Tracking retractions as a window into the scientific process

What should a journal do when a scientist who committed misconduct submits a new paper?



In December of last year, Chris Surridge found himself in a situation not uncommon to journal editors: A researcher who had been found to have committed misconduct had submitted a manuscript to the journal Surridge edits, Nature Plants. Retraction Watch readers may recall the name of the corresponding author, Patrice Dunoyer, who has had five papers retracted and five corrected following an investigation into work out of the Olivier Voinnet lab.

So what to do? The journal accepted the paper in May, and published it in June, along with a thoughtful editorial likening prizes and cheating in science to those phenomena in sports. The editors, according to the editorial, “treated the study we received as we would any other.” We asked Surridge to answer a few questions about the episode.

Retraction Watch (RW): Was there some internal debate about whether to consider publishing a paper by Dr. Dunoyer? Did you personally hesitate? Why or why not?

Chris Surridge (CS): It isn’t in line with our confidentiality policies to discuss the review of specific papers. But I can say that there is always an internal debate among the editors about whether to send a paper to peer review and whether to ultimately publish it. There are lots of things to consider, not least the broader context of the work. What is not part of that decision, though, is the identity of the authors. Studies are not considered because a ‘big name’ is on the author list, and neither should they be rejected for similar reasons.

RW: You note that it has significant supplementary information (SI). Is it more than the usual papers you publish? If so, what is the purpose of including so much additional supplementary information?

CS: This paper does have extensive SI, but it is the nature of these kinds of studies to produce a lot of supporting and control data. We strongly believe that these data should be available to readers, and by far the easiest way to achieve this is through SI. At the very least, these data must be available on request. Equally, when published figures are assembled from multiple experiments, it is vital for transparency that this is made clear and that the raw, unedited images are accessible. Again, SI is by far the easiest way to achieve this. We encourage all our authors to make use of SI in this way and consider that it should be the norm rather than something unusual.

Full details of Nature Research Journal policies on Image Integrity and Standards are available online at http://www.nature.com/authors/policies/image.html.

RW: Was the paper subjected to more internal and external review, given Dunoyer’s history?

CS: Again I can’t discuss the review of specific papers. This paper went through two rounds of review, during which it was seen by four reviewers, which is fairly typical of papers that we publish.

It is a journal’s duty to safeguard the scientific accuracy of the published record and ensure that an author’s work commands the highest possible level of trust. To achieve that it is important to put all papers through a high level of scrutiny. We owe it to our authors, whoever they are, to rigorously assess the work they submit to us so that when it is published readers can trust in its accuracy, transparency and reproducibility. But it is not our role to investigate scientific misconduct or determine appropriate sanctions in respect of it. We must be alert to the possibilities of inappropriate data manipulation, plagiarism and other forms of scientific misconduct; question possible occurrences; correct them if we can and where appropriate report them to the proper authorities, usually the authors’ institutions.

RW: Why did you decide to publish a lengthy editorial regarding your decision to publish the paper?

CS: I didn’t really mean my editorial to be about the decision to publish Incarbone et al. Over recent years the Nature Research Journals have been updating our policies to increase the transparency and reproducibility of the papers that we publish. This is an ongoing process, but we had just made some specific changes, including the introduction of a Life Science Reporting Summary—a checklist detailing specific information about the experimental design, statistical and other treatments of data, and its availability—to be published with every paper. Our publication of Incarbone et al. provided a useful hook upon which to hang discussion of the wider issues of transparency, reproducibility and a journal’s role in ensuring those for authors and readers alike.

RW: What has been the response by the community to your decision to publish Dunoyer’s work, and your accompanying editorial?

CS: As far as I am aware there hasn’t been very much response from the community; at least not much has reached me. There was a small flurry of activity on social media at the end of July and I’m not at all surprised that some people had strong reactions. On the other hand the feedback that I have received directly has been generally supportive of our position. Overall I’m glad if my Editorial has sparked some debate about an important issue.

RW: Anything else you’d like to add?

CS: I’d just like to thank Retraction Watch for asking the questions that many people would want answered and so giving me the opportunity to explain a little about the role and responsibilities of a journal editor.


Written by Alison McCook

August 24th, 2017 at 8:00 am

Comments
  • John H Noble Jr August 24, 2017 at 9:20 am

    Bravo! “. . . a Life Science Reporting Summary—a checklist detailing specific information about the experimental design, statistical and other treatments of data, and its availability—to be published with every paper” as well as the publication of supplementary information containing the original data or providing guaranteed access by others to the original data makes sense. Would that all other journals adopted a like policy to minimize the moral hazard of dishonest production and publication of empirical research.

  • Carol Shoshkes Reiss August 24, 2017 at 9:40 am

    As Editor-in-Chief of a peer-reviewed journal, the decision is mine. With the cooperation of the web staff, authors of plagiarized studies or of studies with digital manipulation of artwork submitted to the journal are flagged in the system. Publication is a privilege, and we will not consider new submissions from flagged authors. There are hundreds of other journals they can try. We do not note evidence from other journals (or sites like RW) with respect to this scientific misconduct, only local acts. Other editors in the publisher’s family of journals are not notified. However, when possible, the provost of the offender’s institution is notified and asked to determine if this behavior is an aberration or common.

  • misha August 25, 2017 at 3:15 am

    Excuse me for being direct, but it is a typical case of:

    ‘It’s kosher, but it stinks’

    and time will tell about the ‘kosher’ part.

  • Stu August 25, 2017 at 10:59 am

    I strongly disagree with the attitude taken by Surridge in this affair. It is trivially easy to fake, say, a Western blot to show whatever it is you want to show, without any possibility of detection by anyone else. Ultimately, it is only the trust between scientists, and a scientist’s reputation, which allows the publishing world to function. We do not require witnesses to every action in preparing or loading a protein sample, nor should we ever; it would be unworkable.

    When we talk about cosmetic fakery, cut and pasting, duplications, rotations and scaling, we are talking about the stupidest, laziest and most obvious, and therefore most detectable form of image fraud. But this can only be the tip of the iceberg, and hidden beneath the waterline I suspect there is a mountain of better disguised fake data in publications, presentations, and most certainly grant applications.

    All we have is trust in other scientists to allow us to build upon the work of others – when someone has broken that trust, I cannot ever again believe their research, or waste what little professional time I have to follow them in potentially fruitless, baseless research paths.

    The big difference with cheating in sports is that a doping runner does not cause other runners to waste their professional lives and finite funding re-running a bogus race.
    Such an individual cannot simply rejoin the field and be treated the same as everyone else.

  • L. Burke Files August 26, 2017 at 11:22 am

    As an expert in the due diligence process: when you have a known higher-risk event or person, you do not adopt a blanket policy of avoidance (“zero tolerance”). “De-risking” a business or a journal is impossible, other than by shutting it down. The proper approach is to require, to a greater degree, the appropriate proofs and validation of claims made by higher-risk authors.

    It would be nice if all the businessmen, financiers, and scientists were honest – but then I would have no work.

    The article and comments are excellent.

  • Wim Crusio August 28, 2017 at 12:26 pm

    I would be interested to learn what COPE has to say about submissions from authors with a history of multiple retractions (and multiple corrections, in the current case).
