Retraction to appear for beleaguered plant researcher Olivier Voinnet

Olivier Voinnet

Olivier Voinnet, a researcher at ETH in Zurich who has corrected a number of his papers following critiques on PubPeer dating from late last year, is retracting a 2004 paper in The Plant Cell, according to the journal’s publisher.

Voinnet, the winner of the 2013 Rössler Prize, is a high-profile scientist, and scrutiny of his work has only grown since the initial revelations. In an unusual move, the journal and its publisher, the American Society of Plant Biologists, put out a press release about the situation today. Here’s the statement:

In response to recent inquiries concerning a publication by Dr. Olivier Voinnet in The Plant Cell, the American Society of Plant Biologists and The Plant Cell release the following statement:

We confirm that one of three reviewers of the original submission of the Dunoyer et al. 2004 paper in The Plant Cell [“Probing the microRNA and small interfering RNA pathways with virus-encoded suppressors of RNA silencing” by Patrice Dunoyer, Charles-Henri Lecellier, Eneida Abreu Parizotto, Christophe Himber, and Olivier Voinnet (Plant Cell 16: 1235-1250)] voiced concerns about veracity in the manuscript. We are grateful for this reviewer’s diligence. The then Editor-in-Chief of The Plant Cell followed up on the charges made concerning the declined manuscript.

The corresponding author, Dr. Olivier Voinnet, provided a detailed response to the reviewer’s allegations which, at the time, satisfied both the Editor-in-Chief and the Co-Editor. A new manuscript was later submitted and accepted after peer review.

We note that no one representing The Plant Cell or its staff visited Dr. Voinnet’s laboratory in the context of this matter. We also note that release of confidential reviews is counter to our policies. No one representing the journal disclosed or discussed the reviews or the names of reviewers with the authors of the paper or with anyone else, outside of journal staff and editorial board members charged with handling the assessment of the manuscript.

Dr. Voinnet contacted the journal editors on March 27, 2015, and requested a retraction of the above-mentioned Dunoyer et al. 2004 publication. The editors have requested further information from the authors in order to complete this request.

Some background: The “no one representing The Plant Cell or its staff visited Dr. Voinnet’s laboratory in the context of this matter” would seem to be responding to a comment on PubPeer last week posted by Vicki Vance of the University of South Carolina. After confirming with Vance that she did in fact leave the comment, we’ve been trying to confirm some of its details, one of which was that The Plant Cell visited Voinnet’s lab “to check for discrepancies on the reported data.”

The journal’s statement doesn’t mean that no one from The Plant Cell visited Voinnet’s lab, of course. As Rich Jorgensen, who was editor-in-chief of the journal when Voinnet published the paper (though he was speaking generally, as he has not been an editor there since 2007), told us last week:

It is conceivable that an editor who happened to be handling an article might visit the author’s lab for other reasons, such as a seminar visit, and that the handling editor and author discussed revisions to a paper that had received a decision from the journal, if the author brought it up with the editor, but that is not something the journal itself would seek to do formally; it would simply be an informal interaction between the author and editor, similar to what is often done by phone or email to help an author better understand an editor’s decision and expectations for revision or resubmission. Perhaps such a conversation occurred in this case.

Vance, one of the original reviewers of the paper, wrote that she spotted a “number of problems” with the 2004 Plant Cell paper (about ways plant viruses try to suppress RNA silencing) after reviewing it three times for three separate journals.

Among the problems was the control for Figure 2. He had crossed an RNAi line to a bunch of lines expressing different viral suppressors of silencing, so the control should have been the RNAi line crossed to a WT plant, but he had used the homozygous RNAi line as a control. I said that in my review.

The paper was rejected from Genes & Development, Vance said, but then she was asked to review it for EMBO Journal.

It was then that I noticed that he now claimed that the control for Figure 2 was the RNAi line crossed to a WT plant (the control I said it should have been in the previous review). However, the northern blot was the exact same blot (I still had the G&D version on my computer and it was exactly the same). So it was clearly a lie. I said as much in my EMBO J. review…

EMBO Journal editor-in-chief Bernd Pulverer tells Retraction Watch:

I am aware of comments on PubPeer in the name of Vicki Vance. While we have a well established transparent peer review process at The EMBO Journal, we do not share reports or details on manuscripts that are not published for reasons of author confidentiality. I can therefore at this time neither confirm nor deny these comments.

Vance said the paper was again rejected from EMBO Journal, and she received a request to review it again for Plant Cell. Figure 2 had been changed to its original version, but Vance noticed other problems with “Probing the MicroRNA and Small Interfering RNA Pathways with Virus-Encoded Suppressors of RNA Silencing”:

The paper still had issues among which was the report that they had 7 independent homozygous HC-Pro lines. I noted that no other labs had been able to get a homozygous HC-Pro line (we had tried pretty hard) and yet, somehow they had gotten 7 independent ones. I said I didn’t believe it. When the Plant Cell paper was published, they now said none of the lines were homozygous. So basically, I don’t believe anything from that lab since that time.

The paper has been cited 332 times, according to Thomson Scientific’s Web of Knowledge. We’ve been trying to reach Voinnet for comment since last week, and will update with anything we learn.

Update 12:30 p.m. 6/3/15: The retraction is official; read the full notice here.

Hat tip: Leonid Schneider


59 thoughts on “Retraction to appear for beleaguered plant researcher Olivier Voinnet”

  1. Prof. Voinnet, after you retract the paper, please be sure to initiate the next step of the literature clean-up process: requesting that the 332 papers that cited your study issue errata.

  2. If you want to be sarcastic, you could ask: what do we learn from this story? Researchers should look for a field of research wide enough to have a minimal risk of getting the same reviewer over and over again upon resubmission……

    Rather shocking that it took 11 years to come to light…..

  3. From Vicki Vance’s PubPeer statement: “Finally, I found out some short time later at a conference that Voinnet’s lab had been visited by Plant Cell to check for discrepancies in the reported data (this I learned from Olivier himself)”
    Whether or not anyone from The Plant Cell visited Voinnet’s lab on this matter (the journal now fiercely claims no one did), Vance was only reporting what she was allegedly told by Voinnet. Whether or why he told her this is another matter, but the journal shouldn’t have contested her statement so aggressively.

  4. Please note the date Voinnet allegedly contacted The Plant Cell about retraction: March 27, 2015. Vance’s comment appeared on PubPeer on April 1st. So was the retraction already decided, without her (much criticised by colleagues!) public discussion of the peer review? I personally find this hard to believe.

  5. As a plant scientist who’s done a lot of reviewing/editing (I’m currently a member of the EB of four mid-tier journals, and EiC of a low-tier one), I’ve been watching this story with sadness and disbelief. It’s like watching a train wreck in ultra slow motion. I know some of the people involved (although I never met OV), so it doesn’t surprise me that it’s starting to get personal.

    1. madscientist, I wonder what policies are in place at the four journals where you serve as an editor in terms of post-publication peer review, correcting the literature and issuing errata? Very, very few plant scientists are willing to join the fray and examine the weaknesses of the literature, so any hints at the problems would be useful. Without naming actual publishers, or journals, or getting personal, could you please provide a bit more perspective? Because, in my opinion, also as a plant scientist, the situation in plant science is really bad, and the OV case is simply one of those tips of the iceberg that break through the water’s surface…

      1. All journals (by three different publishers) have “standard” policies to handle misconduct – immediate rejection if caught during the review process, and retraction if proven after publication. Now, it could be luck, or (more likely) the fact that they’re not high-profile journals (IFs between 1 and 2), but I never had a case of misconduct after publication. All cases that I handled happened during review, and led to immediate rejections with letters being sent to all authors as well as to the director of the institution of the corresponding author. The most recent one was a double submission case, which we caught by sheer luck as the same person was asked to review both manuscripts… I contacted the editor of the other journal and we both rejected the manuscripts and sent similarly worded letters (one thing that I learned from reading RW is to be as specific as possible in these letters, but that’s just me, not the journals’ policy). So, bottom line, I agree with you – what we catch is only the tip of the iceberg. Depressing to say the least.

      2. One final comment: at least in my experience, there is no common thread in these misconduct cases. I’ve handled plagiarism, double submissions, authorship disputes (no sheer data fabrication, afaicr). Authors are from different countries (developed and developing, with strong and weak scientific cultures) and with different track records. OV draws so much attention because he’s so high profile, but it happens at every level.

  6. What this story should illuminate is that certain elite scientists are treated very, very differently by the top rank journals.

  7. How will The Plant Cell treat, handle, process and comment on queries that appear at PubPeer? Will the process be open, transparent and debatable? For example:
    Antignani et al. (2015)
    The message by the new EIC (Dr. Sabeeha Merchant) deals only with superficial issues of manuscript processing and functionality, and offers absolutely no truly revolutionary vision for how to protect the integrity of The Plant Cell literature moving forward into the era of scrutiny and retractions:

      1. No .edu or similar academic imprimatur is required to register for full access. I used my (more stable over time) yahoo address.
        I encourage you to register. Sometimes researchgate can resemble the Wild West of Woo, but at other times there’s genuinely interesting pre-pub stuff going on in the pipeline.

  8. The report above indeed does speak for itself. But one of the things it says, quite loudly, is that a paper was approved for publication that quite evidently should not have been approved (and quickly at that, within 3 weeks). Voinnet is now challenged to release the original data as well as the official response he provided that dealt with all of Dr. Vance’s concerns. If he does not release this response, then The Plant Cell must release it. Not only is there doubt, and distrust, about Voinnet; there is now also distrust about the apparent editorial oversight, if not abuse, at The Plant Cell. Peer review apparently was working just fine, but editorial oversight, or worse, seems to have stepped in. A full-blown post-publication peer review of the entire Plant Cell literature is required. This will require that specialists from all fields of plant science set aside some of their free time to pick up papers that they understand, even if only in part, and assess whether errors exist. These must be publicly reported, for example at PubPeer. This is an urgent issue. We cannot expect the fox to keep watch over the hen-house.

  9. What we really need is “pre-pubpeer” for dissemination and discussion of peer review materials for grants and papers. That really would be a legal minefield.

  10. madscientist, I understand that you appear not to wish to comment further, but our discussion could be important, at least for the plant sciences. Hard-core cases are in some ways easier to deal with, because they may be more crystal clear. For example, a duplicate paper.

    But what about more subtle issues? For example: a) several data points that may be identical between two papers (including proceedings); b) recycled controls in gels without declaring this; c) splicing in older papers; d) low-level “similarity” (3-10%, depending on the journal); e) minor errors that do not alter the conclusions drawn, but that are errors nonetheless.

    These are issues that might not necessarily require a retraction, but they most certainly would require errata (or in more serious cases, expressions of concern), to alert the readership that there are issues that are unresolved.

    In your opinion, as editor, how would you handle such cases, or would you leave the literature uncorrected for such “frivolous” issues?

    We need much more public discussion of these issues, even if anonymously.

  11. The Plant Cell does have pretty good standards for checking submissions, though of course some things are missed. I don’t think they have any automated way to look for spurious images (not sure any journal has, in fact). I recall that for one paper I submitted, there was a post-acceptance “scientific review” looking for details on statistical analysis, and specifically proof that ethidium bromide-stained gels were in the linear range for quantification (I did not have this, but it wasn’t really important in my case). Certainly, at a flat rate of $2000/article in submission charges – which is a lot considering the move to electronic-only distribution – they should have pretty robust systems in place to catch errors. Sadly, these are likely to be clerical or procedural in nature, looking for typos, checking whether all error bars have sample sizes associated with them, etc. – not looking for evidence of scientific misconduct. I agree The Plant Cell should be leading the field in this arena.

    1. Other things in this press release are disturbing, like the statements:

      “the studies’ findings are not in doubt” before the investigation is completed.


      “‘Olivier Voinnet is a scientist whose outstanding research findings have been confirmed repeatedly by other research groups,’ says Günther.”

      If data and figures turn out to be manipulated, confirmation cannot make a paper kosher in retrospect.

  12. Maybe Prof Günther spoke too generally.
    Voinnet’s “outstanding research findings” have NOT ALWAYS “been confirmed repeatedly by other research groups”. Anne Simon now told Le Monde (and also me) about her experience:
    The paper in question is heavily criticised on PubPeer

    1. I want to thank Leonid for his reporting that helped to publicize this truly sad story in a field that I love and have devoted my professional life to. I also want to clarify that although we showed (and published) that at least one of the mutants in this Science paper does not have the properties that were reported, this could very well have been an “honest mistake”. There do seem to be, however, other issues with this paper that are disturbing. What has bothered me over the years, is that Dr. Voinnet continues to present these mutants as being “correct” in both talks and in print.

  13. Apparently CNRS also thinks the integrity of figures has nothing to do with that of overall results.
    “Irrespective of the works of this commission, the CNRS notes at this stage that these public allegations referred to the presentation of certain charts/diagrams but that, to its knowledge, no declaration has challenged the overall results obtained by Olivier Voinnet and his colleagues on the role of small RNAs in the regulation of gene expression and antiviral response — these results having been confirmed on several occasions, whether using the same or other material, by various teams worldwide.”

  14. Leonid, what are you suggesting? That the ETHZ and CNRS reports are biased, incorrect, or misleading? Except for this one paper, allegations made at PubPeer (or issues raised there) have neither been proved nor disproved by any party, despite guarantees having been given there in January that original data would be presented, either by Voinnet himself, or by several of his co-authors. According to your own report at Laborjournal, you indicate that formal reports might only come out in May, so perhaps what is required is a bit more patience, and time for all parties to present their evidence and counter-evidence?

    1. Only criticising the disastrous formulation, which they could have done without. But apparently someone thought it was important to make clear that Voinnet’s findings are absolutely not being questioned.

    2. I’ve seen this point made a few times (= Voinnet’s work has been reproduced by others, so things aren’t that bad). Couple of points:

      First, tell that to the participants in the mammalian antiviral RNA debate.

      Second, a lot of biological concepts are actually pretty easy. Most of the conclusions of papers in Plant Cell can be expressed as ‘this gene turns on that one’. Crick et al. won their Nobel for saying ‘Folks: it’s a double helix’. The concepts themselves aren’t usually the hard bit.

      What *is* the hard bit is getting the unambiguous experimental evidence for a concept. I won’t go into details, but it’s usually relatively rare to be able to do a single killer experiment. Instead, the picture’s built up by a ‘decision tree’ of experiments (‘Given this result, and this one, and this one, the most likely conclusion is A rather than B‘). Some experiments are technically a lot more difficult than others and they take a lot of time to do.

      Now, back to the concepts: they’re often *so* easy that people know about them in advance, or at least strongly suspect them. I could sit down right now and make half a dozen important predictions about my field that would very probably be correct, but which nobody’s going to be able to address experimentally for several years, because the experiments are tricky, time-consuming, and – importantly – expensive.

      Which is where frauds of the alleged kind creep in: if somebody were to fake experiments about bottlenecks of that sort, then they’d get a slew of high-impact papers, each confirming that a long-suspected concept was true. That would do several things: first, it’d get them precedence. If their guess was confirmed, they’d forever be touted as the discoverer; if it were disproved, it’d be forgotten so long as most of their other guesses were correct. Second, it’d win them resources over somebody else (grants, jobs, staff, prizes, etc.), because they’d be getting the credit that should have been spread among several other people (credit goes to the first discoverer; if 5 people are working on providing the final proofs for 5 big ideas, each of which takes a lot of experimental effort, and you scoop them all, then you get all the credit). And resources are really important: it’s easy to publish a lot if you have money and staff. All you have to do is hire smart people, and they’ll do it for you. So, if our hypothetical fraud gets lots of resources, their group can actually end up honestly productive and legitimate (= the ‘fake it till you make it’ approach; cf. the British Royal Family). Third, it’d get them a reputation as a brilliant experimenter, which would mean that people would come to them and say ‘I strongly suspect this, but can’t get the experiment to work because I don’t have the resources or technical expertise’. All the fraud would then have to do would be to provide an experiment that supported what was probably a correct assumption, and garner a reputation as a good and productive collaborator who always delivers, (but who had to throw away those critical strains you want to check because of lack of space).

      Tl; dr: if you know what people think is true and prove it for them, you’ll go a long way.

      Finally, I note that if the alleged Fraud were to overreach themselves, there’s always the fall-back of ‘Our results suggest this, but even if they’re not true, what’s important is that this idea is out there and we’re having this debate. Oh, and I’m a visionary thinker’ (cf. Arsenic life). That’s almost always bullshit: most biological concepts are easy and a 10 year old could think through them: we don’t need the debate, we need the data.

    3. P.S. Further to my comment above, stuff I’d really want to know the answer to in a case like this:

      1. Are there any instances where an idea and its first evidence are *unequivocally* from the PI and the PI alone, and where those findings have subsequently been mechanistically essential *for somebody else* to make a big discovery? Not ‘Did the PI provide evidence that confirmed ideas that had been knocking around at conferences/suggested by their supervisor?’ Not ‘Did the PI show that something that we know happens in one organism also happens in another?’ Not ‘Did the PI put out a big idea that’s been cited by another group (and might even have inspired them), but isn’t actually, really, mechanistically *necessary* for the second group’s findings to be correct?’ And not ‘Did the PI end up with a good postdoc/PhD student who did good work off their own bat and that’s fine?’

      2. tl; dr for 1, above: How original, *really*, is the PI’s work? Are they just usually the first to provide evidence for stuff that people already suspect? Or have they ever published *against* expectation, and subsequently been shown to be correct?

      3. Saying work hasn’t been disproven = nothing contradicting it has been published. But it’s long been known that it’s harder to publish results that contradict published data. So, are there any vicious circles here? If the PI has ever published something that was unequivocally their idea and for which only they’ve provided evidence, have people tried to repeat that and been unable to (and therefore not published it)? Because the first assumption in failing to repeat somebody’s work is *always* ‘I must have made a mistake; maybe I’m just technically not good enough to repeat that work’.

      4. Early career scientists (PhDs, postdocs) get about 2-4 years to do something good, or they’re out. *If* the PI did take short cuts, then how many young scientists have wasted a year or two of their allocation in trying to repeat stuff that was never going to work? Would that sacrifice across other groups/countries make up for the (probably legitimate) students that the PI has trained in their country?

  15. As someone who is not the slightest related to this field of research, I wonder how it could be that these alleged manipulations/duplications are linked to only one researcher (OV), acting in different capacities? Unless I am totally mistaken how research works in this field, I would not believe that OV did _all_ the allegedly manipulated figures himself, so either there were always accomplices, or it was taught to junior co-authors (as the first authors of more recent papers) that certain things “were ok” to do.
    I do not want to allege anything here, but on top of every single instance, this as a whole seems unbelievable for me…

    1. First, nobody is accusing OV here; we only raise criticism of the figures in his papers, with the expectation of our concerns being addressed. Afterwards, the guilt or the lack of it can be determined. Not addressing them is not really helpful, by the way.
      In every single one of these papers, OV had direct access at least to the figures prior to publication.
      One paper is to be retracted, after a referee alleged 11 years ago that it contained manipulated data. Which in turn strongly indicates the referee was right. Now we can ask, which one of the authors on that paper did the manipulations?
      And so on.

    2. DancesWithGulfs and Pelicans, in reply to your questions:

      1. It’s not unusual for labs to practice ‘divide and rule’, in which junior staff do the experiments they’re told to and pass them to the PI, who then does the stats and writes the paper (this, by the way, is a terrible way to run a lab, mainly because junior staff don’t learn independence). For a multi-author paper, it wouldn’t be too hard for the PI to tweak data from different junior staff and pass it off by telling their lab that they (the PI) did a follow-up experiment in their own time (‘just to get independent confirmation of your findings’). Many people won’t kick up too much fuss if the experiments that the PI ‘did’ threw up slightly different results to the experiments that they did, either because they honestly believe that the PI is a better experimenter, or because they’re scared of them.

      1a. An interesting point that arises from Prof. Vance’s review is that her review is *really good*. That is: it tells the author what’s wrong in so much detail that the author can easily address any omissions. Reviews like that, paradoxically, almost make it easier to game the system, because a deliberate fraud could simply submit a tentative manuscript, get back the reviews saying what changes the journal would like before considering acceptance, and then just make those changes. So you don’t need to be ‘super-human’: you basically just ask the journals what they’ll publish, and then give them what they want.

      2. Yes, co-authors are usually the ones who are best placed to spot fraud, if it occurs. Junior ones shouldn’t be held too responsible (PhD students believe what they’re told to, unless they’re remarkable, and most people aren’t), but co-authors with a permanent position should read the not-theirs sections of an MS as critically as they would if they were reviewing it. But academics are really busy: it wouldn’t be reasonable to expect them to do much above that. There’ll always be some element of trust (= you wouldn’t be collaborating if the other PI didn’t have some expertise that you didn’t, so if they tell you something, then you usually trust them).

      3. Because peer review is *really bad* at catching deliberate fraud. And that’s because it’s *not designed* to catch deliberate fraud: it’s designed to catch incompetence (not perverted competence) and unimportance (‘Is the work presented important enough to go in our journal?’). Also, most reviewers aren’t paid to do it and don’t get rewarded for it, so most aren’t going to check every little detail of a manuscript unless they’ve got a good reason to.

      So, a lot of the answers are basic cost-benefit stuff: most scientists are honest, and the costs of making peer review completely fraud-proof would be too much of a drain on everybody’s time, so it’s easier to catch fraud post-review.

      Finally, to add to point 3: I’d be wary of over-hyping post-publication peer review as a mechanism for detecting fraud, especially since we don’t know who the original PubPeer commentators were. In many cases in which fraud is detected, the initial whistleblowers have additional information over and above what’s in the paper (e.g. are in the lab, have heard gossip). So, the current case may not be a case of true ‘blind’ post-publication review.

      1. I totally agree that junior staff shouldn’t be treated too harshly on that matter, considering how certain PIs can handle work relationships from time to time; actually, since the original PubPeer comment is still anonymous, why not imagine a staff member from that time who happens to feel remorseful? I also second that you collaborate with other labs especially because you don’t have their expertise, though I’m not convinced that means you don’t have the ability to discuss collaborators’ findings; but as you say, it’s likely you don’t have time to do so (or don’t want to).
        However, in this case there was no collaboration, all authors working at IBMP at that time, and we’re not only talking about young and inexperienced junior scientists, since one of the two first authors (PD) had already been hired by the CNRS at that time and so was no longer a post-doc. My point is, it’s probably hard to say with certainty who’s responsible for this fraud, but here are my thoughts on that:
        – if OV, who wrote and submitted the paper, was handed fake data, then he probably should have noticed, or followed his staff’s work more closely, and hence should be held responsible along with whoever did it (and I’m gonna state I don’t think this is what actually happened)
        – if the corresponding author purposely sliced/edited the data he received, who can honestly believe that none of the co-authors (who did the lab work and produced the original blots) noticed what had been done?
        I’m absolutely not saying it was an easy situation to handle for lab members, which is why I like to believe one of them finally took action years later, but someone should have said something. Especially considering that most of them got hired afterwards, probably taking advantage of (questionable) publication(s) done there. Considering how hard it is to get a job in fundamental research these days, that’s also one of the things that makes me mad in this story.

        1. Yeah, I see your point! I suppose I was trying to say that if there was a case of proven fraud on the scale alleged, then it *could* have been done by one PI, without dragging in others. But I can also definitely imagine a ‘means justify ends’ lab culture where everybody gets so carried away with how fast everything is going that they end up cutting corners.

          And by ‘imagine’ I mean I’ve seen labs like that (though I’ve no experience of OV’s lab, so can’t comment on what his is like).

          Finally, yes, I absolutely agree with you that the thing that would leave some of the worst taste if this *is* fraud would be the knock-on effects for people who’ve been out-‘competed’. That’s obviously one of the reasons why this is such a big case: most exposed frauds have been relatively early-stage (e.g. Dhonukshe), but if OV is, then that’s a *lot* of possible collateral damage.

  16. 1/The whole ongoing process would have been different if a postdoc had been caught doing the same thing (“C’mon it’s just controls !” Yeah sure.)

    2/ETHZ and CNRS reports (content and form) read like: “OK, just because someone has found some potentially manipulated pictures made in Paint (‘C’mon, it’s just controls after all!’) in different papers, we CAN’T say that the whole body of results from Dr Voinnet’s group is wrong.” However, is it reasonable to think that other unseen, potentially manipulated results might exist? I’m not using the “Where there is smoke, there is fire” argument, I’m just saying there’s reasonable ground to doubt… and the ETHZ and CNRS reports seem to be over-protective of Dr Voinnet.

  17. DancesWithGulfs, I would say that that is an extremely reasonable argument. A paper is the product of a team, so even if, hypothetically, OV were to have super-humanly conducted all the experiments that appear to be at fault, that then leaves three questions unanswered: 1) what did the other authors do, exactly? 2) should all co-authors not be collectively held responsible for the content of that paper? 3) why has there apparently been such wide-spread failure by the journals and their editorial boards to detect what PubPeer commentators apparently detected? A team, after all, is a team, so when the ship sinks, it takes the whole crew with it (curiously, this applies both to a set of authors and to a team of editors). Responsibility is not something that can be transposed onto a single individual. In a team experiment or project, there is a collective responsibility of all members to check each other. Whether accountability and responsibility equate to the same thing in the approx. 35 papers of Voinnet being queried is a whole separate debate, but one well worth having.

  18. Some great points. Some additional comments.

    1. If bad lab practice is the cause of errors, or of a retraction, then indeed this fortifies my notion that responsibility is shared, even though, in the lab, responsibilities were separated or handed down. The vertical structure that exists in many laboratories is a strength of lab management when everything goes fine, but when there is a problem, the whole house of cards comes tumbling down, and all co-authors and their institutes take a serious hit. One might ask: where data from a laboratory are publicly challenged, should the authors stay silent, are they being silenced, or should the institutes they work for come out in their defense prior to an investigation?

    1a. Traditional peer review, even in the world’s No. 1 plant science journal, is neither perfect nor fail-safe. This can cause deep disappointment. Peer review at many reputed plant science journals is pretty comprehensive, in some cases involving as many as 4 or 5 peers, so “gaming” of peer review is extremely difficult to assess. What incentive do editors and peers have, except the fact that they contributed to the evolution of the literature in their field and assured, to some extent, its quality? As retractions increase, or as the number of cases at PubPeer increases, there may be a growing feeling among the peer pool that they have wasted their time; this could shrink the peer pool, causing a further crisis in science publishing.

    2. Trust: key word. Trust is essential and can be lost in a flash. It takes just one case at PubPeer or one negative situation to melt away even years of trust. Being too busy is the lousiest excuse for errors, even if it’s true. If used as an excuse, does this reflect bad lab practice and irresponsible conduct of research?

    3. PPPR. I agree that it is essential. Unfortunately, the concept is not filtering into the wider peer pool and editorial structures. This might change as more papers from a wide range of journals start to be questioned at PubPeer. Thus, any journal that was perceived to be “excellent” or “perfect” could lose that glorious status when more papers that it published get questioned at PubPeer. This requires the active involvement of the peer pool.

  19. Dear Anne Simon, you state “What has bothered me over the years, is that Dr. Voinnet continues to present these mutants as being “correct” in both talks and in print.”
    This is a very valuable and insightful comment. The plant science community would greatly appreciate if you could explain a few things, to stimulate further discussion:
    a) Your exact relationship with OV.
    b) Why, in as much detail as possible, you believe that these “mutants” are incorrect and what would, to the lay person, be a “correct” mutant. We need to educate the readers and general public as well.
    c) There currently exists no list of the oral presentations or poster presentations that OV made (at least I could not find a list anywhere). It is very important to compile that list, in as much detail as possible (in fact, OV should be responsible for presenting a full list of posters and oral presentations made from his PhD to 2015), because one needs to understand, in addition to the expansion of such ideas in the print medium, what the impact was through presentations and posters. For example, when this TPC paper gets retracted, it basically invalidates the findings of that entire paper (until OV addresses the errors and republishes, if he decides to do so). That would mean, retrospectively, that what he presented in posters and oral presentations (for now, regarding the work in this paper) was hypothetically wrong.

    And why is my question important? Because this might be an excellent time for scientists to start asking the following questions:
    1) what is the true cost of a retraction?
    2) what impact does a retraction have on the investments that have been and continue to be made in terms of financial support to attend congresses, and what are the real benefits?
    3) Lab equipment costs money. Chemicals and reagents cost money. Salaries are not low. Grants and funding in high-level projects are neither small nor insignificant. And, in OV’s case, there is an abundance of prizes. There are daily costs. And then there are costs that are sustained by any collaborators. Travel to symposia usually costs a small fortune. Thus, collectively, a retraction of a high-level paper represents, theoretically, a massive economic loss.

    I truly hope that you may provide as much insight as possible.

    1. Here is my attempt to answer your questions.
      a) My relationship with OV. I have no “relationship” with OV. I work on the virus that he used in that Science paper and I am considered an expert on this virus (Turnip crinkle virus). When OV first began working with the virus, he asked me for antibody to the coat protein, and I provided it to him. I see OV at meetings that we both attend, the EMBO workshops on Plant Viruses, which meet every three years. At these meetings I have heard him continue to talk about these mutations as specifically affecting only RNA silencing, as well as about new mutations (at so-called “GW” motifs) that really are specific for RNA silencing suppression and are located quite far away from the ones that he published in his Science paper (he has published on these others as well). Since I have known Dr. Vance’s story of the Plant Cell review for many years (see below), I do not associate with OV at these meetings.
      b) Why are the Science paper mutants incorrect? I can only speak of one of the two mutations published, “M1”. The paper used virus containing a particular amino acid change in the coat protein (CP) and claimed that it only affected the CP’s ability to suppress RNA silencing, based on an indirect assay in which sap from an infected plant could infect another plant. Since the RNA genome of the virus needs the protection of its coat, they assumed that this meant the CP still made virions (the coat that surrounds the delicate viral RNA genome), and thus the mutation was also assumed to only affect RNA silencing suppression. We needed precisely a virus with this property (which we generated in the lab based on the paper) and were successful at obtaining grant support from NSF to examine RNA silencing as related to a small RNA associated with the virus. A postdoc and a graduate student joined the lab to work on this project. After 8 months of very frustrating results that made no sense, we realized that the only explanation was that the mutant CP was defective in far more than just silencing suppression. So we did a very simple experiment to look for virions (about 1 hour’s work) and it was very clear that there were no virions made (as I recall, the CP also didn’t have other properties that were associated with the normal TCV CP). We NEVER did the experiment in the Science paper (sap transmission), since there was no point. Why the reviewers of that paper never asked for a direct test for the absence of virions was, frankly, shocking. We published this result, that M1 was not correct, among other results, in the journal Virology in 2008. Both the postdoc and the graduate student working on this left the lab after only one year (I don’t blame them) and I had to petition NSF to switch my grant to another topic. The NSF program officer at the time was Dr. Vicki Vance, whom I was acquainted with from Virology meetings.
After I related why I couldn’t continue with the project, she told me the story of reviewing three versions of his Plant Cell paper, which is now known to most from her recent postings. To say that I was shocked would be an understatement. While her story made me question the Science paper further, I had no evidence, and still have no evidence, that the mutant not having the desired property was anything other than an honest mistake of relying on an indirect assay.
      c) OV, because of his prominence in the field (and he is also a very good presenter), is a speaker (usually a keynote speaker) at numerous meetings including every EMBO Plant Virus Workshop that I have recently attended. What is presented at meetings usually appears in publications within a few months.

      The major problem, as I currently see it, is that it isn’t clear what is and isn’t correct in the publications from his lab. RNA silencing has popped up unexpectedly in my current work and we don’t know if we can rely on what is published, which is quite frustrating. The field needs to get this sorted out ASAP, which looks unlikely given the recent postings from ETH and CNRS on their investigations. If anything like what appears to have happened in the Plant Cell paper (based on Dr. Vance’s review: that he included results on plant lines that don’t exist, among many, many other issues) happened in any of these many papers, then this will certainly impact many investigators. What we don’t know is how many investigators made similar plant lines, couldn’t repeat his results, and then dropped what they were doing. It will probably be many years before we know the full impact of this scandal on the field and on the “best of the best” young scientists who passed through his lab.

      1. Dr. Simon, your response is absolutely precious and opens up a new can of worms that perhaps now requires greater in-depth analysis by plant virologists and by RNAi technologists. You have in essence, with your statements, left a massive question mark over one of plant science’s most prestigious fields of study. Your queries are extremely important and you seem to be extremely confident about your claims and doubts. Is there a way to gently encourage plant virologists and other plant molecular biologists to come forth publicly with their experiences related to the success and failure of OV’s methods? Perhaps the courage that you and Dr. Vance had in coming forward to relate your opinions may spur others to step forward and share their experiences.

      2. Just on a personal note: God, that’s horrible (having to petition for a grant switch and the human cost); many sympathies.

  20. I can’t answer A), but I can help with the other two questions.

    B) A mutant is just a strain (of plants, in this case) that has a mutation in a gene of interest. With a good model organism, you can create a strain with a specific mutation in a particular gene: you can then see how that mutation affects the behaviour of the entire plant and, from that, work out what the gene does. I’ll spare you the details but, by creating mutants with different combinations of mutated genes, you can piece together how those genes talk to each other to control a trait (say, plant height). Creating mutants (and checking that the mutations are where they should be) isn’t easy – it’s not uncommon for some genes to simply be essential, so that if you mutate them, the plant never develops from the seed – and a lot of scientists make good careers out of sets of perhaps a dozen mutants (examining all the different aspects of how and when the various genes talk to each other under different conditions). A ‘correct’ mutant is simply one that has the mutation that it’s supposed to have (and the behaviour that it’s reported to have). An incorrect mutant would trivially be one that didn’t have the claimed mutation or, as in the case that I think Prof. Simon is referring to, one that did have the claimed mutation but didn’t behave as reported.

    C) Nah, I *do* appreciate the idea behind that suggestion, but academia doesn’t place that much weight on talks, so nobody would ask for that kind of record. Basically, talks and posters don’t really count: the standard of proof is much lower for a conference talk than for a paper, which is because the point of a conference talk is usually to present preliminary ideas for discussion/criticism. The paper is the official public record, the conference talk is just a chance to chat. As an example, I usually give about the same weight to conference talks as I do to something my friend would tell me in a bar: there’s a decent chance it’ll be generally correct, but I wouldn’t bet on it. And this might sound odd, but if somebody deliberately lied in a talk, I’d think they were a nasty piece of work, but I wouldn’t expect them to be fired for it: a lot of people hide or obscure details in talks about unpublished work so that they can claim precedence, but not reveal enough details for competitors to rush work and scoop them to publication.

    Papers, otoh, you should be able to bet on. Fake them, and you should be beaten with sticks.

    Finally, re. Economic costs: God knows. However, it’s important to note that even with fraud, a lot of the expenditure won’t actually have been wasted. Depending on the lab culture, there’s no serious reason why a PhD student trained by a fraud won’t be as good as one trained by an honest person. Again, I stress lab culture: you want people who are technically competent, honest as the day is long, and who can think for themselves. If the fraud only happens once the data reaches the PI, then the students won’t be write-offs (although, sadly, their papers might be).

    1. Thank you for your response and insight. The answer to B will go a long way toward helping non-specialists understand Prof. Simon’s comment. As for C, I understand that a meeting has much less intrinsic value than a scientific paper. However, what I really meant was: assuming that information is false or incorrect (and here I am not insinuating this about OV or his co-authors, I am simply referring to the phenomenon in general), and that such errors would hypothetically be picked up in true peer review at qualified journals and then either rejected or retracted, would the presentation of such “false” information not in fact constitute a complete waste of valuable funding? I am specifically talking about the fact that an international flight + symposium fees + hotel + meals + entertainment and extras can, per trip, amount to several thousand US$. So, if a speech or poster is being given on “bogus” science, then how can one justify the cost (any of it)?

      That is why we need, quite urgently, a complete list of all OV presentations over his entire career. And this is why the outcomes of the investigations at ETH and CNRS are going to be absolutely crucial. As I see it, in this part of this complex case, “waste” will only be determined once the following questions are finally resolved: a) how many of the OV papers questioned at PubPeer contain genuine errors? b) the findings of TPC and what the retraction notice will state; c) what OV will state publicly, if anything at all; d) the validity of Prof. Simon’s claims and whether they are supported by the experience of other scientists in the field.

      1. No problem; it’s really interesting to see the different perspectives here!

        Two things on conference costs for fraud in general: first, the expenses aren’t just to cover a 30 min talk; they also make that researcher available to the other attendees for questions at coffee, etc. So you start to get into questions of what’s a useful interaction: people will often pitch quite broad questions at invited speakers, and there’s often dissent, so it’s completely possible that, say, 90% of the people that a fraudster talks to at a conference will get something perfectly good and honest out of it. Fraudsters don’t lie *all* the time and not everything they do is fraudulent (cf. my earlier comment about PhD training), so the problem becomes Prof. Simon’s ‘when do you know?’

        So, maybe 10-20% waste of conference expenses? But, obviously, it’d be better if there were none. Also, personal disclosure here: if that above paragraph seems to go easy on fraudsters, then, yeah, they’re people too. I think fraud is a terrible thing that should be stamped out; I don’t necessarily think that demonising any and all people who commit fraud is the way to do it (but that’s beyond the reach of this comment).

        Second, even assuming 100% waste, it’s still relatively trivial compared to the other costs. As an example: let’s say 6 big conferences per year @ $5k each and 100% waste. That’s $30k. But Prof. Simon’s 8-month postdoc and grad student would probably come in at, ooo, $40k p.a. for the postdoc and maybe $30k p.a. for the postgrad, so (40 + 30) × 8/12 ≈ $50k right there, and that’s before all the consumables and associated costs. It wouldn’t surprise me if those 8 months cost Prof. Simon’s institute around $100k in total (though I’d probably guess at about $70k).
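        Purely for illustration, that back-of-envelope arithmetic can be sketched in a few lines of Python. All figures are the hypothetical assumptions from the comment above, not real data from this case:

```python
# Back-of-envelope estimate comparing two costs of a hypothetical fraud case:
# (1) conference expenses assumed wasted, (2) pro-rated personnel costs.
# All dollar figures are illustrative assumptions, not real data.

def wasted_conference_cost(conferences_per_year=6, cost_per_conference=5_000,
                           waste_fraction=1.0):
    """Annual conference spend assumed wasted (worst case: 100%)."""
    return conferences_per_year * cost_per_conference * waste_fraction

def personnel_cost(months=8, postdoc_salary_pa=40_000, postgrad_salary_pa=30_000):
    """Pro-rated salary cost of one postdoc plus one grad student over `months`."""
    return (postdoc_salary_pa + postgrad_salary_pa) * months / 12

conference_waste = wasted_conference_cost()   # 6 x $5k = $30k
staff_cost = personnel_cost()                 # (40k + 30k) x 8/12, roughly $47k

print(f"Conference waste (worst case): ${conference_waste:,.0f}")
print(f"8-month personnel cost:        ${staff_cost:,.0f}")
```

        Even under the worst-case assumption that every conference dollar was wasted, the personnel cost of the eight lost months comes out larger, which is the comment’s point.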

        So, yes, the costs of fraud to conferences are important, but the focus of any enquiry will be on the publications.

        Also, to finish, I’m not aware of any cases in Europe where an institute or funder has attempted to claw back expenses that have been misspent (though there may be some). European law in general is more forgiving of white-collar crime than US law, and the (relative lack of) fines to, e.g., business fraudsters reflects that, so it’s less relevant to add up all the costs (though absolutely interesting: I’ve never really thought about the total monetary cost of fraud before).

  21. I would just like to make one additional comment (and a much shorter one than above). I teach Responsible Conduct of Research to graduate students at the University of Maryland. In fact, I had just finished teaching about what constitutes ethical and unethical figures when I read the opinion piece by Leonid Schneider. I then checked every comment (and all figures) in all of OV’s papers listed on PubPeer, a website that I did not know existed. What I want to say here is that there MUST be consequences, severe consequences, for anyone found, after a thorough investigation, to have beyond a reasonable doubt been fabricating results in scientific papers, no matter the stature of the person involved, and no matter whether the “conclusions” of the papers turn out to be correct. If not, this sends a chilling message to young scientists and to the public who fund us.

  22. A new query has appeared at PubPeer about an Olivier Voinnet paper in PNAS:
    Dynamics and biological relevance of DNA demethylation in Arabidopsis antibacterial defense
    Agnès Yu a, Gersende Lepère a, Florence Jay b,c, Jingyu Wang a, Laure Bapaume a, Yu Wang d, Anne-Laure Abraham a, Jon Penterman e, Robert L Fischer e, Olivier Voinnet b,c, Lionel Navarro a,b

    Proc. Natl. Acad. Sci. U.S.A. (2013) vol. 110, no. 6, 2389–2394
    doi: 10.1073/pnas.1211757110


    a Institut de Biologie de l’Ecole Normale Supérieure (IBENS), Centre National de la Recherche Scientifique, Unité Mixte de Recherche 8197, Institut National de la Santé et de la Recherche Médicale, Unité 1024, 75005 Paris, France;

    b Institut de Biologie Moléculaire des Plantes, 67084 Strasbourg, France;

    c Department of Biology, Swiss Federal Institute of Technology (ETH-Z), Zürich 8092, Switzerland;

    d Institute for Bioinformatics, Munich, Germany;

    e Department of Plant and Microbial Biology, University of California, Berkeley, CA 94720, USA

  23. Four new queries about Voinnet papers appeared at PubPeer over the past weekend, related to figures, plus one additional query that had appeared on April 21, 2015.

    Competition for XPO5 binding between Dicer mRNA, pre-miRNA and viral RNA regulates human Dicer levels
    Yamina Bennasser, Christine Chable-Bessia, Robinson Triboulet, Derrick Gibbings, Carole Gwizdek, Catherine Dargemont, Eric J Kremer, Olivier Voinnet, Monsef Benkirane
    Nature Structural & Molecular Biology 18, 323–327 (2011)
    doi: 10.1038/nsmb.1987
    Centre National de la Recherche Scientifique (CNRS), Institut de Génétique Humaine UPR1142, Laboratoire de Virologie Moléculaire, Montpellier, France.
    Yamina Bennasser, Christine Chable-Bessia, Robinson Triboulet & Monsef Benkirane
    Institut de Biologie Moléculaire des Plantes (IBMP), CNRS, Université de Strasbourg, Strasbourg, France.
    Derrick Gibbings & Olivier Voinnet
    Institut Jacques Monod, Université Paris Diderot, CNRS, Paris, France.
    Carole Gwizdek & Catherine Dargemont
    Institut de Génétique Moléculaire de Montpellier, Universités de Montpellier I & II, Montpellier, France.
    Eric J Kremer
    Y.B. and M.B. planned and supervised the project and wrote the paper. Y.B. designed and performed most of the experiments with the help of C.C.-B. R.T. initiated the project. D.G. performed experiment in O.V.’s laboratory. C.G., C.D. and E.J.K. helped perform experiments and provided valuable reagents.

    NERD, a plant-specific GW protein, defines an additional RNAi-dependent chromatin-based pathway in Arabidopsis
    Dominique Pontier, Claire Picart, François Roudier, Damien Garcia, Sylvie Lahmy, Jacinthe Azevedo, Emilie Alart, Michèle Laudié, Wojciech M Karlowski, Richard Cooke, Vincent Colot, Olivier Voinnet, Thierry Lagrange
    Molecular Cell (2012) 48(1), 121–132
    1 Institut de Biologie Moléculaire des Plantes, Centre National de la Recherche Scientifique, UPR 2357, Strasbourg 67084, France
    2 Laboratoire Génome et Développement des Plantes, Centre National de la Recherche Scientifique/Université de Perpignan via Domitia, UMR5096, Perpignan 66860, France
    3 Unité de Recherche en Génomique Végétale INRA, 91057 Evry, France
    4 Swiss Federal Institute of Technology Zurich, Department of Biology, Universitätstrasse 2, 8092 Zürich, Switzerland

    RNA-DNA interactions and DNA methylation in post-transcriptional gene silencing
    Louise Jones 1, Andrew J. Hamilton 1, Olivier Voinnet 1, Carole L. Thomas 2, Andrew J. Maule 2, David C. Baulcombe 1
    1 Sainsbury Laboratory, John Innes Centre, Colney Lane, Norwich NR4 7UH, United Kingdom
    2 Department of Virus Research, John Innes Centre, Colney Lane, Norwich NR4 7UH, United Kingdom
    The Plant Cell (1999) 11, 2291–2301

    Nuclear import of CaMV P6 is required for infection and suppression of the RNA silencing factor DRB4
    Gabrielle Haas, Jacinthe Azevedo, Guillaume Moissiard, Angèle Geldreich, Christophe Himber, Marina Bureau, Toshiyuki Fukuhara*, Mario Keller, Olivier Voinnet
    EMBO Journal (2008) 27, 2102-2112
    Institut de Biologie Moléculaire des Plantes, CNRS UPR2353, Université Louis Pasteur, Strasbourg, France
    * Laboratory of Molecular & Cellular Biology, Tokyo University of Agriculture & Technology, Tokyo, Japan
    DOI: 10.1038/emboj.2008.129

    Misregulation of AUXIN RESPONSE FACTOR 8 underlies the developmental abnormalities caused by three distinct viral silencing suppressors in Arabidopsis
    Florence Jay, Yu Wang, Agnès Yu, Ludivine Taconnat, Sandra Pelletier, Vincent Colot, Jean-Pierre Renou, Olivier Voinnet
    PLoS Pathogens (2011) 7(5): e1002035
    DOI: 10.1371/journal.ppat.1002035
    Florence Jay, Olivier Voinnet
    Institut de Biologie Moléculaire des Plantes, Centre National de la Recherche Scientifique, Université de Strasbourg, Strasbourg Cedex, France
    Florence Jay, Olivier Voinnet
    Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
    Yu Wang
    Institute for Bioinformatics and Systems Biology, Helmholtz Zentrum München, German Research Center for Environmental Health (GmbH), Neuherberg, Germany
    Agnès Yu, Ludivine Taconnat, Sandra Pelletier, Vincent Colot, Jean-Pierre Renou
    Unité de Recherche en Génomique Végétale, Evry Cedex, France
    Author contributions
    Conceived and designed the experiments: FJ JPR VC YW OV. Performed the experiments: FJ AY LT SP. Analyzed the data: YW FJ OV VC JPR AY. Contributed reagents/materials/analysis tools: FJ AY LT. Wrote the paper: FJ OV.

  24. Dunoyer, P., Pfeffer, S., Fritsch, C., Hemmer, O., Voinnet, O. and Richards, K.E. (2002) Identification, subcellular localization and some properties of a cysteine-rich suppressor of gene silencing encoded by peanut clump virus. Plant J. 29, 555–567.


    Voinnet, O., Rivas, S., Mestre, P. and Baulcombe, D.C. (2003) An enhanced transient expression system in plants based on suppression of gene silencing by the p19 protein of tomato bushy stunt virus. Plant J. 33, 949–956.


  25. A correction has appeared for a 2009 PLoS Genetics paper:
    “Fig. 3 is incorrect. A mistake was made by the authors during the assembly of panel 3B. The loading control U6 is duplicated for the XY D0 and D2 samples. The authors apologize for this error and have provided a corrected version here.”

    1. The correction has been replaced with an expression of concern:
      Published: June 29, 2015
      DOI: 10.1371/journal.pgen.1005377
      “Concerns have been raised regarding some of the data in the PLOS Genetics article “RNAi-Dependent and Independent Control of LINE1 Accumulation and Mobility in Mouse Embryonic Stem Cells”, notably in panel A in Fig 4, in panels A and F in S4 Fig, and with the statistical analyses used to produce Fig 2. The authors have responded to these concerns, acknowledging that some errors were made in figure preparation and with some statistical tests, however a final resolution has not yet been reached, and the matter is being evaluated by the authors’ institution. This Expression of Concern should not be considered as a statement regarding validity of the work but rather as a notification to readers, and an intent to provide additional information as it becomes available.”

  26. A corrigendum for a 2006 Nature Genetics paper has been published.

    The original:
    Nature Genetics 38, 258 – 263 (2006)
    Published online: 22 January 2006 | doi:10.1038/ng1722
    Induction, suppression and requirement of RNA silencing pathways in virulent Agrobacterium tumefaciens infections
    Patrice Dunoyer, Christophe Himber, Olivier Voinnet


    An excerpt: “In the version of this article initially published, in Figure 5c (miR162) and Figure 5e (miR171), the rRNA images were duplicated without explanation.”

    No apology is offered.

  27. Another retraction appears in The Plant Cell due to figure manipulation:
    RETRACTED: The Arabidopsis Malectin-Like Leucine-Rich Repeat Receptor-Like Kinase IOS1 Associates with the Pattern Recognition Receptors FLS2 and EFR and Is Critical for Priming of Pattern-Triggered Immunity

    Ching-Wei Chen a,1, Dario Panzeri a,1, Yu-Hung Yeh a,2, Yasuhiro Kadota b,2,3, Pin-Yao Huang a, Chia-Nan Tao a, Milena Roux b,4, Shiao-Chiao Chien a, Tzu-Chuan Chin a, Po-Wei Chu a, Cyril Zipfel b and Laurent Zimmerli a,5

    a Department of Life Science and Institute of Plant Biology, National Taiwan University, Taipei 106, Taiwan
    b Sainsbury Laboratory, Norwich NR4 7UH, United Kingdom
