Authors retract PNAS paper questioned on PubPeer after original films can’t be found

PubPeer leads the way again: The authors of a paper about Parkinson’s disease in the Proceedings of the National Academy of Sciences (PNAS) have retracted it, several months after a commenter highlighted the exact issue that led to the article’s demise.

The paper, originally published in September 2013, was called into question by a commenter on PubPeer in July 2014, who identified two of the paper’s figures as duplications:

The WB for H3 in Figure 4B is very similar to the WB in Fig. S3, but horizontally rotated. Please note that the blots represent different experimental conditions.

Here’s an annotated figure the commenter provided (post continues after the image):

[Annotated figure: Berthier et al., duplicated panels]

And here’s the notice for “PINK1 regulates histone H3 trimethylation and gene expression by interaction with the polycomb protein EED/WAIT1”:

The authors wish to note the following: “We encountered a major problem in addressing the issue of duplication of panels in Fig. 4 and Figs. S2 and S3. We have been unable to find the original films from those experiments and new quantification for Fig. 4B renders different results from the published information. Without the original films we cannot rely on the certainty of the results shown in the paper and therefore we cannot maintain the conclusions. Accordingly, we request a retraction of the paper. We truly apologize to our colleagues for any detriment in their work caused by the conclusions from our paper.”

We got in touch with Alan Fersht, who edited the paper for PNAS and gave us a quick timeline:

PubPeer’s posting was on 10 July. On 11 July, the authors notified us of the problem with the paper, and we immediately set up a process for verifying the work to decide on a possible retraction.

The paper has been cited once, according to Thomson Scientific’s Web of Knowledge. We’ve reached out to the authors and will update with any new information.

29 thoughts on “Authors retract PNAS paper questioned on PubPeer after original films can’t be found”

    1. And who said that two lines of simple facts couldn’t be powerful? Excellent detective work, and hopefully a sign of more to come to smoke out those who can’t find the “originals” of “horizontally rotated” figures. What a lame excuse, in fact. I wish we had such eagle eyes to help out with figure and gel PPPR in the plant sciences. We are desperately in need of assistance, as we are overwhelmed with what I believe are cases of partial duplication in the plant science literature and have not enough eyes to search and hands to compile. Even with several decades of experience, it is still difficult for some of us involved in PPPR to identify such cases, either because we have no training or are not sure what to be looking out for. Worst of all, we are dealing with a severe editorial firewall that is almost 100% unreceptive to such 2- or 20-line factual claims. We have decided to move plant science PPPR a step closer to PubPeer in 2015, making a single entry in PubPeer, and one in PubMed Commons if possible, for every error or duplication we find. I was originally a little skeptical of PubPeer, but recent weeks have shown an exponential increase in its benefit, efficiency and usefulness, so not only does the PubPeer commentator deserve a hearty pat on the back, PubPeer itself deserves a standing ovation for providing a suitable platform for exposing the truth.

      1. I wish someone could put together a web tool like Google Image Search that could search for images within a research paper, or within a collection of papers by a group of authors. The technology to find these duplications already exists, but it has to be put in an easy-to-use format that peer reviewers and editors can rely on, much like TurnItIn can be used to check for plagiarism in student papers.
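The tool imagined above does not require exotic technology: a "perceptual hash" reduces each panel to a small fingerprint, and near-identical panels (including horizontally mirrored ones, as in the retracted figure) end up with fingerprints a small Hamming distance apart. The sketch below is purely illustrative, uses only the standard library, and represents a grayscale image as a 2-D list of pixel values; real tools work on actual image files, and all names here are the author of this sketch's own.

```python
# Illustrative sketch of duplicate-panel detection via an "average hash".
# A grayscale image is a 2-D list of pixel intensities (0-255).

def average_hash(pixels, size=8):
    """Downscale to size x size by block averaging, then threshold each
    cell at the global mean to produce a 64-bit fingerprint."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            block = [pixels[y][x]
                     for y in range(r * h // size, (r + 1) * h // size)
                     for x in range(c * w // size, (c + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for v in cells:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def likely_duplicate(img_a, img_b, threshold=5):
    """Flag near-identical panels, including horizontally mirrored ones."""
    ha = average_hash(img_a)
    mirrored = [row[::-1] for row in img_b]  # catch horizontal flips
    return min(hamming(ha, average_hash(img_b)),
               hamming(ha, average_hash(mirrored))) <= threshold
```

Run pairwise over every panel extracted from a paper (or a group's papers), this flags candidate duplications for a human to inspect, much as TurnItIn flags candidate plagiarism rather than proving it.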

        1. I feel that, in addition to the authors, this is the responsibility of the reviewers. The reviewers should also be held responsible, because they did not perform their job well. Responsibility falls on the chief editors of the journals too, who do not select appropriate reviewers and do not validate the reviewers’ comments. In addition to retraction of a paper, action should be taken against the reviewers as well, and next time manuscripts should not be sent to these reviewers.

          1. No. It is the responsibility of the journal/editorial staff, not reviewers, to search for these things. Reviewers (doing their job for free) are supposed to check only for things the editors cannot do. They should provide the view of an expert in the field and say whether the conclusions are novel, interesting, and supported by the data. The problem is that journals are outsourcing as many things as possible to the reviewers to save the money of hiring more (technical) editors.

          2. In my opinion, Ostep, you are correct about the balance between the referees’ and the journal’s responsibilities.

            Anyone who has done a significant amount of refereeing must know that the way to approach a paper is to understand the science and (as always insisted upon by the journals) assess its likely impact. Those of us with only one brain cannot do this and simultaneously check “for these things”, as you put it.

            Even so, referees do report these things as unacceptable all the time, leading to manuscript rejection. There is, though, no way to assess how well they are doing, since this information is entirely private to the journal.

            Oftentimes it would be impossible to find such issues at the refereeing stage; e.g., if a modern-era submission provides huge supplementary data files, are we supposed to rattle off some R code to search for dodgy numbers?

            If you as a referee want to check for problematic images, which is rather simpler, you still have to make a decision to stop looking at the science and go about setting up the figures at highest resolution on a good computer monitor. And if you have to start comparing figures in multiple papers from overly prolific groups, it will take up many hours of your life.

            For example, even though previous commentary on PubPeer indicated that I could very easily find stuff to highlight in the comments here,

            http://retractionwatch.com/2014/09/22/scientist-threatening-to-sue-pubpeer-claims-he-lost-a-job-offer-because-of-comments/

            doing the figure comparisons and image preparation still took plenty of hours. Grant writing, mentoring, teaching, attending meetings (and, who knows, perhaps actually doing a spot of scientific research) are going to take priority for most researchers, and yet they are still doing their reviewing for free.

  1. I don’t know what depresses me more: the fact that they manipulated the data, or the fact that they did it so lazily… (I mean, if you’re going to rotate an image, at least scrub the tell-tale marks, etc.)

    Sigh.

  2. PNAS seems to be really serious about fixing its scientific record; one has to salute that. I see the high number of its retractions as, overall, a very good sign. Most journals, including the top-impact-factor ones, would print a corrigendum or ignore the issue.
    There is of course an issue left at PNAS with the peculiar backdoor submission route for NAS members (http://laborjournal.de/editorials/876.lasso, in German), but the editorial office apparently has less leverage there.

    1. I agree there are signs of improvement, and perhaps they are learning. However, in 2013 they were rather less receptive, first claiming “no problem”

      http://ferniglab.wordpress.com/2013/03/25/data-re-use-warrants-correction-at-pnas-see-no-evil-hear-no-evil-speak-no-evil/

      and then allowing the usual “correction”

      http://ferniglab.wordpress.com/2013/03/25/data-re-use-warrants-correction-at-pnas-see-no-evil-hear-no-evil-speak-no-evil/

      @Gary, cheer up – In the present case, at least the authors didn’t reach into the cupboard full of blots and pull one out that “looked right” and send it off to the editors.
      So we move forward, but it is slow and messy trench warfare.

      1. Dave, I think it may be because of the very same backdoor problem: the paper you refer to was “Communicated by Mostafa A. El-Sayed”, thus not properly peer-reviewed.
        At least the “communicated” route has been abolished by Randy Schekman; it was not popular with NAS members anyway. But the “contributed” route is unassailable for now; NAS members are unlikely to give up that one.

        1. Leonid, I hadn’t checked this level of detail, too focussed on the figures! I guess the implication is that the “non” or “lite” peer-reviewed papers are also not properly subjected to post-publication peer review and so are largely exempt from COPE guidelines.
          They may give up on the “contributed” route when enough people call by to say the emperor has no clothes!

    2. I absolutely don’t share your optimism that PNAS has any kind of coherent policy to clean up its scientific record. How many people would they have to hire to go through the decades-old mess of dodgy member submissions?

      Nevertheless, this is a fast retraction and as such should be praised. One can assume that the editor, the venerable Fersht, greatly favours evidence-based science. And Fersht states that the authors got in touch immediately. So there is some good authorial responsibility there too.

      If an institutional investigation is on-going, then at this moment in time we cannot expect more details than have been given. But this does look as though one should provisionally commend those who have chosen to set the record straight.

      1. There is of course a long way to go yet, but if PNAS stays in this spirit (and eventually also deals with the contribution backdoor), and other journals consider joining in and responding to post-publication peer review comments appropriately, there may still be some hope left for the future of science publishing.
        We should encourage the current PNAS policy, though not blindly, of course.

    3. Without doubt, there are some positive signs, but the sins of PNAS are manifold and it still has a long way to go. We should not forget its former ridiculous practice of publishing “communicated submissions” (the only form of submission until 1995, finally eliminated in 2010 after the Lynn Margulis issue), by which the prior consent of a single member of the Academy was enough to get almost anything into print. Moreover, the only slightly better habit of “contributed submissions” still offers a way for members to get a generous four papers per year published in a shiny, high-impact journal instead of some less prestigious periodical where, based on their own merits, they would truly belong.

      These practices of PNAS did a lot of harm to the community over the last decades, by offering a venue for barely peer-reviewed papers and by aggravating the Matthew effect in the world of science.

  3. This retraction is quite unusual. In general, problems of this type are resolved by publishing a corrigendum/erratum with a new blot, after claiming an error during the preparation of the figure. Authors and journals are OK with that… no matter if it was obvious fraud…

    How essential could a single blot panel be, that it invalidates a whole article?

    I think something is missing here… Anyway, the authors did the right thing!

    I wonder if other articles by the first author have issues. We’ll need to wait for the authors’ reply to RW.

    1. One can safely assume there was more than a single blot issue. In fact, the authors indicate that “new quantification for Fig. 4B renders different results from the published information”. Thus, there is likely a lot more to warrant a retraction 😉

    2. 🙂 Not that unusual. Most high-impact journals have a few papers with very high numbers of citations and still quite a few with few to no citations. It’s usually the “blockbuster” papers that push the IF. One of the many facts that make the IF hunt so ridiculous…

  4. In addition to the problems the authors highlight, a quick glance reveals some undisclosed splicing seams in the lower panels of Figure 1B (in addition to those actually indicated on the figure). Furthermore, in Figure 2A, the right-most band in the Coomassie panel is identical to the left-most band in the Coomassie panel of Figure 2B, despite completely different experimental conditions. So yeah, as others here have commented, there’s probably more to this story than meets the eye.

    1. Speaking broadly of gels: should an SDS-PAGE gel be published with separate lanes, including the control, marker and sample lanes, all separated by white borders, i.e., lanes from one or more gels stacked so as to show only the lanes of interest? I have in hand one paper with such a gel, which also does not show the loading wells. Without making reference to the actual paper, would such a scenario be a case of gel or figure manipulation? Should an RW commenter respond “Yes”, I will post the link to the actual paper for verification, or post my concern at PubPeer.

      1. It all depends on what it really looks like. If bands/lanes are spliced together so as to make it appear they were run together that way on the gel although they were not, then it’s wrong. Simply showing only small parts of a gel is considered OK. Clearly indicating the splicing is also OK, though depending on the experiments, reviewers and peers will question the conclusions etc.

        Nature Cell Biology requires all gels/blots full size and uncropped, and publishes them as supplementary online material. That would really end most of these discussions.

          1. Re. the image presented in the PubPeer thread to which Scrutineer links above, I have just left a comment there defending its integrity (the rectangular section of the image containing the bands displays a different visual quality from the background area, but the rectangular outline follows the 8-by-8-pixel grid of the JPEG compression algorithm; in short, it is exactly the sort of visual appearance you would expect from JPEG compression).
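The 8-by-8 grid test mentioned above is mechanical enough to sketch. JPEG compresses an image in 8×8-pixel blocks, so a visible “seam” whose edges all fall on multiples of 8 pixels is consistent with ordinary compression artefacts, whereas a seam that cuts across the grid is harder to explain that way. A minimal, hypothetical helper (the function name and coordinate convention are this sketch’s own):

```python
# Hypothetical check: do all edges of a suspect rectangle align with
# JPEG's 8x8 block grid? Coordinates are pixels from the image origin.

def on_jpeg_grid(left, top, right, bottom, block=8):
    """True if every edge of the rectangle lies on the block grid,
    i.e. is an exact multiple of the JPEG block size (default 8)."""
    return all(edge % block == 0 for edge in (left, top, right, bottom))
```

Grid alignment alone proves nothing either way, of course; it is one piece of evidence to weigh alongside the visual inspection described in the comment above.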

          2. So, Herr Doktor Bimler, in your opinion, how many of the comments on PubPeer regarding Fazlul H. Sarkar’s figures are incorrect or inaccurate? I think there are not too many people out there with the skills to make this assessment, but we need a “verification” agent to check the Sarkar papers’ gels in their entirety and see whether the criticisms made on PubPeer are in fact correct and valid.
