Neuroscientists retract Cell autism model paper for “improperly assembled” figures

A group of authors have retracted a Cell paper describing a mouse model of autism because of image problems.

The senior author on the paper — there were 22 altogether — is Paul Worley of Johns Hopkins. Here’s the notice for “Enhanced Polyubiquitination of Shank3 and NMDA Receptor in a Mouse Model of Autism:”

Our paper reported an analysis of a mouse genetic model that deletes the C terminus of Shank3 to mimic human mutations that cause autism spectrum disorder. Figure panels for several polyubiquitination assays were improperly assembled, leading to multiple repetitions of bands in western blots of the lysates. These errors did not affect the quantitative analysis of polyubiquitination because this analysis was performed as described and was not dependent on representative western blot images. In light of the figure preparation issues, we feel that the most responsible course of action is to retract the paper. We sincerely apologize to the scientific community for any misunderstanding that these errors may have caused.

The study has been cited 32 times, according to Thomson Scientific’s Web of Knowledge, and was funded by the NIH along with a number of autism foundations.

It’s not clear how the authors became aware of the improper assembly, but a site called “Autism Researchers” posted allegations about falsification in the paper’s figures in May of last year.

We’ve contacted Worley for more details, and will update with anything we learn.

Update, 11 a.m. 1/18/13: In response to questions about whether other papers would be affected, and whether the case had been referred to the U.S. Office of Research Integrity (ORI), Johns Hopkins Medicine sent us this statement:

We are aware that, at the request of the authors, the journal Cell on January 17, 2013, retracted the manuscript entitled, “Enhanced Polyubiquitination of Shank3 and NMDA Receptor in a Mouse Model of Autism.”  The responsible conduct of all phases of research, including the accurate reporting of research data, is at the core of our mission, and we are committed to ensuring that all Johns Hopkins University research is conducted to the highest scientific and ethical standards. We can assure you that Johns Hopkins University School of Medicine takes the circumstances that led to the retraction extremely seriously.

Here’s the 2011 press release Hopkins sent out about the research.

Update, 12:40 p.m. 1/18/13: The ORI has been notified of the case, Hopkins tells us.

45 thoughts on “Neuroscientists retract Cell autism model paper for “improperly assembled” figures”

  1. Twenty-two authors – too many cooks spoil the… How come none of them spotted this during submission and revision?

    1. Good point. I am convinced that massive author lists, a sine qua non of most big-data and superstar papers these days, foster a “see-no-evil, hear-no-evil, speak-no-evil” mentality.

    2. If you check the provided link for a detailed description of what is wrong with the figures in the retracted paper, you will see, apart from regular copy-and-paste jobs, some bands being flipped and contrasts being altered. Typically, this indicates an attempted concealment of band re-use although, I am sure, there is an innocent explanation for why this has been done.
      It reminds me of Brits going across the English Channel to France to stock up on cheaper booze and fags. They can bring this stuff home duty-free as long as it can be reasonably assumed quantity-wise that the goods are for personal use only. But if you try to transport the same goods in a concealed manner, it is considered smuggling irrespective of the quantity involved. So, a carton of cigarettes down your pants – smuggling, but the same carton of cigarettes in the trunk of your car – legit. Everything boils down to intent.

    3. In most situations, only the individuals listed first have input into writing the drafts and the final contents. All other authors contribute only to specific experiments; therefore they do not participate in shaping the body of the overall paper. The paper did undergo peer review, which is why I am surprised that the reviewers did not pick up on the mistakes.

  2. Can anyone help me parse these words, quoted from above:

    “. . . panels for several polyubiquitination assays were improperly assembled, leading to multiple repetitions of bands in western blots of the lysates. These errors did not affect the quantitative analysis of polyubiquitination because this analysis was performed as described and was not dependent on representative western blot images.”?

    To me, a close reading of the first sentence suggests that the western blot images supposedly corresponding to adjacent protein ubiquitination data were taken from other, possibly irrelevant experiments. In that case, the polyubiquitination data are useless, because we have no idea of the steady-state protein abundance data to which they correspond. Who cares whether the quantitations of the ubiquitin ladders are intrinsically valid or not? You still need the steady-state data to interpret the overall result!

    This illustrates what I have seen far too often in retraction notices: authors can insert whatever non sequitur they wish. I am highly suspicious that this practice occurs to obscure the truth, which is that either horrendously sloppy or overtly fraudulent behavior occurred, and that the overly complicated terminology is there to imply that the authors discovered a relatively modest flaw in their data presentation and are now acting ever so magnanimously in retracting the entire result, most of which, of course, they want us to think is still valid.

    Journals should insist on plain language in these notices that explicitly states what happened to compromise the integrity of which figures.

    1. I dunno, I rather like it
      “These errors occurred during the (electronic) assembly of the figures, with internal loading controls being misassembled with the incorrect experimental channels. ”

      We need more of it. Imagine reading a story about a fatal car crash:
      “The vehicle proceeded at an inappropriately selected velocity up to and including the point that a cylindrical formation of narrow diameter and with a source of illumination affixed to one end rising perpendicular to the selected route of aforementioned vehicle resulted in a sudden decrease of the heart rate of the operator.”

      Much nicer to read.

      1. 🙂

        Yes – I was including that prior Cell retraction in my group of recent notices filled with unnecessarily complex language. “Incorrect experimental channels”?? Could be the result of expensive consultations with lawyers and risk-management types prior to issuing the retraction notice.

    2. Absolutely; journals have a duty to tackle phrases such as “In light of the figure preparation issues, we … retract the paper”. This just doesn’t stand up to logical thinking. Either the DATA is correct and supports the arguments made in the paper, or it doesn’t. It comes down to one word in the retraction notice:
      “improperly” – meaning wrongly (in which case just publish the right ones), or “improperly” – meaning falsely.

  3. Can someone explain why there is so much need for “assembly” of figures? My gels have the size standards or controls, a few “negative” lanes or “relevant comparison” lanes to show what happens under other circumstances, and the lanes showing the experimental circumstances. The photo doesn’t contain lanes that need to be removed in order to show the reader an uncluttered picture of the results. At most, I crop the empty regions above and below the region containing the bands (although I prefer to show the entire gel in case there are any questions about the quality of the data). If a researcher needs to assemble the figures, is s/he running a dozen lanes with contents from several unrelated experiments on the same gel?

    1. Excellent question, JudyH!

      Cropping an image to remove the top and bottom of gels that are irrelevant to the data is OK. This is because no actual data are present in those areas. There is no reassembly; it is just like cropping a picture and removing your other half after a divorce 🙂

      However, splicing lanes is not sound scientific practice. I’ll explain why: the spliced lanes then have to be reassembled. Experiments that are repeated several times are often designed to have one final “master image” that has all the relevant data (bands) in it, often specifically for publication.

      Splicing is the thin edge of the wedge – flipping, squeezing, rotating (vertically and horizontally), altering brightness of the same image and using it twice for two different conditions, contrast enhancements and the like are not acceptable.

      I am into image analysis a little bit, but I’d never have imagined that scientists would do half of the stuff I stumbled across on the science-fraud site until dedicated individuals pointed it out with big red arrows.

      That then led me to look at a few other papers by colleagues… and it opened my eyes.

      It is wholly unacceptable – on that we all agree.

      1. Concerns: J Neurosci. 2012 Mar 28;32(13):4651-9. doi: 10.1523/JNEUROSCI.3308-11.2012.

        Epigenetic modulation of Homer1a transcription regulation in amygdala and hippocampus with pavlovian fear conditioning.
        Mahan AL, Mou L, Shah N, Hu JH, Worley PF, Ressler KJ.

        Source
        Department of Psychiatry and Behavioral Sciences, Yerkes National Primate Research Center, Emory University, Atlanta, GA 30329, USA.

        http://www.jneurosci.org/content/32/13/4651.full.pdf+html

        Figure 2.

        1. Fernando

          I am sure there is a perfectly innocent explanation for that figure 2 – the data in that figure being central to the entire paper.

          Perhaps an error while assembling the figure?

          It would be useful to see the original image as the work is very recent.

          To echo Average PI’s comments – is the work valid or not, now that we know about the potential errors present?

          1. LGR – look again if you wish.

            Copy the image into PowerPoint or similar software (Print Screen will do for this purpose).

            Enlarge it to 600%.

            Change the contrast a little, then take a close look at the 4th column from the right, lower panel.
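            For anyone who would rather script that check than do it by hand in PowerPoint, here is a minimal Python sketch using Pillow. The filename and crop coordinates are placeholders, not values taken from the paper; it simply crops a region of a saved copy of the figure, enlarges it roughly 6x and boosts the contrast.

            ```python
            # Minimal sketch of the manual "enlarge and boost contrast" check.
            # "figure2.jpg" and the crop box below are placeholders; point them
            # at a saved copy of the figure and the region you want to inspect.
            from PIL import Image, ImageEnhance

            img = Image.open("figure2.jpg").convert("L")  # grayscale copy of the figure

            # Crop around the region of interest: (left, upper, right, lower) in pixels.
            region = img.crop((400, 300, 700, 450))

            # Enlarge ~6x with nearest-neighbour resampling so hard splice edges stay sharp.
            zoomed = region.resize((region.width * 6, region.height * 6), Image.NEAREST)

            # Push the contrast up so faint background differences across a join show up.
            boosted = ImageEnhance.Contrast(zoomed).enhance(3.0)
            boosted.save("region_zoomed.png")
            ```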

          2. Yes, I did see that and was equivocal at first, but on reflection you are right. It has been spliced.

            But it is without significance – unless you are saying they are pretending to have knock-out mice, there is simply no incentive for there to be anything other than a perfectly mundane explanation.

            Genotyping WT, Het. and knock-out mice is so straightforward that provided they actually have the mice they aren’t going to fake it.

          3. Fernando didn’t spell it out, but there are at least 3 splices in Fig. 2a. Get the full-size figure here:

            http://www.jneurosci.org/content/32/13/4651/F2.large.jpg

            Upper panel spliced after lane 6. You can even see the irregular join along the top edge.

            Lower panel spliced after lane 1 and lane 5. Again the join is visible along the top edge for lane 1.

            So it is not just the control that is spliced and it does seem all a bit arbitrary to me…
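            If anyone wants a rough numerical cross-check on those joins, the short sketch below (assuming numpy and Pillow are available; the strip rows are placeholders, not measured coordinates) scores the column-to-column intensity jump across a horizontal strip of the panel. An isolated spike marks the kind of hard vertical edge worth inspecting by eye, though on its own it proves nothing.

            ```python
            # Rough screen for hard vertical edges of the sort a splice can leave.
            # "F2.large.jpg" is a saved copy of the figure linked above; the strip
            # rows (300:360) are placeholders.
            import numpy as np
            from PIL import Image

            img = np.asarray(Image.open("F2.large.jpg").convert("L"), dtype=float)

            strip = img[300:360, :]  # horizontal strip across one blot panel

            # Mean absolute difference between neighbouring pixel columns.
            col_jump = np.abs(np.diff(strip, axis=1)).mean(axis=0)

            # Flag columns whose jump is far above the background level.
            threshold = col_jump.mean() + 4 * col_jump.std()
            print("Columns with abrupt transitions:", np.where(col_jump > threshold)[0])
            ```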

  4. Something tells me that there is a need for a new supplemental section for all papers. This should include all original images (untouched) involved in the creation of all figures. I understand the need to clean up images. Data presentation is an important part of any paper. But the rules are clear on what is and what is not allowed. By providing all original images upon submission, there will be no wiggle room when it comes to manipulation.

    1. Lanes 8 (+/- lung) and 11 (+/- MEF) – as per MB – have the appearance of being the same.

      Despite that, I suspect that one will be OK too; the ActB controls below it are clearly different, albeit on a separate panel. If you genuinely didn’t have a knock-out mouse, you could easily generate a gel like that without resorting to Photoshop. These types of experiments all come down to trust in the end. How do we know what is loaded into each lane? They could have used the same three samples for each of the four tissues shown. They could have alternated between 1X RNA, 0.5X RNA and 0X RNA and probed for their gene on one gel, then run all 1X RNA on another gel, probed for ActB, and then mixed and matched them. Why would you not do that rather than using Photoshop to slot one lane into the middle? Believe me when I say only imbeciles use Photoshop to commit fraud – anything you can do with Photoshop you can do using orthodox methods while still presenting the exact same falsified conclusions.

      At best, one of the lanes might have got chewed up by an RNase (although you would expect the loading controls to have been chewed up also), so they slotted another lane in, but I can’t see any splice marks. Again, you would have to wonder why they would not either just run the experiment again with more care, or else just drop the least significant tissue for the purposes of their paper.

      So I would say, at this point, the likelihood of the existence of these knock-out mice is good, and I would even lay a small bet that an inspection of the original films would show no misconduct – but only a small bet.

      1. Similar, but different when you look at all the surrounding background information in the image. 1A is a problem, too. Lane 4 is a copy and paste of lane “3”.

    1. There’s a pattern of background spots that says this must be a duplication.
      Again, you wonder why they bothered. You could have just dropped the primers out of the RT-PCR for those lanes – it would have achieved the same result.

      If these guys are committing systematic falsification they really need a good consultant to tell them how to do it.

      1. You need to understand the psychology of fraud. No one is going to put in a grant proposal about constructing a knock-out mouse, spend the whole time lying on the beach, and then fake up a couple of gels with Photoshop and submit a paper.

        That is not how it is done. You make the knock-out mouse and you find it has no phenotype (maybe because you had an error in the construct, or just because the gene is redundant). You then fake the results that the prevailing understanding of the gene predicts should occur in such a knock-out. And then you move on to the next grant proposal.

        You need to get yourself a sabbatical in a fraud producing lab so you understand how it works.

        1. In reply to littlegreyrabbit January 20, 2013 at 8:11 am

          “You then fake the results that the prevailing understanding of the gene predicts should occur in such a knock-out”
          I think that is quite insightful. That is what the higher end do.

          “That is not how it is done.” How does one know? There is more than one way, and perhaps more than one way at a time. Sometimes it may even be unwitting, and there is simply selection for those who push out the most papers.

          The question is: why the odd papers? Do they keep the nice images in a special drawer, and only publish odd images?

          It is not the best of all possible worlds.

        2. Good point there. Probably a subset of cheaters are not lazy bums; they worked hard at something only to find out that it produced a null effect. This would result in no publications for the work and a dent in one’s track record, as one would have to report a null effect in the progress report, etc. The program officers will not be happy, as they too are under pressure to report to Congress that the grants they funded produced Nature-type results. Some may just fold under the pressure and make up stuff. Stapel, of course, is another story…

          1. Agreed. Everyone in the system feels pressure to find positive results, and amazing ones to boot. And as quickly as possible. We made this system. We can change it if we choose to.

  5. Agreed, Fig. 1c. The Lung +/- and MEF +/- lanes are identical.

    Also, there is a splice at the left edge of MEF +/+.

    Also, in the ActB control, the two bands on the right are identical.

    Also, in the ActB control, the 2nd and 3rd bands on the left are identical.

    The entire paper hangs on this data.

    If anyone needed evidence that blots can make or break a paper – I suggest they read it.
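    For what it’s worth, “identical bands” is a claim that can be given a rough number. The sketch below (assuming numpy and Pillow; the filename and both crop boxes are placeholders standing in for the two lanes being compared) computes a normalised correlation between two equally sized patches. Independent lanes rarely correlate near 1.0 once the background noise is included, so a score around 0.99 or higher is a strong hint of duplication, although only the original films can settle it.

    ```python
    # Compare two cropped band regions with a normalised correlation score.
    # "fig1c.jpg" and both crop boxes are placeholders for the lanes in question.
    import numpy as np
    from PIL import Image

    img = Image.open("fig1c.jpg").convert("L")

    def norm_corr(a, b):
        """Pearson correlation between two equally sized grayscale patches."""
        a = np.asarray(a, dtype=float).ravel()
        b = np.asarray(b, dtype=float).ravel()
        a -= a.mean()
        b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    patch_a = img.crop((100, 50, 160, 110))   # e.g. the Lung +/- lane
    patch_b = img.crop((220, 50, 280, 110))   # e.g. the MEF +/- lane (same-size crop)

    print(f"correlation = {norm_corr(patch_a, patch_b):.3f}")
    ```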

  6. RE: Concerns: J Neurosci. 2012 Mar 28;32(13):4651-9

    On Sun, Jan 27, 2013 at 5:18 PM, John Maunsell, Journal of Neuroscience wrote:
    Dear Clare Francis,

    We have now completed our investigation of your allegations about this article.

    1) We concur that there are clear signs of image splicing in Figure 2.

    2) We agree that the error bars in Figure 3b are similar. However, this is expected because the plot involves repeated measures on the same subjects. The observations at different times are therefore not independent and correlated sample distributions are expected.

    3) We disagree with your assertion that the error bars in Figure 4a are indistinguishable. They have different lengths.

    We believe that authors should avoid constructing figures using images taken from different parts of the same gel, or from different gels, and that when this is necessary it should be made explicit by the arrangement of the figure (e.g., with dividing lines). I have contacted the authors and instructed them that future submissions must meet this standard. Because we have no reason to believe that Figure 2 misrepresents the findings from the experiments described or that this representation would impede anyone attempting to replicate the results, we do not think that a published correction is necessary or would be helpful.

    Sincerely,

    John Maunsell

    Editor-in-Chief
    The Journal of Neuroscience

  7. Well, guys, they do have the mice available to the research community through Jackson Labs. Anyone can take a shot at repeating these results. That doesn’t sound like a lab hiding something. A lot of labs never share their run-of-the-mill mice with anyone, let alone highly controversial and potentially lucrative mice.

    1. In reply to BMc February 20, 2013 at 3:43 pm

      For information. Which mice and from which group? It got a bit complicated.
