McGill committee says Nature figures were “intentionally contrived and falsified”

Maya Saleh, via McGill

An associate professor at Montreal’s McGill University is correcting two papers, one of them in Nature, after a university committee found evidence of falsification, Retraction Watch has learned.

Concerns had been raised about four papers by Maya Saleh and colleagues, published in Nature, PNAS, Immunity, and Cell Host and Microbe.

According to a report by the McGill committee, highlights of which were obtained by Retraction Watch, two figures in the Nature paper had been “intentionally contrived and falsified.” One of those figures was duplicated in a PNAS paper, which also contained an image that had incorrectly labeled some proteins.

The committee said they could not determine who had falsified the figures, but said that there was no evidence it was Saleh, who was the only common author among the four papers.

The committee also said that figures in the Cell Host and Microbe paper contained “touchup of blemishes,” which they said was “not an acceptable procedure.” They said the original films could not be found, and noted that this was “not in compliance with the McGill Regulation on the Conduct of Research.” Irregularities in the Immunity paper, they said, were “due to artifacts created during the preparation of the scans for publication.”

The Immunity and Cell Host and Microbe papers could stand, said the committee, which recommended corrections for the Nature and PNAS papers. Saleh tells Retraction Watch that Donald Nicholson, the senior author on the papers and Merck’s vice-president and worldwide head of basic research in immunology and infectious diseases, would be handling the corrections. Saleh did a postdoc at Merck from 2001 to 2004, but her affiliation on the Nature paper is the La Jolla Institute, and on the PNAS paper is McGill.

Some of the published work was funded by grants to Saleh from the Canadian Institutes of Health Research.

Saleh and colleagues have an unrelated Corrigendum in Nature.

We’ve contacted McGill provost Anthony Masi and research integrity officer Abe Fuks, and will update with anything we learn.

Update, 10:30 p.m. Eastern, 1/29/13: McGill associate provost Lydia White responded:

The University received allegations of research misconduct and conducted a thorough investigation following the procedures set out in the University’s Regulations Concerning Investigation of Research Misconduct (available from the following link: http://www.mcgill.ca/secretariat/policies/research/). The relevant findings were transmitted to the editors of Nature and the Proceedings of the National Academy of Sciences and the authors are in the process of correcting the record.

We are not at liberty to answer your question concerning sanctions as we are bound by the Act Regarding Documents held by Public Bodies and the Protection of Personal Information.  We have not reported this case to Canadian funding agencies, given that the research reported in these two journals was not supported by grants from Canadian agencies.

The University would like to assure the community of its commitment to the highest standards of scientific integrity. We believe that we have taken appropriate measures to correct the scientific record and, at the same time, show respect for the privacy and integrity of the members of our faculty.

We responded that we were “a bit puzzled that you are not reporting this case to Canadian funding agencies”:

As you note, neither the Nature nor PNAS papers were funded by those agencies. However, the Cell Host & Microbe paper was:

“This work was supported by grants from the Canadian Institutes for Health Research and The Canada Foundation for Innovation. M.S. is a Canadian Institutes for Health Research New Investigator, and S.G. is a holder of a Canada Research Chair.”

Your committee’s report on this case said, as we noted in our post, that the figures in the Cell Host and Microbe paper contained “touchup of blemishes,” which were “not an acceptable procedure.” They also said the original films could not be found, and noted that this was “not in compliance with the McGill Regulation on the Conduct of Research.”

Wouldn’t that therefore be grounds to report this case to CIHR?

118 thoughts on “McGill committee says Nature figures were “intentionally contrived and falsified””

  1. So the committee recommends corrections but the authors opt for retractions?
    This aside, I do not understand how they could not determine who falsified what. While group amnesia is a remote possibility, it is likely that the senior author encouraged others to be less than forthcoming. The committee should have pinned everything on the senior author then; this would have refreshed everybody’s memory.

    1. The authors are correcting the papers, not retracting them. This was correct in the post except in one reference in the fourth-from-last paragraph, which we’ve fixed. Thanks for catching it.

          1. Yes, DRG. Co-corresponding authors. Some of these papers have reasonably good citations!

  2. FYI the first paper (Cell Host Microbe) was reported on science-fraud.org on the morning we shut the site down (Jan 2nd). The image in question is still available, and it does indeed appear to involve “touch up of blemishes”.

    http://www.science-fraud.org/wp-content/uploads/2012/12/Saleh1.jpg

    Of course, we will never know if the “blemishes” in question were merely noise, or unwanted bands, since the original film is gone. Coupled with the other examples, it is surprising the journal chose to correct rather than retract.

    1. It should be emphasized that it is an internal committee recommending correction rather than retraction. We will see what the journals do… There is certainly a lot of work highlighted at Science Fraud showing up on Retraction Watch these last few days…

    2. Firstly I’d like to pass on thanks to Paul – your efforts to improve scientific integrity are appreciated by many.

      Secondly and specifically – it’s absolutely clear that these papers should be retracted. Freeloader is right – this is the decision of the journal, not the university committee. So let’s look at Nature’s definitions on this:

      Corrigendum. Notification of an important error made by the author(s) that affects the publication record or the scientific integrity of the paper, or the reputation of the authors or the journal. All authors must sign corrigenda submitted for publication. In cases where coauthors disagree, the editors will take advice from independent peer-reviewers and impose the appropriate amendment, noting the dissenting author(s) in the text of the published version.

      Retraction. Notification of invalid results. All coauthors must sign a retraction specifying the error and stating briefly how the conclusions are affected, and submit it for publication. In cases where coauthors disagree, the editors will seek advice from independent peer-reviewers and impose the type of amendment that seems most appropriate, noting the dissenting author(s) in the text of the published version.

      Let’s keep this simple – falsified images are invalid, so this needs a retraction. This is common sense – if there was proven misconduct affecting at least part of a paper, particularly if no one will own up to it, it is inevitable that the rest of that paper will be considered untrustworthy, since all the authors remain under suspicion. So it amazes me that the McGill committee would defend a paper where they themselves have determined that there was misconduct.

      In Nature’s editorial (29th April 2010) on this, they leave little room for doubt: ‘If an institution’s report concludes that misconduct occurred, we usually insist on a retraction — and will issue the retraction ourselves if the authors refuse to comply.’ (see http://www.nature.com/nature/journal/v464/n7293/full/4641245a.html)

      It will be interesting to see what happens – Nature already has a track record of correcting papers that should be retracted (see other Retraction Watch megacorrections, and the Sato case in particular, where they allowed correction of a paper that was subsequently retracted due to countless easy-to-spot falsifications). So nothing would surprise me.

      1. AMW – That would be Kato (as in Shigeaki) but not Sato. That’s important because Sato may be completely innocent whereas Kato is a name forever linked with the absurd antics of Inspector Clouseau. Perhaps David Niven could star as the mysterious Pink Blotter?

        1. Thanks and apologies – I am referring to:

          DNA demethylation in hormone-induced transcriptional derepression.
          Kim MS, Kondo T, Takada I, Youn MY, Yamamoto Y, Takahashi S, Matsumoto T, Fujiyama S, Shirode Y, Yamaoka I, Kitagawa H, Takeyama K, Shibuya H, Ohtake F, Kato S.
          Erratum in Nature. 2011 Dec 1;480(7375):132.
          Retraction in Nature. 2012 Jun 14;486(7402):280.

    3. The image shows how clever some of the image manipulators are. I don’t necessarily blame the reviewers for missing those:
      http://www.science-fraud.org/wp-content/uploads/2012/12/Saleh1.jpg

      Red arrows do help! We need the site back!

      PNAS March 18, 2008 vol. 105 no. 11 4133–4138
      http://www.pnas.org/content/105/11/4133.full.pdf+html

      Figure 2B C299A. Both lanes.
      Identical to ……
      Figure 3B. C299A ATAD319 (WT), Both lanes.

      Enlarge about 400%, change contrast to -47% – there are several identical markers demonstrating the lanes are identical.
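      For anyone who wants to test a duplicated-lane claim like this without Photoshop, here is a rough numpy sketch of the underlying idea (the arrays below are synthetic stand-ins, not the actual figure data): a brightness/contrast change leaves the pixel pattern intact, so normalized cross-correlation stays near 1.0 for a recycled lane.

```python
import numpy as np

def lane_similarity(a, b):
    """Normalized cross-correlation of two grayscale lane crops.

    Values near 1.0 mean the pixel patterns match even if one crop
    has been rescaled in brightness/contrast before republication.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
lane1 = rng.random((40, 12))          # stand-in for a cropped blot lane
lane2 = 0.53 * lane1 + 0.2            # same lane, contrast/brightness altered
unrelated = rng.random((40, 12))      # a genuinely different lane

print(round(lane_similarity(lane1, lane2), 3))   # 1.0 -> same underlying image
print(lane_similarity(lane1, unrelated) < 0.5)   # True -> plausibly distinct
```

      On real crops this is only a screening heuristic, not proof – alignment and resampling matter – but anything persistently near 1.0 would be worth a second look.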

      1. I don’t think that these are image manipulations. To me this looks much more like an artifact from the film developer. I remember seeing recurring patterns on films coming from one of the older (and not well maintained) developers in a previous lab. Sometimes I wonder if these image sleuths are truly familiar with all the technical problems one can encounter in developing films…

        1. Stewart is mostly correct. No two people agree all the time. Any tests will have false positives and false negatives. You get around this by repeating the tests, using more tests, and looking at more papers.
          The overwhelming tendency is to ignore things which truly are manipulations.

        2. Good point. It can explain the pattern and it is a more likely explanation than a “clever” manipulation. For sure a manipulation like that would not have been very “clever” and an old film developer pattern sounds much more likely.

      2. @Stewart
        Quote: “Red arrows do help! We need the site back!”

        What’s really funny is that in the ORI presentation on image manipulation (http://allenpress.com/system/files/pdfs/library/presentations/John_Krueger_ET08.pdf which you so kindly provided) one of the slides highlights the technique “author tells us where to look” as a form of image manipulation…

        So: red arrows do not necessarily help. They just draw the viewer’s attention to particular details which fit the presenter’s argument.

    4. @Paul Brookes, please keep us posted when you start your new website. I have a lot of image manipulations to share with it.

  3. I hope websites like this are pushing Nature to behave like it cares a little about the integrity of the work published in its journals. Anyone know what is happening with the Nature paper from Skutella’s group?

  4. So Figure 4 of the Nature paper
    http://www.nature.com/nature/journal/v440/n7087/fig_tab/nature04656_F4.html#figure-title
    appears as figure 6 of the PNAS paper 4 years later?

    That is the height of the misconduct. How on earth can the PNAS paper not be retracted?
    With the Nature paper Figure 4, panel C – has it been spliced between the beads lane and Casp 1 lane? If so, they have gone to some trouble to remove any splice lines, and it is only from the distortion in the bands that you can see something has happened.
    Or have I got the wrong end of the stick? The lowest panel is constructed out of 3 separate blots – but is that legit? The article itself is behind a paywall, and I am not finding the figure legend that helpful for what is being shown.

    1. Little Grey rabbit

      Great find!

      But… I see that figure 4C in
      http://www.nature.com/nature/journal/v440/n7087/fig_tab/nature04656_F4.html#figure-title

      is not identical to figure 6 in the PNAS paper at
      http://www.pnas.org/content/105/11/4133.full.pdf+html

      You may notice that in the Nature paper there are four Ig light chain bands and in the PNAS paper only three Ig light chain bands.

      Also note that the top bands in the Nature paper are 50 kDa (caspase-12) but are shown as 75 kDa (asterisk-marked unknown) in the PNAS paper.

      1. Nice work Anonymous…

        I agree the 1st lane is certainly duplicated (with some exposure difference or Photoshop effect). The right lane is less clear…

        A question for Ivan: is the McGill report available publicly somewhere, or could you publish it? It would be helpful to compare their findings with what RW readers have uncovered.

      2. I am sorry, I didn’t realize this was controversial. For the record I was just following the summary of the McGill report above

        According to a report by the McGill committee, highlights of which were obtained by Retraction Watch, two figures in the Nature paper had been “intentionally contrived and falsified.” One of those figures was duplicated in a PNAS paper, which also contained an image that had incorrectly labeled some proteins.”

        That seemed to be the obvious candidate.

        1. What do you mean by “repeated”? To repeat something you need to have done it before at least once. These people have either never run this experiment or have not gotten the results they wanted. Neither scenario bodes well for future attempts.

          1. At least they will report that the experiments were repeated and the conclusions are not changed… they will provide the (new?) data in the correction.

  5. Hard to tell, lgr – the “high” resolution figure is only 51k. I ran the ORI droplets over it and it looks like there might have been some cleaning, but very little. Nothing important appears to have been removed or created. I think this figure is probably ok.

        1. Those images in PowerPoint are a bit weird!

          Some of the graphs in supplementary figures – are they inserted TIFFs, JPEGs? They seem to wiggle around a lot, such as in S5.

      1. Droplets look like an interesting option, but Photoshop isn’t cheap, and many of us don’t have time to create such subroutines. Are there any freeware options with similar programmable features? Has anyone published pre-programmed droplets for those (or for Photoshop)?
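        On the freeware question – ImageJ/Fiji and GIMP are free and both expose the levels/contrast operations those droplets script. The arithmetic itself is simple enough to sketch in plain numpy (the thresholds and the synthetic “blot” below are made up for illustration, not taken from any of these figures):

```python
import numpy as np

def exaggerate_levels(gray, low=0.30, high=0.70):
    """Map gray values in [low, high] onto the full 0..255 range.

    This is the "push the levels" step a forensic droplet automates:
    faint edits or clone marks in the midtones become easy to see.
    """
    arr = gray.astype(np.float64) / 255.0
    stretched = np.clip((arr - low) / (high - low), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# Synthetic "blot": mid-gray background with one barely visible smudge.
blot = np.full((64, 64), 128, dtype=np.uint8)
blot[20:30, 20:30] = 140
out = exaggerate_levels(blot)
print(int(out[0, 0]), int(out[25, 25]))   # 128 158: a 12-level gap becomes 30
```

        The same stretch can be recorded as an ImageJ macro and run over a folder of figures, much as a Photoshop droplet does.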

  6. The committee has recommended two corrections, but this decision is by no means binding on the editors of the two journals. I think that a correction is a privilege that can be granted for HONEST mistakes only. It would be rather obscene to let the authors correct the two fraudulent papers, especially as they have been published in such high-impact journals. I say, retract and slap a publication ban on all the amnesiacs. After all, that’s what less widely read journals featured in RW do to their own artists.
    Similarly, the decision of the committee that the other two papers should remain standing and uncorrected should mean nothing to the editors of the two journals concerned. In my opinion, retracting them would not be an overreaction.

  7. “Donald Nicholson, the senior author on the papers and Merck’s vice-president and worldwide head of basic research in immunology and infectious diseases, would be handling the corrections. Saleh did a postdoc at Merck from 2001 to 2004”

    According to PubMed, DW Nicholson has 132 publications.

    http://www.ncbi.nlm.nih.gov/pubmed?term=nicholson%20DW

    From the late 1990s until the mid-2000s he was extremely productive.

    Where’s the beginning of a thing?

  8. The unrelated corrigendum (link provided in the RW post above) is in itself quite revealing (“In our Reply to the Brief Communications Arising ‘Is BID required for NOD signalling?’ we made use of a figure generated by the authors of the Comment (Nature 488, E4–E6 (2012); doi:10.1038/nature11366) as part of the review process,”). It’s a corrigendum to their reply to concerns (http://www.nature.com/nature/journal/v488/n7412/full/nature11366.html) raised about their original paper by the group who provided Saleh’s group with the Bid KO mice. So can we trust the results?

    1. Similar remarkable results from these authors in another Immunity paper:

      Control of intestinal homeostasis, colitis, and colitis-associated colorectal cancer by the inflammatory caspases. Immunity. 2010 Mar 26;32(3):367-78. doi: 10.1016
      Dupaul-Chicoine J, Yeretssian G, Doiron K, Bergstrom KS, McIntire CR, LeBlanc PM, Meunier C, Turbide C, Gros P, Beauchemin N, Vallance BA, Saleh M.

      Fig. 3F

      http://www.cell.com/immunity/image/S1074-7613%2810%2900082-8?imageId=gr3&imageType=hiRes

      Fig. 5A

      http://www.cell.com/immunity/image/S1074-7613%2810%2900082-8?imageId=gr5&imageType=hiRes

      Lanes 13-18 of b-actin in Fig. 3F (labelled 0,5,8 Asc-/-) are very similar to the first 6 lanes of b-actin in Fig. 5A (labelled 0,5,8 WT). Even the Moiré patterns look the same, both underneath and above the actin bands.

      (You may need to change the levels or tilt your screen to see it clearly.)

      For the other b-actin bands (19-24 of Fig. 3F), the top half of the bands looks like the top half of lanes 7-12 of the b-actin bands in Fig. 5A, but the bottoms of the bands look different, almost as if the dangling streaks had been removed.

  9. mb, those bands are very similar – too similar. The slight differences could be attributed to the different backgrounds of the two images. I am not sure how one would modify one image’s background noise to match the other, but I bet someone at the ORI does.

  10. That’s why a precautionary audit of other papers published by people found to have produced at least one fraudulent one should be performed.

  11. You guys are nibbling on the edges here. There is a much bigger problem with the 2009 Immunity paper. The cIAP1-null mouse strain used is inadvertently null for caspase-11 (see “An inactivating caspase 11 passenger mutation originating from the 129 murine strain in mice targeted for c-IAP1”, Biochem J., 443:355) and likely caspase-1 as well (see “Non-canonical inflammasome activation targets caspase-11.” Nature, 479:117).

    Essentially, the mouse experiments using cIAP1-nulls and cIAP1/2 double nulls in this paper also contain caspase-11 and (likely) caspase-1 loss. Caspases-1 and 11 are critical for IL-1 production. Since the NOD pathway is critically linked to IL-1 and since IL-1 is critically important for intestinal inflammation, the findings in the entire paper could use re-examination.

    1. Kinda out of my field of expertise. I would hope that the appropriateness of the mouse model would be something a reviewer would catch.

      Every journal needs a comments section.

      1. how is a 5 bp deletion okay? It causes a truncation mutation that is immediately degraded leading to a complete loss of expression (from the Duckett paper: “Caspase 11 cannot be readily detected in resting cells, but is induced by inflammatory stimuli such as LPS [4]. Caspase 11 induction in wild-type MEFs was compared with matched c-IAP1−/− littermates, in which c-IAP1 deficiency was confirmed by immunoblot analysis (Figure 2A). When MEFs were exposed to bacterial LPS, wild-type cells demonstrated a robust induction of pro-caspase 11, as predicted, but c-IAP1−/− MEFs lacked detectable caspase 11 (Figure 2A). To determine whether this was due to the splicing defect predicted by the 5 bp deletion described previously [2], rather than a c-IAP1-dependent defect in the signalling pathway driving caspase 11 expression, RT–PCR was performed on RNA harvested from wild-type or c-IAP1−/− MEFs, following LPS stimulation. Primers designed to amplify full-length caspase 11 mRNA recovered only a truncated form of caspase 11 cDNA in c-IAP1−/− MEFs (Δ110) exposed to LPS, as compared with the wild-type controls (Figure 2B), consistent with c-IAP1-deficient MEFs harbouring the somatic mutation in their caspase 11-coding sequence (Figure 1C).”

        To reiterate, though…The caspase-1 null mouse has a deletion in the closely linked gene, caspase-11, that causes caspase-11 to be non-functional. The cIAP-1 mouse was made with 129 ES cells in which caspase-1 and caspase-11 are tightly linked to the cIAP1 locus. The cIAP1 mouse has been shown to also lose caspase-11, and it may possibly also lose caspase-1. The issue is that the cIAP1 and cIAP1/2 null mice used in the study also have lost caspase-11 and may have lost caspase-1. Given the importance of caspase-1 and caspase-11 in NOD signaling and in inflammatory bowel disease (2 main points of the manuscript), the data presented needs to be re-evaluated.

        In fairness to Dr. Saleh, though, this was discovered 1.5 years after her manuscript was published and many immunologists aren’t that careful with their mice – they sometimes just assume the mice are what they are purported to be.

        1. “many immunologists aren’t that careful with their mice”

          Is there something about “immunology”? Bigger than biology itself. Innumerable combinations and permutations, drug companies, the eternal promise of “manipulating the immune system”, very in tune with modern green quackery, “immunotherapy”…..?

          1. Let’s not be too harsh on Immunology…Vaccines have been pretty helpful to humanity.

            There is just a tendency to make a mouse that might have 6 eyes, 12 limbs, 4 kidneys and 1 lung and not report that – a lot of groups would just take the spleen and do flow cytometry.

        2. With respect, I am finding your presentation confusing.

          ” The cIAP1 mouse has been shown to also lose caspase-11, and it may possibly also lose caspase-1.”
          To repeat, I don’t see why you suggest a possible loss of caspase-1 should be the case, since the deletion in caspase-11 is only 5 bp.

          As regards the 2009 paper, the bir2 -/- has been shown to lack caspase-11, but bir3 -/- seems to have regained it; they claimed to have got the same results with both knock-outs and using RNAi on wt colonocytes. Although perhaps their inability to detect a difference between the lack and presence of caspase-11 is more reflective of a tendency to always find the results submitted in their grant proposals. If you look at figure 2 and the levels of IL-1B, you can’t help wondering whether, had they known about the caspase-11 deletion in bir2 -/-, they might have been happy for the levels in bir3 -/- to climb a bit higher.

          It would be interesting to know if their caspase-12 -/- knock-out was affected. According to the Duckett paper it was not constructed in a 129 cell line. I am not sure how he knows this, as all I can find in the Nature paper is that it was contracted out to a third-party commercial firm. Perhaps I missed something, or perhaps he was relying on a pers. comm. from the authors. Although, as we have seen, this particular group of authors have unfortunately weak memories. And if they can’t even remember who ran particular westerns, then how can they be expected to recall precisely the exact strain of ES cells a third party used?

          I actually think their 2004 Nature paper is the real root of their problems. They hared off on a link to sepsis based on a highly convenient sample of 38 African-American cases of severe sepsis. No one has directly repeated this study – at least according to my rather cursory search in PubMed – and a couple of groups looking at specific pathogens causing sepsis have not discovered a link.

          Yet although everyone seems to be struggling to find clinical findings that back up their 2004 Nature paper, they have ploughed ahead and found an impressive array of molecular and mouse-model data showing that these findings must be so. Let’s hope the clinicians catch up.

          1. Sorry… I misunderstood your question. While caspase-1 is not likely mutated in this strain, there are a number of reports from independent groups suggesting that caspase-11 can activate caspase-1. They are often in the same complexes/inflammasomes. Thus, while the biochemistry is not completely worked out, there is evidence that when caspase-11 is deficient, caspase-1 activation/complex binding is altered. While it’s not a genetic loss, it may be a functional loss.

          2. I must have mentioned this somewhere else. How can anyone repeat the experiments – especially the ones published in high-profile journals? Where can they publish? Will it be novel anymore? I think we need a mechanism – confirmation of an important paper should be given importance.

          3. Ressci Integrity, why would anyone want to repeat a study and then publish it? What would be the point?

            If experiments in published studies are interesting to the extent that they stimulate further work then they are likely to be repeated en route to other experimental aims. That happens all the time. Often the experiments are confirmed rather in passing as the published methodology is subsequently employed, and so confirmation is achieved within a field without an explicit effort to reproduce the work for the purpose of confirmation per se. Quite often a published experiment or methodology is repeated and found not to work, and this may lead to some interesting investigation of the details of the experimental methodologies. Sometimes a published experiment or methodology is found not to work and it becomes apparent that the original analysis was incorrect. Occasionally a study is found to go sufficiently against the grain of accepted thought that annoyed scientists will undertake to reproduce it to show that it’s flawed.

            That’s pretty much how science progresses; in the general course of scientific endeavour important papers are confirmed (or not) because they stimulate follow up work by others. That’s really what “important” means in science. In reality the sort of confirmation you wish for does happen in the general flow of research…just somewhat more messily than a purist might envisage! Highlighting and prosecuting research misconduct should help make the situation a little less messy.

          4. Ressci.

            My proposal is based on the pre-revolutionary French practice of the corvée. If you are in receipt of public funding or obtain public grants, you must provide unpaid replication service in return. Let’s say you get a grant for $100,000: you have to provide 100 person-hours of service trying to reproduce data from other labs – ideally in a blinded fashion.
            So a central commission would assign you a protocol based on the known skill profile of your group, if possible providing the treatments in coded vials, along with +ve controls and instructions on what outputs to measure. You collect the data – without knowing what it is – and send it back to the commission, which then breaks the code and assembles the data into a meaningful form.
            Maybe the extreme blinding is a bit draconian and not really realistic when most methods can be quite fragile, but working in some kind of random spot audit involving replication by another group would be a powerful disincentive to fudging – and would also build confidence that the system is fair.

  12. In reply to Physician Scientist January 26, 2013 at 5:59 pm

    “Let’s not be too harsh on Immunology…Vaccines have been pretty helpful to humanity.”

    That’s not what most immunologists do.

    1. Reply to chris January 26, 2013 at 8:14 pm

      I think that there is a need for more than one-off papers, and for some validation system, perhaps validation institutes. Why not jousting contests, rather than more and more publications?

      Your analysis makes sense, but it may be wishful thinking.
      What does “interesting” mean? Jumping on the bandwagon, the influence of money?
      I wouldn’t be so sure of the checking mechanism. You need the money to do it. There are many stories of junior people who could not repeat the work of more senior people and were blamed for it, or were simply ignored. There are also stories of junior people who discovered something and were simply ignored. One story is that a Spanish postdoc at Yale may have discovered exons and introns but was ignored by her top-of-the-class-at-Harvard (or MIT) boss. The forces of conservatism are much stronger than perhaps you realize. People may believe all that Karl Popper/Thomas Kuhn stuff, but that is only for philosophy classes.

      “Occasionally a study is found to go sufficiently against the grain of accepted thought that annoyed scientists will undertake to reproduce it to show that it’s flawed.” is about it. Out of scientific interest could you name some examples? Mostly people keep adding to the pile. I was always taught that science progresses by showing that things are incorrect and removing them from the pile.

      Science may not progress so fast as it might.

      Stimulating follow-up work as a measure of importance depends on the results of that work. Many bogus papers have been cited many times. These will also be “important” by stimulating fruitless, misdirected follow-up work, sometimes for decades.

      Believing in the onwards-and-upwards, from the bottom left corner of the page to the top right corner, is optimistic. Now “optimistic” has happy-clappy connotations, but it is not positive. There have been reverses. As people start to say, “the 21st century is a repeat of the 19th century. In reverse.”

      1. Yes, Chris might have misunderstood my point. I meant repeat as reproducibility or validation. Otherwise, there will be a problem.

        1. One of the absurdities of the present system is that people who are wholly grant funded can’t actually check the reproducibility of work from other labs. They are typically only allowed to go after the latest trendy high impact stuff. E.g. “The HFSP supports novel, innovative and interdisciplinary ……”.

          The solution is for granting agencies – all of them – to allocate say 15 – 20% of any funded project to verification. As well as making it affordable, it would send a signal that the agencies were serious about allowing the scientific process to work in the way that it should. It would also allow the time needed for the bench workers to develop their skills before jumping into the hot new stuff.

          1. You are right. If it is not novel, there is no value in the system. Even reviewers make similar comments. If the study is not novel, it is hard to publish… who cares whether the novelty is really novel… in some cases, only after someone finds irregularities in the novel paper…

          2. Correct. The key word often is “transformative” research. As if it were possible to plan transformative research over a timeframe of 3 years. You get this kind of result in the long run by funding good people doing solid research, not by decree…

        2. No I don’t think I missed your point Ressci. You (and Fernando and littlegreyrabbit above) are implying some sort of bureaucracy (e.g. “validation institute”!) in which confirmations per se of “important” scientific papers are done under the direction, presumably, of some committee(s) which decide(s) what is and isn’t “important” and directs other scientists to take on confirmations according to their directives.

          That seems pointless to me and would pretty soon descend into a mess, I suspect. It’s not always straightforward to reproduce others’ experiments; it takes a certain commitment that comes of having a real interest in getting something to work in the context of one’s own research aims, and in my experience this may involve going back to the original lab for advice or sending a student or postdoc to learn the methodologies. It also sounds like a system that would be just made for politicisation, point-scoring and back-stabbing as “inconvenient” research is targeted for rounds of “verification”.

          And it’s unnecessary. Addressing some of Fernando’s points:

          “Interesting” means interesting. If someone publishes the isolation and sequencing of a novel appetite suppressant hormone and I happen to work in the field of metabolic regulation I’m very likely to be sufficiently interested to isolate (or synthesise) the hormone myself and see how it behaves in my particular experimental systems. In doing so I would effectively verify the original work (the physiological property of the hormone and its sequence). If there was a problem with the original work it would quickly become evident. That’s pretty much how things work in my experience. The idea that I might be working hard to address some other study and some bureaucrat tells me that I have to devote 15-20% of my research effort to attempting to repeat someone else’s experiments that I have no interest in, is unlikely to be conducive to my doing a particularly good job. And if I can’t “reproduce” the work according to the demands of the committee, what then?

          I’m not going to address your anecdotal stories about Spanish postdocs in Yale being ignored by Harvard/MIT educated bosses.

          There are dozens of examples of what you request. The recent “arsenate replacing phosphate in DNA” is an example of “against the grain” papers being pursued by highly sceptical scientists. I take your point about “philosophy class” but I don’t think you need to discard Popper, Kuhn (I’d also throw in Feyerabend) et al in relation to real life science. After all the arsenate story involved the potential overthrow (“paradigm shift”) of decades of understanding (“normal science”) of the nature of phosphodiester bonds and the chemistry of arsenates (Kuhn), the arsenate hypothesis was eminently falsifiable (Popper) as it so turned out, and the behaviour of the participants was delightfully Feyerabendian (biochemists submit a paper describing experiments that they must have known were insufficiently characterized and controlled; journal provocatively publishes it).

          Another example would be the publication of a paper that purported to show that rather well-supported physics (“normal science”) relating to feedback in response to Earth surface warming was incorrectly characterized (“paradigm shift”) [Geophys. Res. Lett. 36, L16705 (2009)] and the responses [e.g. Geophys. Res. Lett. 37, L03702 (2010)] that showed that the original work was junk (Popperian falsification). I could list loads more. At a less celeb level this is quite a common approach to “reproduction” of previous work to assess its flaws. I’ve published at least a couple of papers in which we’ve repeated experiments in published papers whose interpretations we didn’t agree with, and found that the reported experiments were indeed reproducible but upon deeper analysis that the experimental outcomes were consistent with our particular theory (and so their experiments get a published “verification” by us, even if we reinterpret their conclusions in our paper). That’s quite common in my experience.

          I don’t think it matters that much if “bogus” work is subsequently cited. Usually bogus work is known to be bogus by knowledgeable practitioners in the field. I also don’t believe there are many examples of “bogus” work stimulating decades of fruitless, misdirected work. Can you give some examples? Of course one could consider, for example, that there may have been decades of misanalysis in relation to science on stomach ulceration before Barry Marshall’s discoveries, but that’s simply tough. String theory may well turn out to be a tedious and unenlightening fancy that has dominated physics for three decades to little constructive effect. Bad luck. As you suggest we don’t live in an ideal happy-clappy world, and useful knowledge can often be hard to acquire. The idea that we can speed things along by setting up additional levels of bureaucracy to force scientists to occupy large chunks of their time in fruitless endeavours to redo experiments in published papers according to some directives by committees doesn’t sound like a great idea to me.

          1. In reply to chris January 27, 2013 at 11:04 am

            “If there was a problem with the original work it would quickly become evident.”
            How do you know? The best fakers may never be found out.

            As I understand from general reading, it takes years for people doing these things to be brought to account. Is it that the problems quickly become evident and others keep quiet?

            “descend into a mess”. We are already there.

            “system that would be just made for politicisation”. The present.

            “inconvenient” research is targeted for rounds of “verification”. What’s wrong with that? If it is correct it will stand up. The sooner we know the better.

            “It’s not always straightforward to reproduce other’s experiments”. A statement.

            “it takes a certain commitment that comes of having a real interest in getting something to work in the context of one’s own research aims”. The issue was one of validation.

            Where does this come into it “The idea that I might be working hard to address some other study and some bureaucrat tells me that I have to devote 15-20% of my research effort to attempting to repeat someone else’s experiments that I have no interest in, is unlikely to be conducive to my doing a particularly good job. And if I can’t “reproduce” the work according to the demands of the committee, what then?”

          2. In reply to chris January 27, 2013 at 11:04 am

            “I also don’t believe there are many examples of “bogus” work stimulating decades of fruitless, misdirected work.” Is a belief.

            “Can you give some examples?”

            Take a look at the papers I mentioned in the comments here:

            http://www.retractionwatch.com/2013/01/22/clare-francis-scores-a-bullseye-journal-of-cell-biology-paper-retracted-for-image-manipulation/

            E Clementi lineage.

            1. Proc Natl Acad Sci U S A. 2004 Nov 23;101(47):16507-12.
            2. Science. 2005 Oct 14;310(5746):314-7.

            G Cossu lineage:-

            1. Nat Cell Biol. 2007 Mar;9(3):255-67.
            2. Nature. 2006 Nov 30;444(7119):574-9.
            3. Stem Cells. 2009 Jan;27(1):157-64.
            4. Nat Med. 2008 Sep;14(9):973-8.
            5. PLoS One. 2008 Sep 16;3(9):e3223.
            6. Sci Transl Med 4,140ra89 (2012).
            7. Nature. 2007 Dec 20;450(7173): discussion E23-5.
            8. Skelet Muscle. 2012 Nov 26;2(1):24.
            9. Stem Cells. 2006 Apr;24(4):825-34.

          3. In reply to chris January 27, 2013 at 11:04 am

            Take a look at this one.

            Hum Mol Genet. 2000 Jul 22;9(12):1843-52.

          4. 1. re: chris: “If there was a problem with the original work it would quickly become evident.”
            fernando: How do you know? The best fakers may never be found out.

            The example I gave was a paper describing the isolation and sequencing of a novel appetite suppressant hormone. That impacts my research and I’m interested enough to see how this thing might work in my system. So I reproduce the isolation and sequencing (or I synthesise the peptide de novo). Yes, the peptide has the correct sequence and has appetite suppressant activity: the work is validated… or no, I can’t isolate the peptide… or I isolate it but they’ve messed up the sequencing or something.

            That’s not difficult to understand, is it? I can’t think of a context in which your response makes sense. Either I reproduce the work and so it’s validated, or I can’t and it’s clear that there’s a problem somewhere. What have “best fakers” got to do with that example?

            2. Reproducibility. In my experience it can be devilishly difficult to reproduce others’ work. On several occasions I’ve been able to get something to work only after visiting the lab that originally published the work and going through the methodology there. (Of course I could have simply set up a blog and accused those researchers of chicanery!) I know I’m not the only scientist who’s experienced the unhappy situation of an experiment that has worked nicely over the years deciding not to work, or of one PhD student not being able to reproduce the methodologies of a previous PhD student until you invite student A back to show how it’s done…

            3. Your last paragraph. I was addressing the suggestions (littlegreyrabbit and scrutineer) that grants be awarded only subject to the proviso that a significant proportion of the effort (e.g. 15–20%) is spent on verification/validation analyses.

          5. ” I take your point about “philosophy class” but I don’t think you need to discard Popper, Kuhn (I’d also throw in Feyerabend) et al in relation to real life science.”

            Chris, I would have thought Feyerabend took a view of science that you would not be favorably disposed to. Personally I enjoy reading him, but I don’t take him too seriously – more entertainment from his intentionally provocative positions.

            Can you give us some more detail about what precise aspects of Feyerabend’s philosophy you think contributes to this issue?

          6. Re: Feyerabend: I’m not sure that “precise aspects” is a particularly appropriate term when considering Feyerabend, but what I think is useful is the recognition that science is done by people, that scientists are by and large not terribly different from everyone else, and that the rather idealized notions concerning the “scientific method” can give an inaccurate picture of the way productive science is done in the real world.

            That’s not to say that I don’t think that reproducibility is a fundamental element of scientific progression, that falsifiability is a particularly useful concept, that one should take especial care in designing experiments and ensuring that meaningful controls and good stats are performed where appropriate, and that alternative explanations should be very carefully considered before committing one’s lovely data and interpretations to print (not to mention not splicing and Photoshopping one’s westerns).

            I would have thought that the arsenate example I gave had some Feyerabendian elements, and I don’t think we’re really any the worse for that (I’ve learned rather more about arsenate chemistry than I would otherwise have done). Of course Feyerabend is more useful when considering the progression of a field from a historical perspective, rather than as a guide on how to do science!

        3. I just don’t think that’s true Scrutineer. If the work of other scientists impacts the work that one is doing under the constraints of a defined programme of research within a grant then it’s entirely appropriate that one might reproduce that research in the course of taking your own research towards its aims.

          For example, if you’re working on developing a potentially therapeutic inhibitor of an enzyme and another group works out how to crystallize the enzyme and publishes a structure, you are very likely to suggest to your crystallographer colleagues that it would be a good idea to crystallize the enzyme oneself, to see whether one can make inhibitor-enzyme cocrystals to define the nature of the binding site so as to improve the design of your potential therapeutic and so on. You’d be pretty foolish not to. If this involved a significant redirection of your project then it would be normal to inform that grant awarding body that you consider it beneficial to the outcome of the grant to pursue this line of work. That’s completely normal.

          1. By reproducibility we usually mean repeating the exact, same, experiments. (Given the “real world” proviso that repeating experiments is usually somewhat approximate as the information to accurately repeat them is incomplete.) There is almost never funding available to do this.

            What you describe is sensible and might contribute to Popperian-style refutation but it is still an attempt to go forward. The crystallographers would be attempting to define the conditions to crystallise said complex – this is new research, building upon the prior reports. If the crystallographers can’t crystallise the complex, they can say nothing about the validity of the earlier work. If it is fraud, they unfortunately will never crystallise the fictitious complex – however trial and error crystallisation trials don’t address that issue. But it may nip in the bud the careers of nascent structural biologists before they can get going. I wonder if this has ever happened before?

            My suggestion is that grants be expected to routinely fund reproducibility experiments. This has the corollary of helping to protect the poor unfortunate who can’t reproduce the earlier results. If I, as the grant holder, have a deliverable to report on reproducibility and my bench grunt can’t reproduce, I will look very carefully at the lab books, think hard about the experimental conditions and the controls. In fact, I would think a lot harder about the failing experimental science than when things are going well (for I am weak and easily overexcited), because if my people can’t reproduce some prior work, future deliverables of the grant will almost certainly need to be revised. Needing to write my report, I will switch into top guidance mode and try to work out what the key experiments are to nail the reproducibility issue. These will go into my grant reporting because future funding is dependent on it, even if they never make the refereed literature. The point is to reinstate scientific method in science.

  13. Note: Historically, splicing was accepted in some labs, but… In Saleh’s first published paper I find it disturbing that while some figures (e.g. Figure 3B) have lines that indicate splicing, in a figure like 4B it’s pretty obvious what is going on, yet there is no indication that it has been spliced. It’s open access:
    http://www.ncbi.nlm.nih.gov/pubmed/11046157

    1. I agree that in figure 4B it is obvious that there is splicing between lanes 1 and 2, and between lanes 4 and 5 (vertical, abrupt changes in background), and that it has not been marked as spliced. Even when people mark something as spliced, it does not take away from the fact that it is spliced.

      2000 was a bit late to be doing that sort of thing.

      1. And as in most cases by labs that practice(d) this unscientific copy-and-paste technique, the control lane is of course one of the spliced-in lanes, which of course disqualifies you from saying anything about the comparison with the other lanes.
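        For what it’s worth, this kind of splice is mechanically detectable: a pasted-in lane shows up as a jump in the mean background intensity between adjacent pixel columns. A minimal sketch in Python (the function name and threshold are my own, purely illustrative – not a tool anyone in this thread actually used):

```python
import numpy as np

def splice_candidates(gray, threshold=20.0):
    """Flag column boundaries where the background level jumps abruptly.

    gray: 2-D numpy array of pixel intensities (0-255).
    Returns the indices i where the mean intensity of column i and
    column i+1 differ by more than `threshold` grey levels -- a crude
    proxy for the "vertical, abrupt changes in background" that
    betray a spliced-in lane.
    """
    col_means = gray.mean(axis=0)          # average intensity per column
    jumps = np.abs(np.diff(col_means))     # change between neighbours
    return np.where(jumps > threshold)[0]

# Toy example: two "lanes" pasted together with different backgrounds.
left = np.full((40, 30), 200.0)    # light background
right = np.full((40, 30), 120.0)   # darker background
blot = np.hstack([left, right])
print(splice_candidates(blot))     # jump detected after column 29
```

On a clean, unspliced scan the background drifts smoothly, so no column-to-column jump exceeds the threshold.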

  14. Don’t know about this one – if it is an issue it will need a more sophisticated analysis than I can do. But since this group have form in splicing, can I very tentatively suggest:
    J Immunol. 2010 Nov 1;185(9):5495-502. doi: 10.4049/jimmunol.1002517. Epub 2010 Sep 27.
    Caspase-12 dampens the immune response to malaria independently of the inflammasome by targeting NF-kappaB signaling.

    The paper is bedeviled with the same interpretative problems as outlined by Physician Scientist – namely at least the Casp1 -/- is also Casp11 -/- – and I would suggest there is still the possibility that Casp12 -/- is also.
    http://www.jimmunol.org/content/185/9/5495/F4.large.jpg
    In figure 6A, is there a very faint line descending from beneath the 7th lane, left-hand side? With a lot to drink, some squinting and 3D movie glasses, I convinced myself it might continue the actin bands.
    The lanes of the bottom 2 panels of proteins don’t seem to line up with the top 3 panels. In particular, pERK has a bit of a smile which is lacking in ERK. Actin-normalized quantification is critical to the very weak finding this blot is showing. So if the actin blots come from elsewhere – as Michael Briggs has pointed out they have a tendency to do – then this figure is kaputt.

    1. J Immunol. 2010 Nov 1;185(9):5495-502. doi: 10.4049/jimmunol.1002517. Epub 2010 Sep 27.
      Caspase-12 dampens the immune response to malaria independently of the inflammasome by targeting NF-kappaB signalling.

      Figure 2A, upper panel, clearly looks like it was manipulated. The background for three bands is lighter and there is a 9×8 pixel square with no noise (constant grey value).
      http://www.jimmunol.org/content/185/9/5495/F2.large.jpg
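      A constant grey value is easy to screen for programmatically, since a genuine film scan always carries grain and sensor noise: an exactly zero-variance patch is a strong hint of digital painting-over. A toy sketch (the function name is my own and the 8×9 default just follows the square described above; this is an illustration, not what any reviewer here ran):

```python
import numpy as np

def flat_patches(gray, h=8, w=9):
    """Return top-left corners of h x w patches with zero variance.

    A real scan always carries grain, so a patch whose pixels are all
    exactly one grey value suggests something was painted over.
    """
    hits = []
    rows, cols = gray.shape
    for r in range(rows - h + 1):
        for c in range(cols - w + 1):
            if gray[r:r + h, c:c + w].std() == 0:
                hits.append((r, c))
    return hits

# Toy image: a varying background with one pasted-in flat square.
gray = (np.arange(400).reshape(20, 20) % 7).astype(float)
gray[5:13, 4:13] = 3.0        # 8 rows x 9 cols of constant grey
print(flat_patches(gray))     # [(5, 4)]
```

Real images need a small variance tolerance rather than exact zero, since JPEG compression can flatten regions on its own.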

  15. I just hope some good comes of this. Hopefully, she learns from this and is a productive investigator in the future.

    1. It’s interesting to read from the comments here that cIAP1-/- are also caspase 11-/-, since some recent papers published after the Biochem J article do not address that issue at all.

  16. This is the real world.

    http://publications.mcgill.ca/medenews/2011/10/12/dr-maya-saleh-awarded-the-prix-andre-dupont-from-the-crcq-drs-cristian-oflaherty-and-jacques-lapointe-also-honoured/

    Leaving aside any questions about who is responsible, did the selection committees for the prizes look at the papers? I imagine that was a major criterion, but I could be wrong. If so which papers were included?

    http://publications.mcgill.ca/medenews/2011/06/22/recognizing-women-pioneers-in-medicine/

    Did Dr. Richard I Levin look at the papers?

    “With a growing library of awards from FRSQ, CIHR and others, major funding successes in her name and recognition on both North American and European research stages, Maya Saleh is an exemplar of excellence among women and among all those inspired to push past conventional boundaries in the Faculty, in McGill and internationally,” said Vice-Principal of Health Affairs and Dean of Medicine, Dr. Richard I. Levin.

  17. In Canada there is nothing analogous to ORI. Our 3 major government granting agencies (health, science and engineering, and social science) have issued guidelines for acceptable behaviour. While the granting agencies have the ability to ban individuals, in reality the universities are expected to do the investigations and enforce the punishments. The report issued by McGill is likely the end of the matter academically. There are several Canadian scientists with blogs where these issues and the resulting concerns have been discussed.

    We’ll see what the journals have to say about this work.

  18. I do not think the idea of requiring other scientists to verify prominent results can work. The likely outcome would be that the “big guys” would still go about business as usual and the less successful would be burdened by this exercise. This problem arises because the labs that produce unreproducible data can’t even be bothered to institute quality control within their own labs. Many scientists have attempted to reproduce published data without success – some have even published their results – without consequence to the original laboratory. There are too many variables to prove deception/poor practice.

    Fraud is not likely the most common reason for unreproducible data (bad data is just too common). Much of it is likely just bad practice.

    If there are no consequences for producing poor science, then having a bunch of overworked mid-level scientists trying to reproduce the superstar’s results is a complete waste of potential.

    Do you propose to shutdown labs that produce sloppy results? Limit the number of postdocs a superstar can oversee? Do you expect to put a “couldn’t reproduce” label on papers? or append a “couldn’t reproduce” document to grants? Sounds like a job creation program for lawyers who would happily spend all the biomedical research money available to make sure this never happened.

    So, if you can’t do anything ……

      “There are too many variables to prove deception/poor practice.”

      That doesn’t matter. It doesn’t have to have a punitive focus. It would probably be even better if it operated on an automatic assumption of no malfeasance, unless incontrovertible evidence emerged to the contrary.

      But it would
      a.) Raise awareness of the issue. I am still smarting from an accusation that I went to a weird university that someone here had the temerity to make. Whereas in fact my university operates along the exact same lines as Harvard, McGill, Cambridge and UCL (to name just a few).
      b.) Remove bad data and bad interpretations faster. The core of this work on caspase 12 is actually very interesting. Namely that there is a functional caspase 12 in some Africans and Tamils (and maybe other groups?), but generally it seems to have been selected against in humans although not in other species. That discovery just by itself probably merited a Nature paper. Unfortunately they then added an implausibly strong effect on sepsis built on a very small case control study and in every single paper since they have attempted to nail this interpretation deeper and deeper. By doing so they may have prevented better understandings being developed.

    2. In reply to ROB January 27, 2013 at 10:54 pm

      I agree with what you are saying. You’ve spotted the issue of power.

      I think that chirality first mentioned the idea of an audit of papers when somebody is found to have committed scientific misconduct. Why not have such an audit of papers once somebody reaches 50, 100, 200… papers?
      This could be at the level of looking at the images in the papers (the nearest you get to primary data). The images can be scored for things that most people consider unacceptable. In the end we must be able to score things otherwise it is all voodoo (no offence to practitioners of voodoo). To go through the images in 50 papers takes a couple of days. During this time you will start to notice any image reuse between papers, which is more difficult to explain.

      I know that the piper always calls the tune, but it may be possible to have an independent body to do the audits.

      http://scienceimageintegrity.org/

      You hit 50 on pubmed. Within 6 months you need to submit them for audit. You, your institute, pays 1000-2000 dollars (a lawyer will cost you that for a few hours).

      If serious issues are found then the analysis goes deeper.

      This might stop some people from making themselves bigger targets.

      What I have noticed is that when retractions appear on this blog you can relatively easily spot image manipulation/misuse quite early on in the careers of the authors. If the stuff is not real, people will have to make some things up in order to get past reviewers.

      1. I don’t think it is wise to get focused on image fraud. I have yet to see a case of image fraud where I wasn’t confident that I could have got the exact same output – even assuming the entire scientific claim the figure was making was false – without going near Photoshop.
        I used to work in a lab that prided itself on the ability to turn out spurious but undetectable data.
        The possibilities are endless: run your actin controls and your protein of interest on different gels. Spike with a commercial prep of your protein of interest. Don’t provide the conditions you say you did – doing a time course? Do the time course for the wt as described, but use only half the time periods for the knock-out. Add a commercial kinase and/or phosphatase to alter phosphorylation status as required. You only do this AFTER you have checked whether your grant proposal idea actually worked or not. But why let a piffling thing like reality get in the way of your career?

        Just to prove the utility of replication, did anyone click on the supposedly unrelated corrigendum? Unrelated or not, it was highly relevant. The corrigendum was sent in after another group had responded to an earlier Nature paper of 2011:
        http://www.nature.com/nature/journal/v488/n7412/full/nature11366.html
        In it they quite unambiguously state they were completely unable to replicate a paper of the Saleh group
        “Using the same strain of Bid−/− mice used by Yeretssian et al.1, we found that the mice responded like wild-type mice to NOD ligands, and that the levels of NF-κB or ERK activation and cytokine secretion from Bid−/− BMDMs were indistinguishable from the wild-type response.”

        To which you may say: Aha, that proves the scientific record is self-correcting after all. To which I would reply no, because
        a) Saleh et al just wrote back saying they had repeated it again and the results were even better than last time. Besides which one of Saleh’s co-authors at La Jolla had suddenly been seized with the urge to purify NOD1 and show that it really did bind BID.
        b) It presumably could only happen because the dissenting group was in Australia and didn’t fish from the same patronage pool as Saleh et al.
        c) It could only happen because the dissenting group had the knock out mouse already on hand.

        The literature didn’t self correct, although it is not impossible it may have been the trigger for whatever chain of events caused McGill to set up an investigative committee.

        1. “I don’t think it is wise to get focused on image fraud.” Who is wise?

          It is something which we can see.

          “I have yet to see a case of image fraud that I wasn’t confident that I could have got the exact same output even assuming the entire scientific claim the figure was making without going near photoshop.”

          I don’t understand the logic.

          1. I’m with Fernando on this. Just because one could have got away with a piece of fraud in a way that was undetectable doesn’t mean we should ignore it. Think about criminology (that’s what we’re talking about here) – people get caught because when they commit the crime they are alone, assume no one will ever find out and therefore don’t cover their tracks well. I understand what littlegreyrabbit is saying but I actually believe it’s quite hard (psychologically) to fabricate a gel image from the outset. To do so would involve throwing the entire book of sanity / morality out of the window, and it would also render all lab work that you do totally meaningless. I think people who spend time working on something ‘for show’ and then fabricate the results entirely are relatively rare (although RW has covered some of them e.g. Roman-Gomez and the poached images from other papers).

            What we are seeing with Saleh’s Nature paper looks to me more like the end of a series of steps, starting with ‘cleaning up’ of images, but it’s a slippery slope – once you start doing that, where do you stop with the ‘fiddling’? Image fraud is a window onto the wider subject of data falsification, and worth discussing because it provides a kind of ‘positive control’ to assess how authors, institutions and journals behave when there is indisputable evidence of misconduct (as in this case). And in this case the institution seems to have bungled their task badly by stating that ‘the paper can stand’.

        2. in reply to littlegreyrabbit January 28, 2013 at 7:34 am

          “I used to work in a lab that prided itself on the ability to turn out spurious but undetectable data.”

          Please post a few of these papers and we can put it to the test.

        3. The notion of literature “self correction” is a misnomer (a better term might be “gets there in the end”). However you define the phenomenon, you have to give things a chance to work themselves out. Saleh et al provided evidence that Bid is required for Nod1-mediated signaling, and the Australian group presented evidence that Nod1 signaling in their hands can occur in the absence of Bid. In the real world Bid either is or isn’t an enhancer of Nod1, and no doubt if this issue is sufficiently important it will work itself out, the relevant papers will be published and for that particular issue the literature will have “self-corrected”. That’s how things work in my experience.

          One might note in passing that there is a difference between Bid being required for Nod1 signaling and Bid being an enhancer of Nod1 signalling, and perhaps Saleh et al and the La Jolla group are pulling back a little from their original claim.

          I doubt the La Jolla group were “seized with urge(s)” any more than any of us are seized with urges in the day to day progression of our studies. They’ve been working on Nod1-mediated signaling pathways for at least 10 years, and have recently managed the difficult task of overexpressing and purifying functional protein. That should make a huge advance in their ability to study the mechanisms involved in Nod1 regulation and it seems like a pretty obvious contribution to this to assess whether functional Nod1 interacts directly with Bid. Hard work is a much more fruitful approach to scientific progress than cynicism…

          1. “Hard work is a much more fruitful approach to scientific progress than cynicism…”

            Crossed threads, I don’t disagree, but many people work hard all the time and it doesn’t get them anywhere. I would not be an exclusive thinker. What about hard work and some cynicism? Too much cynicism becomes corrosive, convoluted and nihilistic. Been there, done that…no solutions proffered.
            Hard work and critical thinking is what I want to say.

          2. “recently managed the difficult task of overexpressing and purifying functional protein”
            Do you have any indication to suggest it was a difficult task? It looked fairly straightforward – provided over-expressing NOD1 didn’t cause the cells any deleterious effects. I can’t see anything in the methods suggesting they had to do anything out of the ordinary.

            What strikes me is that it was presented in response to another group not being able to replicate a paper – and in my experience in such situations people tend to be very sure of themselves, and to believe the issue of sufficient importance, before going into print – yet in their response they did not indicate that this group was a close collaborator.
            Since the data was so useful, let’s look at the figure in question:
            http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3249515/figure/F7/

            Of course it would be incredibly easy to fake this, if someone was so minded. Just spike a bit of your purified NOD1 into the relevant lane. As it is, assuming that the probing antibody affinities were the same (and they both should be good, being for FLAG and His epitopes respectively), the blocking of the membrane was identical and the procedure was carried out identically for the two antibodies – the signal for BID seems a lot stronger than NOD1 (background a lot whiter, suggesting a much shorter exposure). This despite the fact the complex was being precipitated by antibodies against NOD1. But this kind of guesswork is worth nothing really, which is why there is such value in third-party validation – for their own benefit as well, since question marks must surely be hanging over this group.

            Immunoprecipitation ought to be sufficiently robust if they weren’t cheating, and if they were cheating they could probably manipulate any method. Still, it would have been nice to see an association demonstrated through at least one other method – either through the S200 column or maybe through sedimentation equilibrium – although since there is such disparity in the size of the two peptides, 105 kDa versus 17–25 kDa, maybe the alteration in the elution or sedimentation profile of NOD1 would not be significant.
            The gold standard would probably be using NMR and an N or H labelled BID peptide and see if you can define a binding face on BID. Not as unrealistic as it might sound since they have already done such studies on BID before, so they have both the techniques and should have assigned peaks already.

            I don’t see literature self-correcting, and I don’t see a clearer picture emerging of which group, the Melbourne or the Canadian, is right in this instance – I expect both will just continue to muddle along. What I see is a situation where the noise-to-signal ratio in biomedical research rises slowly but inexorably until steps are taken to reinject rigor into the discipline.

  19. Does anyone know what the actual issue is with the 2009 Immunity paper? I know about the caspase-11 problem, but what was brought to McGill’s attention? This is directly relevant to our work, and my post-doc has been unable to find the issue after looking at the blots at hi-res.

    1. I couldn’t turn down that challenge :)…

      As described by Dr. Brookes at Science Fraud for the Cell Host & Microbe paper, the image manipulation of choice in this series of papers appears to be the Photoshop “Clone Stamp” tool. In the Immunity paper, take a look at the background of the lower left blot in Figure 3B: there is a distinctive hook-shaped mark and a line below that are repeated. It is very likely that this is the clone stamp in action, used to cover up some unwanted feature of this blot.

      1. Now at least I know what “Clone Stamp” is. I’m pretty good at catching duplicate blots in manuscripts I review (likely 1 in 100 manuscripts), but there’s no way I’d catch something this subtle on review.

        The scary thing with the mouse histology is that there’s no way to know that they are truly showing a “representative example.”

          1. Wow. The rule in my lab is that if it requires more than simple linear brightness/contrast on the whole image, you rerun the experiment.

            Although, as my kids get older, I’m realizing that I might not be as up on technology as I used to be. I can honestly see how a PI who isn’t up on the graphics tools and who didn’t look at the primary data could be fooled. This “clone stamp” technique is pretty tricky.
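(As an aside: exact clone-stamp repetition of the kind described above is actually catchable programmatically. Here is a minimal Python sketch on synthetic data – the image, patch size, and coordinates below are all invented for illustration, and real forensic work on a published figure would need to tolerate scanner noise, e.g. via normalized cross-correlation rather than exact matching.)

```python
# Minimal sketch: flag exactly repeated patches in a grayscale "blot" –
# the fingerprint a careless clone-stamp leaves behind.
# All data here is synthetic; nothing below reproduces a published figure.
import numpy as np

def find_duplicate_patches(img, patch=8):
    """Return pairs of (row, col) corners of identical non-overlapping patches."""
    seen, dupes = {}, []
    h, w = img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            key = img[y:y + patch, x:x + patch].tobytes()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes

# Simulate a clone-stamped background: copy one patch over another
rng = np.random.default_rng(0)
blot = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
blot[40:48, 24:32] = blot[8:16, 8:16]    # the "repeated hook mark"
print(find_duplicate_patches(blot))      # → [((8, 8), (40, 24))]
```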

  20. In Ivan’s update he quotes associate provost Lydia White as saying “The relevant findings were transmitted to the editors of Nature and the Proceedings of the National Academy of Sciences and the authors are in the process of correcting the record.”
    Were the editors of these journals told that the figures in Nature had been “intentionally contrived and falsified”? Or did they just try to publish (mega-)corrections?
    When journal editors are deciding whether to do nothing, publish corrections, or retract papers, the findings that some of the figures were intentionally falsified would seem to be relevant to their decisions.
    It would be very interesting to hear from the journal editors what they have been told.

  21. Since there is a queue of Saleh group papers, can I add this one? Figure 6A:
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3234874/figure/F6/

    The mature IL-1β strip is cut in an odd shape. I can’t see any obvious marks that suggest anything untoward – but then I wasn’t able to spot all the issues the McGill committee identified either. Perhaps this odd cut is an indication of something going on, perhaps it isn’t. It’s a free world; they can cut their strips in any shape they want.

    I don’t know why they don’t include the entire 11 kDa and 9 kDa strip either – unless the full strip never existed, which would suggest the strips above may have been spliced.

  22. What looks like the splicing of a pure grey background alongside a portion of a blot:
    http://img69.imageshack.us/img69/5237/temppd.jpg – red arrow marks the splice.
    This is from Figure 3c, strip: WB IKK-Beta
    in Non-apoptotic role of BID in inflammation and innate immunity, Nature 2011 Yeretssian et al.

    I don’t think this is the only case in this paper, but it is the clearest one and furthest away from a band so as not to have to worry about compression artefacts.

  23. In reply to michaelbriggs January 31, 2013 at 10:28 pm

    What I see in Nature 1064–1068 (20 April 2006), Fig. 4c, middle box labeled Total Casp-12, is that the right end of the upper band in the 1st (left-most) lane is straight and vertical with a thin white edge to it. I think that some manipulation happened there.

    The upper band in the 2nd lane looks very much like the upper band in the 1st lane. The same mottling within the bands can be seen, for example the diagonal light streak near their left ends. The superimposable parts of the upper bands in the 3rd and 4th lanes also have the same mottling pattern. I don’t think that there is any reason that should happen naturally.

    The lower bands in the 2nd, 3rd and 4th lanes, including the little diagonal streak above and just to the left of their right ends, look the same. The left end of the lower band in the 1st lane looks like the left ends of the lower bands in the 2nd, 3rd and 4th lanes.

    The superimposable parts of the middle bands in the 3rd and 4th lanes look the same to me. The superimposable part of the middle band in the 2nd lane is also the same as in the 3rd and 4th lanes.
    The difference is that just under the right end of the middle band in the 2nd lane the edge comes further in.
    You might think that there is a difference, but it is simply that that part is no longer there.

    The mottling in the left half of the middle band in the 1st lane looks like the mottling in the left halves of the middle bands in the 2nd, 3rd, and 4th lanes. The right half of the middle band in the 1st lane is smoother, and the little diagonal streak has become 3 little diagonal streaks.

    That makes quite a few things which I think are unnatural just within that box. Nature needs to consider physics.

  24. 3 months have passed – radio silence from McGill and Nature – what’s happening?

    Are Nature insisting on retraction?

    Was that a pig I just saw preparing for takeoff?

    1. “Correction” in Nature.
      http://www.nature.com/nature/journal/vaop/ncurrent/full/nature12181.html

      …even though the figures in the original Nature paper had been “intentionally contrived and falsified”, and Nature’s policy, as stated in a recent editorial, is: “If an institution’s report concludes that misconduct occurred, we usually insist on retraction — and will issue the retraction ourselves if the authors refuse to comply.”

      1. Nature are following the recommendations of the McGill committee, about which it was reported:

        “The Immunity and Cell Host and Microbe papers could stand, said the committee, which recommended corrections for the Nature and PNAS papers.”

          1. In the Corrigendum, they state “We re-probed the original western blot in Fig. 4b with anti-tubulin”, but in the corrigendum Fig. 4b the tubulin bands seem to have a different shape and spacing from the p17 bands on the original blot shown in Fig. 4b.
            Although the images are not shown at a resolution adequate for proper assessment, the band in lane 4 of the lowest (Control Casp1,-5,-9) panel looks odd, and has a very sharp left edge.

  25. Another one for the “Thesaurus of euphemisms” I have just posted on. When someone finally gets to work on this tome, it is going to be very substantial…

  26. In reply to michaelhbriggs (May 30, 2013 at 9:22 am):

    Indeed I noticed the sharp edge on the “Control Casp1,-5,-9” blot too. If you adjust the contrast, you can see that this sharp line extends to the top and bottom of the blot: http://imageshack.us/photo/my-images/849/splicing.jpg/

    It certainly appears that the last band was spliced onto the blot. Quite unbelievable – I would have thought that, given the opportunity to submit a correction rather than have the paper retracted, the authors would have made sure the corrected figures were beyond reproach…

    1. It’s ghastly!

      Observe too that all those 4C gels have been provided at reduced resolution. Witness the square grey “pixels” in the backgrounds. These are supposed to be improvements?

      And as for the 4B “controls” that offend Briggs because they don’t match the original gel – well, they don’t even match each other 🙁 The left quartet come from a gel made with a comb that has bigger gaps between the teeth than the right quartet: the band-to-gap ratios are incompatible. Also, the left quartet have band curvature and a downwards-smear gel artefact, neither of which is shared by those to the right. Counting Briggs’s original complaint, that is four things wrong with this blot “slice”.

      In situations needing major fixes like these, why doesn’t Nature ask for high resolution originals to place in the supplement? They are much harder to fix by image fabrication. Perpetrators would at least have to do another experiment and even if that was doped, it makes them work harder and actually spend some lab money…
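(The contrast-adjustment trick used above to expose the spliced lane can also be automated. A minimal Python sketch on synthetic data – the grey levels, image size, and seam position below are invented for illustration; the point is that a hard vertical seam stands out as an outlier in the column-wise horizontal gradient.)

```python
# Minimal sketch: locate a suspiciously sharp vertical seam in a blot scan.
# All values are synthetic placeholders, not taken from any published figure.
import numpy as np

def vertical_edge_strength(img):
    """Mean absolute horizontal gradient at each column boundary."""
    return np.abs(np.diff(img.astype(float), axis=1)).mean(axis=0)

# Smooth background plus a pasted-in region at a different grey level:
# the paste creates a hard edge between columns 29 and 30.
rng = np.random.default_rng(1)
blot = np.full((50, 60), 120.0) + rng.normal(0, 2, (50, 60))  # scanner noise
blot[:, 30:] += 25.0                                          # "spliced" region

seam = int(vertical_edge_strength(blot).argmax())
print(seam)  # → 29, the boundary where the splice meets the background
```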

  27. In response to your quote: “a bit puzzled that you are not reporting this case to Canadian funding agencies.” I think the following information might be of help.

    Website:
    http://www.rcr.ethics.gc.ca/eng/policy-politique/framework-cadre/#11

    Tri-Agency Research Integrity Policy

    The Tri-Agency Research Integrity Policy (the Policy) is a joint policy of the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC), and the Social Sciences and Humanities Research Council (SSHRC) (the Agencies).

    The person who made the initial allegation could have forwarded (and perhaps still can forward) a copy of the complaint to this agency. Section 3.2 states: “Responsible allegations, or information related to responsible allegations, should be sent directly to the Institution’s designated point of contact, in writing, with an exact copy sent to SRCR.”
