In baseball, it’s three strikes and you’re out. In Nature, apparently, you can stay at the plate after three swings-and-misses.
That’s what we concluded from a Corrigendum in last week’s issue, for “CD95 promotes tumour growth,” originally published in May 2010 and now corrected not once, not twice, but three times.
Here was the first Corrigendum, from March 2011:
In this Letter, an experimental error affected the western blot analyses of mouse liver extracts shown in Fig. 4f and g. The secondary antibody cross-reacted with endogenous mouse IgG in the tissue lysates, resulting in incorrect bands. The experiments were repeated using different primary antibodies and a secondary antibody that showed no such cross-reactivity. Although there was a trend, the reduction of p-JNK and p-Jun in the livers of CD95-deficient mice in Fig. 4f was not statistically significant. In addition, although the increase in phosphorylated JNK and Jun in Jo2-injected mice was reproducible in Fig. 4g, Jun protein was also increased. Corrected versions of Fig. 4f and g are shown below. The Supplementary Methods have been updated to reflect the change in the use of antibodies, and Supplementary Fig. 8b has also been corrected. The corrected figures do not alter the overall conclusions of this Letter, and all other data still stand.
A June 2011 Corrigendum corrected that original Corrigendum:
In our recent Corrigendum (Nature 471, 254 (2011); doi:10.1038/nature09897), Fig. 4g inadvertently contained three incorrect panels. The corrected Fig. 4g is shown below. This mistake does not alter the overall conclusions of this Letter.
And here was the most recent Corrigendum, from last week:
In Fig. 1f of the original Letter, an incorrect actin blot was published: see the corrected panel in Fig. 1 of this Corrigendum. Also, in the original Supplementary Fig. 12c, some of the western blot data were either misinterpreted or raw data could not be located. We have now repeated the entire experiment: see Supplementary Information to this Corrigendum for the corrected Supplementary Fig. 12c. Although there are differences between the different experiments, the increase in phosphorylation of JNK and Jun was reproducible, confirming that stimulation of CD95 causes activation of JNK. All the conclusions of the original Letter are intact except for the data for the original Fig. 4f and g on the phosphorylation level of c-Jun and JNK in the livers of CD95-deficient mice, which have been corrected in two previous Corrigenda: Nature 471, 254 (2011); doi:10.1038/nature09897 and Nature 475, 254 (2011); doi:10.1038/nature10221. The results on the phosphorylation of JNK and Jun in mice injected with the murine CD95-specific agonistic antibody Jo2 under non-apoptotic conditions stand (both at the level of western blot and immunohistochemistry) and are not affected by the above changes; nor are any of the other figures. Further, the key findings of the Letter on the role of CD95 as a growth promoter in cell lines, in endometrioid, ovarian or liver cancer, and in liver regeneration are not affected by these corrections. For clarity, we now provide all the raw western blot data for the original figures and the corrected figures as Supplementary Information to this Corrigendum. A. Hadji and S. DeChant from Northwestern University generated data for the corrigenda. L.C. has declined to sign this Corrigendum.
It’s not clear why Lina Chen wouldn’t sign the Corrigendum, and we can’t seem to find Chen to ask. We did ask senior author Marcus E. Peter for comment, and will update with anything we learn.
In the meantime, if anyone can produce a flowchart or other graphic that explains what went right and wrong here, we’ll be happy to post it. The original paper has been cited 54 times, according to Thomson Scientific’s Web of Knowledge.
I am not a biochemist, but my working hypothesis concerning Western blots is that they usually do not look that great. To render them publishable, people started to alter them just a little bit. As the level of this alteration increased over time, crossing the threshold of forgery, the appearance of the blots in print improved. Once the quality of the altered blots became the expected standard, it became very difficult to meet this standard without resorting to fraud. The end result is that most retractions in biochemistry have something to do with Westerns. The “raw data could not be located” mantra is going to work provided that the blots are reproducible. In this case, the senior author seems to have outsourced the entire experiment just to cut through all the BS.
Producing perfect western blots doesn’t require any particularly high level of skill – completely green undergrads in our lab can usually produce usable data after only a few tries. There is no more excuse for image manipulation in western blots than with any other form of visible data representation. I think that western blots are taking a lot of unfair flak at the moment.
The original legend of Fig 4 begins:
Figure 4: Deletion of CD95 in the liver leads to a decrease in tumour formation caused by the reduced ability of hepatocytes to proliferate and to activate JNK.
In their own words, the new blots show:
Although there was a trend, the reduction of p-JNK and p-Jun in the livers of CD95-deficient mice in Fig. 4f was not statistically significant.
So how do they come to this conclusion?
The corrected figures do not alter the overall conclusions of this Letter, and all other data still stand.
Seems like the whole hypothesis related to JNK/Jun has been shattered by the new blots.
That last corrigendum is ridiculous. This paper is on its last legs.
I agree with chirality that the most likely explanation is some sort of image fraud in the westerns – we’ve had countless examples of this on RW.
However, a moderately close inspection of the paper reveals nothing obviously incriminating, e.g. image duplication. There’s a clear splice artefact in Supplementary Figure 10 suggesting cutting and pasting, which is poor practice but not in itself something that would force the authors to start correcting multiple other images. It would be good if others could look at this aspect.
So how did the matter come to light at all? One speculative explanation is that the corrections are the result of an internal investigation that, although it did not find misconduct, nevertheless concluded that certain images were erroneous. A potential trigger could have been a co-author, or someone who tried to follow up the work and saw severe problems in the way the experiments were conducted or described.
What is frustrating is that Nature just seems to act like a passive conduit for all this; they don’t seem to have any process to ensure that once a significant problem is identified, a thorough review of the whole paper is undertaken. It’s as if once its published, you could say that a dog ate all the original blots, and no one would care. There are so many examples of ridiculous corrections now in Nature that it’s hard to have any trust in anything the journal says post-publication. Some have been covered on RW but the list is actually much longer than the ‘megacorrections’ link.
It’s for these reasons that JBC is to be particularly applauded for having a dedicated person to deal with these issues.
Some journals are reluctant to admit that there was a problem in the review process. Justifying that process seems to be their only way forward…
Once Nature publishes an article, they really really really do not want to retract it. They would rather leave questionable science out there than hurt their brand. Of course, having a Nature paper retracted can be a death blow to a researcher, depending on their connections.
Is the paper JNK science?
I hate to beat a dead horse, but how about a quantitative immunoassay instead? An undergrad can develop a home-brew ELISA for relative quantitation, and ELISAs are Photoshop-proof.
The way most labs run them, ECL Western blots are at best quasi-quantitative, and more often are effectively qualitative. Li-Cor data can be better, but how many properly run Li-Cor blots (with standard curves and other controls) have you seen published? IMO, Westerns are so antiquated that they barely warrant inclusion in the Supplemental Materials section.
I suspect that this addiction to Western blots is here to stay because most academic labs are either ignorant, scared, lazy, or deluded about the cost of properly working up a plate- or bead-based immunoassay. It’s amusing to me how many labs will gladly spend $250 a pop for IP/Western antibodies, then waste weeks running gels, when they could run 60 samples in an afternoon with an ELISA at about $0.60 per well.
It’s the 21st century. BAN THE BLOT!
Don’t be silly. Antibody specificity isn’t good enough to switch entirely to ELISAs. Most antibodies don’t detect a single protein, and as such an ELISA will give you total protein detected, not specific protein detected. Western blots help with this problem by adding a separate layer of specificity – the molecular weight of the protein detected. The problem is not with the technique – the problem is with the interpretation and/or manipulation of the technique.
I hate to beat a dead horse, but how about a quantitative immunoassay instead?
Is that a joke? Why do you need a quantitative assay when a semi-quantitative assay does the job perfectly? WBs are not and never were quantitative; by nature they are semi-quantitative, so I don’t understand what point you are trying to make or why you are making it. The quantitative (or not) nature of WBs has little to do with why WB misuse is detected more frequently.
An undergrad can develop a home brew ELISA for relative quantitation…
Really? An undergrad can design and manufacture an ELISA-grade antibody and then develop and standardize the ELISA assay itself? I don’t think so. But the better question is: why would you bother when WBs work perfectly well? I wouldn’t want to be your student. Don’t make the common RW mistake of assuming the WB is fundamentally flawed because some people abuse it. It is not. Move on.
ELISAs are Photoshop-proof. …
And you can’t “edit” the data in Excel at all, right? Ah yes, the solution to the “WB problem” is to hide one’s data in an Excel spreadsheet. [face palming right now]. Laughable comment.
I have to agree with these replies.
Even for well-validated monoclonal antibodies, with known specificities, the likelihood of off-target binding and interaction can be nuts. The *point* of Western blot is to get around those specificity issues.
ELISA can most certainly be useful, and it would be to many investigators’ benefit to consider using it instead – but as a quantitative supplement. Western blot will answer questions of antibody specificity that ELISA (and other similar sorbent assays) simply cannot.
In the end, even the most well-characterized ELISA antibodies are still validated by Western.
Call me old school, but I cannot understand how a scientific paper can stand when you have to correct almost half the initial figures.
In my own, and admittedly closed, mind, if you have a problem with any figure, the house of cards has already crumbled. The peer review process is built on absolute trust. We trust what our colleagues tell us and their figures/data. Once that trust is broken, it will be very hard for me to trust the same people again.
It is bad science and we all know it, and if the scientific community does not stand up to that, I am sure other people will be hot on our heels soon.
Agreed. If corrections to most of the figures have no effect on the conclusions, why were those figures in the paper to start with? Figure 4g must be entirely irrelevant, since neither the correction nor the correction to the correction of this figure changes the conclusions. And if the researchers made detectable mistakes in their figures, I expect that they made mistakes elsewhere in their procedures and in their data collection. I get that the publication means a lot to them. They should have thought about that back when they were doing the research and preparing the manuscript.
Very interesting story and comments. I suspect AMW’s remarks above are very close to approaching the truth of what happened.
On another note, does anyone know what happened to the “abnormal science” blog? It was deleted. http://abnormalscienceblog.wordpress.com
It had extensive postings about the Aggarwal debacle at MD Anderson. I wonder if outside forces prevailed in shutting it down? By my count, only RW and science-fraud.org remain as publicly accessible watchdogs on scientific miscreancy.
This paper never made sense in the first place; it makes even less sense now.