A Nature Cell Biology article on insulin regulation has been retracted after scientists noted that the antibodies used in their research were not as specific as they had previously believed.
The notice is clear on the problems with the science, which together “call into question the main conclusions of the paper.” Three of the paper’s five authors were employed at Novartis at the time of publication.
Here’s the notice for “Wolfram syndrome 1 and adenylyl cyclase 8 interact at the plasma membrane to regulate insulin production and secretion”:
Our Letter reported that WFS1 modulates insulin biosynthesis and secretion due to an interaction between WFS1 and AC8. It has recently become apparent that the anti-AC8 antibody, as well as some other key antibodies employed in the Letter are non-specific. In addition, we noted increased levels of WFS1 protein and the appearance of what may be post-translational modifications after incubation with high glucose (Fig. 5a,b and Supplementary Figs S4,S5). These two observations could impact protein–protein interactions by means other than altered trafficking, as the paper suggests. These concerns call into question the main conclusions of the paper. While we maintain that WFS1 does play a role in insulin biosynthesis and secretion, we cannot conclude that this is due to a WFS1–AC8 interaction. For these reasons, we think that the most responsible action is to retract the paper. We apologise to the scientific community for any confusion this publication may have caused.
The paper, which has been cited 23 times, according to Thomson Scientific’s Web of Knowledge, was a big enough deal to get a “News and Views” in Nature Cell Biology. That article now links to an (oddly, partially paywalled) note:
We wish to alert readers to the fact that the Letter (S. G. Fonseca et al. Nat. Cell Biol. 14, 1105–1112; 2012) on which this News & Views was based, has been retracted. The comments in this News and Views that rely on the accuracy and reproducibility of this data should therefore be reconsidered.
We’ve emailed the authors and editor, and will update with anything we hear back.
Maybe I am too cynical and mistrustful, but has “blaming everything on antibodies” become a new excuse? Immunodetection is a well-established technology. Anyone can follow a wrong lead initially, but how can one seriously assemble an entire NCB paper, after years of experiments, on several (!!!) faulty antibodies without noticing the problem?
No, antibodies are being sold without proper controls. It happens a lot more than we would like to admit. Polyclonal antibodies made in rabbits are as variable as the animals used to produce them.
In my experience, research conferences are great places to learn which antibodies/companies have great or horrible quality control.
Errors happen when we trust the expected results before checking how good the antibody actually is. http://www.proteinatlas.org/ is a great resource to keep in mind.
Even better, check out http://www.antibodypedia.com/ . Expect at least half of commercial antibodies to fail, as most of them are retailed by aggregators that are not responsible for manufacturing or QC of any kind. Lot-to-lot variability is another killer that has ruined many projects.
And please, share your own experiences with non-specific, as well as specific, antibodies with your fellow scientists. We have started the independent antibody review site http://www.pAbmAbs.com. This site communicates antibody reviews and validations provided by scientists to the research community, aiming to limit the cost and time spent searching for good research antibodies. We invite everybody to contribute reviews.
Julius, it is not the manufacturers’ fault the paper was retracted. Everyone who has ever done a western blot or immunostaining knows the issue of unspecific bands and signals. Everyone with biolab experience has stories to tell. Precisely because of these decades-spanning experiences, it is the duty of the scientists to make sure they are not looking at an artefact or unspecific signal before submitting the paper. Classically this is done with knockout and siRNA negative controls, but there are plenty of other options.
Blaming the supplier is a pretty lame excuse in my book. Knockdown/knockout technology has come a long way, so when specific immunoreactivity is a large focus of the work, and the submission is going to have a high profile in the press, one had better be prepared to spend some time and money assessing the selectivity and specificity of critical reagents.
I’m rather surprised that these folks were at Novartis at the time the work was done. This kind of sloppiness isn’t tolerated much in Pharma anymore; it’s too expensive.
Excellent point about Novartis (whose people have already had to retract other papers: http://retractionwatch.com/?s=novartis), since this is exactly what industry research is supposed to be strong in: routine applications and quality control.
Following a wrong lead because of unspecific antibodies is indeed a big problem in biology. Also, researchers can spend months looking for the proper antibody for a particular application.
We share all our data on research antibodies – positive and negative – on the website http://www.pabmabs.com
Something weird is going on with this paper. In Fig. 4h the bands for WFS1 and AC8 do not match particularly well with any bands in the scans of the whole blots provided in the supplementary information. Interestingly, lanes 3 and 4 of CaM can be found in the supplementary scan, but lanes 1 and 2, which are barely visible in 4h, are pretty strong in the supplementary scan. There are more examples: in Fig. 5a, some of the bands for GRP94 can’t be found in the provided scan of GRP94 in the supplementary information. It’s freely available through PubMed, so please have a look. I believe there is more to this retraction notice than an antibody problem.
Excellent contribution, you did a great job! Please share your findings on PubPeer and PubMed Commons!
Otherwise, “non-specific antibodies” will soon become a common euphemism for wonky or manipulated data in retraction notices and corrigenda.
At first sight, I would agree. Fig. 4h does look strange compared to the full blots. The WFS1 bands look different on the full blot, and for AC8 there is really not much of a band on the full blot at the indicated 160 kDa at all…
Looking at the full scans, you really don’t have to be a molecular biology mastermind to realise that the AC8 polyclonal Ab is not very specific. “Blame it on the Ab” is a pretty lame excuse here. What were the reviewers doing? They certainly did not look at the full blot scans. I always applauded NCB for asking for the full blots, but they are useless unless someone actually takes a look at them…
Also, in 4j the AC8 band is supposed to be 160 kDa, but the 148 kDa marker on the full blot actually runs above that band. Now, sizes vary depending on conditions, size markers, etc. And on the Santa Cruz data sheet accompanying this “formidable” Ab (sc-20764), I would also say the band they show on their blot is below 160–165 kDa, the size reported in the literature. But I still think it is misleading to indicate a size that the size marker actually used does not support. That makes it a “claimed size” as reported in the literature. This is less of an issue if you have a very specific Ab, but their own blots really showed them that this is not the case here. In my humble opinion, it should have become apparent to the authors not “recently” (as their retraction note claims) but during their initial experiments that the Ab is not specific.
In my old lab, nobody was a big fan of commercial polyclonal Abs. Usually, polyclonal Abs of our own production, rigorously affinity purified, worked better than a lot of the commercial stuff. But that means a lot of work, time and money you have to spend.
“They certainly did not look at the full blot scans. I always applauded NCB for asking for the full blots, but they are useless unless someone also takes a look at them”
I’m curious about that statement, since I was under the impression that supplementary materials were not considered in peer review. I’ve started incorporating full blots as supplements to submissions where the main figure shows only a slice of a blot, but have never heard a peep about them from a reviewer or reader, even when I inadvertently formatted them wrong prior to publication (a mistake I caught in proof).
It seems to me that unless the blot image has a certain format and includes specific data, it is of limited value to include an image of the full blot anywhere. Are there generally accepted standards for this type of supplemental data, or is it enough to provide someone with the primary data file if asked?