The title of this post is the headline of our latest column for LabTimes. It’s inspired by a number of animated discussions on Retraction Watch following our coverage of various Western blot problems — some unintentional, and some, well, less so.
Take, for example, this comment, which we quote in the column:
Western blots have become a joke in published papers due to the failure of researchers to perform proper controls/replicates and the failure of reviewers to demand this data.
As we note in the column (links added):
Pictures of the gels that result might be highly valuable for researchers trying to demonstrate an effect but they also seem to be the method of choice for image manipulators trying to make their results look better than they really are.
Such manipulation, which often involves using the same controls multiple times, has brought down more than 30 papers by cancer researcher Naoki Mori, left the career of resveratrol scientist Dipak Das in serious question, and forced the resignation of cardiology researcher Zhiguo Wang. And those are just a few of the most visible cases.
We offer some suggestions for improvement in the piece, which you can read in its entirety here. As always, we look forward to feedback from Retraction Watch readers.
I tend to hate the use of Western blots in papers because they are chopped to bits. The standard operating procedure is to take as narrow a slice as possible of the band of interest and the control. The control may or may not be from the same gel. Who knows. If space is the issue, then address that in the paper. But I’m personally a proponent of *requiring* that the full gel image be available in the supplementary material if nowhere else. If we require array data to be deposited in a public repository, why don’t we require that unmodified gels be available as an online supplement?
I agree with Eli – there is no reason not to require scans of the full gel image to be sent along with the paper and attached as supplementary material. If anything, that is what supplementary material should be used for, not as a substitute for the methods section, as so many journals seem to be doing.
I think part of the problem with the prevalent manipulation of Western blots might also lie with the antibody companies themselves. I feel as if many companies either outright lie about their antibodies or at least do not test them properly. We wind up having to go through several antibodies until we find one that actually works on our preparation, especially for IHC but sometimes for Western blots too. Our laboratory has a couple of vendors we will never buy antibodies from because we have been burned by them too many times. We have had several instances where we do a final check of the antibody in a knockout mouse for the protein in question and the band is still present. I can imagine a researcher, frustrated by an antibody that does not work as advertised, taking the next step of cleaning up the gel to make it look the way it was advertised to, especially under time and budget pressure.
There is a nice paper out there about systematic validation of antibodies BEFORE using them, not in Westerns obviously (who cares, right?) but in microscopy. This involves knockdown with siRNAs and independent verification that the knockdown worked. Interestingly enough, about 20% of the antibodies (to various human proteins) analyzed were not specific. The authors advocate that this should be the INITIAL validation, not the FINAL one, as in your case. There should be pressure on the companies peddling antibodies to do this BEFORE they put their products on the market. But people using antibodies should also take their own steps to ensure the specificity of the antibodies they bought. Being deceived by an unscrupulous vendor is not an excuse for data manipulation.
Just in case, the paper: J Proteomics. 2012 Apr 3;75(7):2236-51.
Reagents are constantly validated in industry and almost never validated in academia. Academic scientists are shocked when they hear about the variability that exists, from one order to the next, in reagents they take for granted as being the same.
Of all reagents that scientists rely upon, antibodies are notoriously poorly validated by manufacturers (so are small molecule drugs, but that is a different discussion). The reason you don’t see full gels in papers is because there are usually dozens of background bands. Ask the researcher how they know which band is the one of interest and most, when being 100% honest, have to admit they go by approximating size based on mobility. Validating reagents is hard, costly, and takes time. Without it, though, you have to ask if you should bother even doing the experiment.
@MM – If reagents were being constantly validated by industry, then we would not have so many antibodies that either do not work or work only when the protein is overexpressed.
@ Pymoladdict – microscopy is just as prone to spurious results from non-specificity as WBs are. Can the siRNA technique help discriminate whether the Ab in question recognizes only the post-translationally modified protein or the unmodified one too? People use microscopy to show these effects too …
It is true that non-specific interactions exist, and it will be technically impossible to eliminate them completely; increasing the stringency can only reduce them. Just as, with overexpression, you can get protein interactions even if they do not exist at physiological levels!
@WB Sorry, I wasn’t clear. I’m not talking about the ab manufacturing industry. I’m talking about those in industry who are end users just like you. FDA requires validation of reagents. Once you start doing this you understand why.
“We have had several instances where we do a final check of the antibody in a knockout mouse for the protein in question and the band is still present”
I agree. The situation is out of control.
In one recent paper, rather than conceding that their Ab is not specific, researchers from Johns Hopkins interpreted a band of the expected size, still present in a complete knockout model, as an unknown isoform of their gene.
The same Ab has since been used in papers published in Nature, Cell, etc.
This is another example of the failure of the peer review system to recognize problems with the inappropriate use of Abs and related assays.
Interesting. The antibody you bought from a commercial source doesn’t perform as advertised. Could be extremely frustrating, especially when you’re under pressure to finish the study and you think you already know what the results should be. That could lead to creative treatment of your Western Blot with Photoshop or a pair of scissors. Fascinating inside information about what a mess this supposedly tried and true method really is.
Who said it was true? Tried, yes, often without giving it a first, let alone a second, thought. But why do you call it inside information? Anyone who has ever used commercial antibodies in a Western, or an IP, is aware of the problem firsthand, or at least secondhand (from colleagues bitching about it). Have you ever bought anything that didn’t work as advertised? So what’s the difference?
Wait a minute, you mean that comment wasn’t factual? I mean, about not having the right antibody because you double-checked prior to using it and discovered it wasn’t functioning as advertised?
Are Western blots a problem (a mess) or not? Remember, you’re talking to the monkey here.
“Fascinating inside information about what a mess this supposedly tried and true method really is.”
This is precisely why they should retract or correct their article. You have the wrong idea about the technique and I don’t blame you. If I had never done the technique before, I would think the same thing as you after reading the article.
I disagree with your statement (in the LabTimes piece) that eliminating Western blotting from scientific practice is unwise and unrealistic. The fact that many a folk is used to this technique is no better a defense than any other retrograde rhetoric (take your pick). I am happy to see Northern blots disappearing and being replaced by more reliable, quantitative methods, and I’d like to see Western blotting follow its Northern cousin into oblivion.
The problems with this technique are not limited to the noted practice of skimping on controls and replicates. It is, by definition, low-complexity data (a band) that lends itself to fraud. Compare it to, say, raw mass spec data, and try to fake the latter with Photoshop.
Also, by definition, it is selectively presented data, and not just because of the cutting and pasting of tiny gel strips: Western blotting shows you what you ask for, the presence of an epitope of your choosing (via a given antibody), rather than reporting on what’s there.
In practice it is usually a blind technique: most people using it just buy commercial antibodies, trusting whatever the provider claims about their specificity, and never try to assess it independently. Why bother, since there is always Photoshop…
“I am happy to see Northern blots disappearing and being replaced by more reliable, quantitative methods”
Please tell me that you are NOT suggesting that qRT-PCR is more reliable than Northern blots. qRT-PCR is, in my opinion, the most readily abused technique in molecular biology.
Well, I wouldn’t necessarily call it abused, as in used to show something that isn’t there. It’s true that a lot of users don’t follow even the basic rules needed to get reliable data, but that goes for just about anything, anywhere. In our case we did quite a bit of validation, including producing target RNAs in vitro and using them to calibrate the PCR and control the reactions (by adding a known amount of target to the probe). So when it is done right, it is indeed more reliable and quantitative than a Northern. Some people I know use Invader assays: very nice, but expensive.
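For what it’s worth, the calibration described here (a standard curve built from in vitro-produced target RNA) boils down to a simple regression of Ct against log10 copy number, then inverting that fit for unknown samples. A minimal Python sketch; all dilution copy numbers and Ct values below are made-up illustration data, not from any real assay:

```python
# Sketch of absolute qPCR quantification against a standard curve built
# from a dilution series of in vitro-transcribed target RNA.
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct vs log10(copies): Ct = slope*log10(N) + intercept."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the curve to estimate input copy number for an unknown sample."""
    return 10 ** ((ct - intercept) / slope)

# Ten-fold dilution series of the synthetic target (copies -> measured Ct).
standards = [1e7, 1e6, 1e5, 1e4, 1e3]
cts = [14.1, 17.5, 20.8, 24.2, 27.6]

slope, intercept = fit_standard_curve(standards, cts)
print(copies_from_ct(22.5, slope, intercept))  # estimated copies for an unknown
```

A slope near -3.32 Ct per tenfold dilution corresponds to roughly 100% amplification efficiency; a slope far from that is itself a sign the assay needs more validation.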
@ Pymoladdict – if it is “used to show something that isn’t there” then IMO it does constitute abuse of the technique, and just as much of a fraud as manipulating a WB image. You acknowledge that “a lot of users don’t follow even the basic rules that are needed to get reliable data,” yet we are expected to rely on that data even though it is not ‘visible’. It is the same as people not following the basic rules for getting reliable information from a WB, or twisting the data to fit their story by manipulating the images. So why single out this technique, or for that matter the misuse of any other?
Agree with Dave… how many investigators actually verify the product of qRT primers by sequencing, to make sure that their primers are amplifying the correct sequence? How many even look at the dissociation curve, let alone run the product on a gel to at least validate its size?
qRT is at best a relative technique, and even now people rely on Northern blots for absolute quantification. It is the relative ease of performing qRT, without having to worry about RNA degradation, that has led to the Northern blot being replaced by it.
I agree that qRT is “at best” a relative technique, but in my opinion that’s a pretty good level of “at best” for many, if not most, biomedical applications. But how many investigators verify the product of qRT primers by sequencing? Well, I always do, every time; I don’t even depend upon the restriction digest pattern, although we derive that too. We also derive a dissociation curve not simply once for each primer pair but *every time* we run the qRT. Even for the most reliable, verified primer pair, I’d estimate that 5-10% of the time some non-specific product (i.e., garbage) from the PCR comes along for the ride, especially for low-copy-number transcripts. That’s a good reason for targeting amplicons > 100 bp, so that on a gel the distinction between product and garbage is readily apparent to even the most uncritical eye… assuming, of course, that the gel image isn’t “corrected” by the modern miracle of digital cropping.
To this point I had assumed, obviously naively and perhaps stupidly, that everyone using qRT verified the amplified product identity and routinely derived dissociation curves. To not do so is beyond lazy – it pretty much guarantees that the analysis will be hopelessly compromised.
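Since “relative” comes up repeatedly here: relative qRT quantification usually means the Livak 2^-ΔΔCt method, which reports fold change against a control condition after normalizing to a reference gene, never an absolute copy number. A minimal Python sketch, with hypothetical Ct values:

```python
# Minimal sketch of relative qPCR quantification by the Livak 2^-ddCt
# method: the target Ct is normalized to a reference gene, then the
# treated condition is compared to the control. All Ct values below
# are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** -dd_ct  # assumes ~100% efficiency for both assays

# A transcript whose Ct drops by 2 cycles after treatment (reference
# gene unchanged) reports roughly a 4-fold induction:
print(fold_change(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```

Note the buried assumption of near-100% amplification efficiency for both the target and reference assays; the primer verification and dissociation-curve checks described above are what make that assumption defensible.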
Why would people perform all of that work when, with just a modest amount of additional effort and time, they could avoid creating garbage? Why bother with all of the effort just to cheat at the end? Why bother being a research scientist at all, as considerably more wealth can be generated by cheating in business or politics? Why Why Why? I suppose that’s the great, unanswerable question. How depressing.
Although qRT – if done properly – could supplement and support some of the information provided by WB regarding protein levels, there are many instances in which it would be irrelevant.
For example, how do you propose to quantify phospho-protein levels? I can see some people suggesting MS or LC-MS. However, that normally requires overexpressed proteins (non-physiological, in my opinion) and expensive, dedicated equipment. And then we can start discussing the resolution of modern MS instruments and how to reliably identify peptide fragments in cellular lysates.
qRT-PCR can be made to quantitate its target, and a Northern doesn’t _measure_ absolute quantities by itself. Do these “people” calibrate their Northerns with target RNAs? I figure that if they were that interested in rigorous analysis, they wouldn’t be doing Northerns in the first place.
@ farcofcolombia – there is never anything wrong or stupid about doing experiments in the most rigorous and controlled way, and I too rely on qRT for my work. But like any other technique it has its limitations in the kind of information you can get out of it, and, like any other technique, it is just as vulnerable to manipulation by unscrupulous individuals. I too thought that people do all the checks, but after using some published primers I found that they hadn’t. These are the kinds of things that cannot be caught by reading a paper, and that is why these techniques are not the ones blamed for ‘bringing down the papers’.
Why do people resort to manipulation? For publications. Why? Grad students need them to graduate, postdocs need them to find placement, junior profs need them for tenure and funding, senior profs need them for funding and for reputations built on the number of publications… Everyone from grad students to department chairs has been found guilty of such manipulations. Either they never had the right reason for picking a research career, or they forgot it along the way.
To MM: you aren’t from industry, are you? That seems a bit too sweeping a generalization, in regard to both academia and industry. Ever tried running commercial preps of enzymes for PCR (e.g. Taq or Pfu) on an SDS-PAGE? I send a lot of people a lot of enzymes (not commercially available), and I’d be ashamed to hand out pro bono the kind of crap some companies charge money for. How’s that for validation?
Whether we like it or not, a paper without a blot is harder to get into any molecular biology journal; this is personal experience. Most of our reviewers are used to looking at bands and blots rather than any other means of qualitative assessment. It is a shame if the above suspicions are true. Cost is a real issue when it comes to repeating the same Western to confirm or validate it.
Can we trust flow cytometry where axes can be manipulated and delineation lines drawn to maximize an effect?
Can we trust bar graphs where data points can be “thrown out” or results trimmed to better significance?
Can we trust IHC when there’s no internal positive or negative control on that piece of tissue?
Can we trust microarray experiments when the significance cutoff is often arbitrary?
Can we trust mass spectrometry when all the peptides are not available for all to view?
Every method has issues. It’s up to the scientist to make the controls convincing and the result clear and reproducible. Additionally, anybody who trusts an antibody company without verifying the abs is a bad scientist. Like the NRA says, “it’s not Western blots that lead to fraud, it’s the scientists performing Western blots that lead to fraud.”
Thank god someone else on here has some sense. I completely agree.
@Physician scientist: most of the techniques you mentioned are modern-day techniques. Since when did we start to doubt the techniques? Since when did the frequency of scientific misconduct (data manipulation) start to increase? Since when did we notice that experiments are difficult to duplicate or reproduce? Does anyone have data showing that irregularities and data manipulation were significantly less common before molecular biology (or before the above techniques were first used by researchers)? Most of the earlier experiments used classical qualitative methods, which scientists could repeat and reproduce (at least in the majority of cases). Such an analysis would be useful before any conclusions are drawn.
Anyone here talking about mass spec ever heard of MIAPE? Of repositories for mass spec data? Give me an example of an industry-wide standard (like MIAPE or MAQC) for the reporting of Western blots. You can’t. A repository for Western blot data, whatever that might be? Also can’t. Before demanding from Ivan a comparison of Westerns to modern -omics, get with the program a little. Some sense is not a substitute for knowledge.
I agree completely: all methods can be faked. WBs are very useful if done properly. They are relatively cheap and you can probe for post-translational modifications on the same gel. Actually, an expert can fairly easily detect when loading controls have been manipulated (e.g. when the total and phospho bands have different shapes). Those who say that Western blots should be banned most probably have never done one in their lives.
Antibody specificity is a problem, but there are a lot of very good, well-validated antibodies for the major signalling pathways. Of course you need to know which antibody to buy, but this comes with experience and trial and error. Problems do occur when you are probing for less established proteins. My 2 cents.
Actually, mon ami, some concerned citizens have been asking the same questions, and not in a purely rhetorical way. That’s why there are guidelines for reporting mass spectrometry data, MIAPE, which are constantly updated and revised. Do they make mass spec data idiot- and fraud-proof? Not yet. But in far less time than Westerns have been around, proteomics mass spectrometry has come much further than Western users, who still can’t even begin thinking about the caveats of their favorite method.
BTW, microarray reporting guidelines, such as MAQC, are also in development.
Both methods produce high-complexity raw data that can be re-searched and evaluated by a number of computational tools. What’s the raw data for a Western? A piece of film or, very rarely, a file from a phosphorimager.
So instead of physical evidence (a piece of film), you prefer a huge amount of raw data (a computer file) that needs specialised, computer-assisted manipulation to make sense of. On top of that, these manipulations are often done by people who have no clue what they are doing.
To make matters worse, a fraudster could manually alter a data set. This is very difficult and time-consuming to detect unless you know what to look for. Too far-fetched? Well, read about the Anil Potti saga on this blog.
Interesting theme, but I think to blame the technique rather than the human using it is a fundamental mistake; a bad workman blames his tools. All scientific techniques, including MS and ‘omics’ technologies, are prone to issues of imperfect sensitivity and specificity, and it is the job of professional scientists to deal with this and present reality rather than a misleading picture that hides the real issues.
This is much more likely to be an issue of ascertainment. The reason certain scientific techniques are now under the spotlight (e.g. histology, also prominent in recent Retraction Watch cases) is that with complex images it is relatively easy to see the artefacts that indicate fraud, and (more importantly) very difficult for journals and institutions to turn a blind eye once these are pointed out. I suggest that Western blot fraud cases are coming to light not because the technique is intrinsically flawed but because fraud in it is much harder to hide. From what I have read, most fraud cases involving this technique were spotted by independent readers able to point out the problem with relatively little risk to their own careers. This is in strong contrast to other fraud cases, 90% of which are triggered by internal whistleblowers, all of whom suffer the consequences of creating trouble within their institutions. And remember that what we see is just the tip of the iceberg…
I am pretty sure that humans can subvert ANY technique in order to produce misleading conclusions. It’s just that often we can’t prove it, because we can’t see the raw data. I suggest Western blots have, if anything, been made MORE reliable by the recent issues, because those considering engaging in image manipulation now know the consequences very well. We have to be careful to make sure we don’t get this all back to front.
Just missed the post by Physician Scientist as I was writing mine – I agree 100%.
Data trimming / suppression are very easy to get away with as they are undetectable; only insiders know they have taken place. And even when such cases are brought to light, it is intrinsically much harder to prove fraud compared to the case where journals and institutions are faced with a gel image that has clearly been flipped / duplicated / twisted.
You have got to be kidding me!!
Adam and Marcus, I am honestly amazed by the tone that you used in your article and especially by your overly-sensationalized title. It’s ridiculous. The article comes across as though it was written in precisely 30 seconds. You got way, way, way too excited with this one in my opinion. The topic is valid, yes, but to essentially trash a technique that is the cornerstone of many research labs is just wholly unjustified.
Why in the world did you not present the other side of the coin more equally? Many of us have defended WBs on here and I didn’t notice any of those quotes in your article. What about other techniques? qRT-PCR is probably abused much more frequently than WB, but that gets a pass because there are no film images? No easy way to detect it. What about IHC or Northern blots? What about RNase protection assays for that matter?
Your article is unbalanced, grossly unfair and, with the greatest of respect, very naive.
Now, the technique is the gold-standard for SEMI-quantification of a target protein in a homogeneous sample, period (I don’t think you said that in your article, either). Can it be abused? Yes. Do people abuse it? Yes. Can other techniques be abused just as easily and without detection? Yes. Does that happen too? Yes. Will your suggestions solve the “problem”? No. Here is why:
1) Do you think asking people to show the whole film, ladder lanes etc will stop it? I can produce a beautiful WB of a single band and make it any size I want. I can manipulate the ladder, change the load, anything. I can even fabricate controls if I want to. If you know what you are doing and are significantly motivated, your suggestions will not change a thing.
2) If my film happens to show some lane-specific background, but the highlighted band is correct, is that OK? Or do I have to have a completely clean blot with zero background for it to be acceptable?
3) If so, do you have any idea how many antibodies produce such clean bands? Very, very few is the answer. I can name one supplier – Cell Signaling Technology – that produces antibodies of this quality consistently. Apart from them, it is pure luck whether you find a good antibody or not.
4) Will you require the massive antibody industry to better validate their products and refund the purchase of poorly performing antibodies? Good luck with that.
5) What is an acceptable control to you? Do I have to use in vitro TnT to produce recombinant protein and run that alongside every time? Do I have to use KO tissue? What if there is no KO? Do I have to do siRNA in cells? What if I am a clinical researcher with no cell culture facilities?
6) No journal will ever uphold any of your suggestions. There is no way Nature upholds the “standards” it sets for itself. Don’t believe me? Go and look at a recent Nature paper with a WB (drawn-in size markers don’t count, remember). You should probably have pointed that out in your article, by the way.
7) Have you ever reviewed a paper and questioned the WB data? Well I have and believe me when I say that editors have NO interest in complaints of a purely technical nature. If they like the paper, it will get accepted regardless.
8) Ah, one last one and this is a biggie. How do you propose we measure specific protein levels instead?
I think you should seriously consider retracting your article and writing a more balanced one in its place. I have no problem with the debate, as it is an important one, but let’s give the WB a chance to defend itself.
I agree 100%. I think this is an article for retraction watch :).
As for point 7) It has happened to me too
I don’t agree that the original paper should be retracted – I think it was purposefully provocative. But I think Dave et al. raise some excellent points. (And I HATE Westerns!!)
I think at the very least Adam and Ivan should publish a more balanced and fair assessment of the technique immediately. This should include comparisons to all similar techniques, including modern “-omics” approaches. If they cannot do that, then they should not have written the piece to begin with. They should also clearly state that they have no firsthand experience of the technique themselves, as this is very important for the reader when deciding how much weight to give the article (which is none, in my case). It is the equivalent of me writing a piece about poor journalism, which this is, when I have no experience to draw on. It makes no sense and is misleading.
Make no mistake, my preference is for them to retract it immediately and REPLACE it with a better piece. They should also formally apologize for doing such a crappy job, shamelessly promoting their website and stirring up needless and confusing rhetoric.
Thanks for the spirited discussion on our LabTimes column, which is just what we had in mind when we said we looked forward to Retraction Watch readers’ thoughts.
We’d like to respond to a few specific criticisms:
1) It’s not clear to us where we “trash” the technique. We were raising questions. That’s what the scientific method is about. Solid techniques will prove themselves. In fact, we write of Westerns:
2) A detailed comparison to every alternative would certainly be useful, and we’re happy to see that discussion continuing here, but we chose to write about Westerns because they have come up so often on Retraction Watch. Requiring that every criticism of a particular technique mention every criticism of every other technique is really only useful as a way to forestall discussion.
3) There is a suggestion that we are letting the workman blame his tools. But we’re not. Here’s how the column ends:
4) The idea that our solutions are imperfect is a valid one, but it hardly seems like a reason not to push for them. If journals aren’t living up to their own requirements, then let’s hold their feet to the fire. We do that every day at Retraction Watch.
Dave asks whether we have any experience with Western blots. As a matter of fact, yes: Ivan ran them in labs starting the summer he went to college. But we don’t agree that it’s necessary to have performed a technique in order to report on it and gather ideas and facts from our readers with far more experience, which after all is what this column did. Scientists should welcome criticism so they can do their jobs better.
We look forward to continued feedback.
@ Ivan – #4 “If journals aren’t living up to their own requirements, then let’s hold their feet to the fire.” Well, that holds true for the other papers they publish, too. So why do we bother discussing retractions and other issues with published papers and journal policies? Why don’t we just let them do what they want and ‘hold their feet to the fire’?
@Dave. I’m still looking for the part where they trash the technique. I’ve read all your posts and agree with you overall about your comments about the technique. I actually think your points and the point of the article overlap.
“They should also formally apologize for doing such a crappy job, shamelessly promoting their website and stirring up needless and confusing rhetoric.”
Have you read the Begley and Ellis piece “Drug development: Raise standards for preclinical cancer research” (Nature Volume:483,Pages: 531–533)? I’d hardly call any discussion about this issue needless.
I’m with Dave. There’s no balance at all to this article. Western blots are actually among the hardest things to fake because (absent image manipulation or nonlinear enhancement) it’s one of the few techniques where you are looking at actual data. This is why the retraction rate is higher.
You are not seeing retractions in microscopy (which is processed through a computer – the “colors” aren’t real, people – and which shows “representative cells”) because this is much harder to pick up. The same is true with any graph: you are not seeing the actual data, you are seeing the scientist’s rendering of the data. How do you prove the qRT-PCR is bad based on a graph? How do you prove the luciferase numbers haven’t been manipulated to show a greater effect? This is why Western blot papers seem to be retracted more: you are looking at, hopefully, the actual film.
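The claim that Western manipulation is “readily caught” can be made concrete. One simple approach image sleuths use is to look for two regions of a blot that match almost perfectly, something scanner noise makes essentially impossible by chance. A toy sketch in Python with NumPy (synthetic arrays standing in for a real blot; the 0.999 threshold is an arbitrary illustration):

```python
# Sketch of flagging a duplicated band: slide one patch of a "gel"
# over the image and report offsets with near-perfect normalized
# correlation. The gel here is a synthetic array, not a real blot.
import numpy as np

def normalized_corr(a, b):
    """Pearson-style correlation between two equal-shaped patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def find_duplicates(image, patch, threshold=0.999):
    """Return (row, col) offsets where `patch` matches `image` almost exactly."""
    ph, pw = patch.shape
    hits = []
    for i in range(image.shape[0] - ph + 1):
        for j in range(image.shape[1] - pw + 1):
            if normalized_corr(image[i:i+ph, j:j+pw], patch) >= threshold:
                hits.append((i, j))
    return hits

rng = np.random.default_rng(0)
gel = rng.normal(size=(40, 80))      # noisy background
band = gel[5:10, 10:30].copy()       # a "band" of interest
gel[25:30, 50:70] = band             # paste the same band elsewhere
print(find_duplicates(gel, band))    # both copies show up
```

Real forensic tools are far more sophisticated (tolerating rescaling, rotation, and contrast changes), but even this brute-force scan flags an exact copy-paste immediately.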
I’d like to see Adam and Marcus address this point in their correction to the above article.
Thanks Dave, Physician scientist, AMW for clarifying some of the misconceptions that the article has created.
The reason that western blot data has brought down several papers is because when you try to manipulate it, it can be readily caught while when you manipulate the bar/ line graphs they cannot be caught unless some one tries to reproduce the work .. and that explains why when a drug company decided to validate the targets reported by several researchers they were unable to do so.
The issues associated with non-specific binding of the antibody affects not only the western blot data but every technique that relies on antibodies for detection – IP, IHC, ELISA, FACS, IF, ChIP to name some … in western it is easier to detect the non-specific bands due to separation on the basis of molecular weight but for the other techniques esp for ChIP which is being so routinely used to define the association of different regions of DNA with certain proteins or epigenetic modifications, the non specific component becomes difficult to separate from the specific part.
The users are (mostly) aware of the right way to do the things but if they do not follow the right way then, they are the ones who would do so with any technique. Loading control should come from the same gel, the treated and untreated gps should be run on the same blot so they are processed and exposed exactly the same way.
Science functions on the basis of trust and every researcher cannot be and should not be expected to validate every single antibody, every batch of antibody (polyclonals have batch variability) before using it for their experiments. It is not possible for every researcher to have the KO lines for every single protein they look at, as suggested by one using siRNA technique again relies on use of WB to validate the efficiency of KD, qRT is not the way out as several aspects do not depend upon transcription (e.g. post translational modifications) and in case of siRNA in several instances decrease in transcripts is not observed, non-specific targeting by siRNA still remains a concern. Yes it is true that there are companies whose antibodies cannot be trusted and users are forced to go through a series before getting the right one or else rely on information published data for the antibodies that are likely to work.
The problem is not with the technique. It is the users who try to abuse it, and they would do the same with any technique; WB is just the most visible case.
“Science functions on the basis of trust and every researcher cannot be and should not be expected to validate every single antibody, every batch of antibody (polyclonals have batch variability) before using it for their experiments.”
Yes they can be expected to and they should be expected to validate every batch of antibody. Your reason why they shouldn’t be expected to do so is because it is hard? Your answer is that we should “trust” that the antibodies are doing what the researcher says they are doing? Well, science is hard. Trust but verify.
As for the comment on Nature guidelines in the Lab times article, please pick up a couple of Nature issues and look at the figures in the main paper as well as the supplemental, and then make a comment on what Nature preaches in instructions to authors and what it practices.
How many articles does one find that actually follow what Nature asks of its authors: "positive and negative controls, as well as molecular size markers" on each gel and blot, "either in the main figure or an expanded data supplementary figure", and supplementary information that includes "full-length gels and blots wherever possible"?
“It’s a poor workman who blames his tools.” Monkey understands that. In fact, there’s Protocol Monkey if one is unsure of the next step in the procedure.
Western blots don’t bring down papers, people bring down papers. Most (but not all) methods can be manipulated if the experimenter wants to. Western blots work if the researcher is genuinely interested in falsifying his/her hypothesis; they are malleable if the researcher isn’t. But that is true of so many methods. Perhaps a western blot is more able to allow researchers to keep themselves in denial – but only if there is a pre-existing willingness to be in denial.
Agree with the comments about commercial antibodies. I have had multiple companies offer antibodies for a number of proteins, none of which worked. In the end I had to make some stable transfectants of my target protein just to convince myself the antibodies were duds.
It wasn’t even the beautiful, strong single band on the company website product page that particularly annoyed me; it was the “anonymous customer” feedback displayed underneath – great, this works just fine, thanks.
Did anybody get a densitometry scan value for the Western blot protein band that corresponds to the band observed with the naked eye?
I very much appreciate the update from Ivan. There is a lot of food for thought here, and I like the comparison of the Western Blot to the NRA slogan… Westerns don’t cause cheaters, cheaters use them (and many other techniques) to perform shady operations. I am lucky that I no longer need to publish to keep my job… but that is not the case for many junior scientists, and they feel the pull to cheat to keep their jobs or to advance in their careers. If it weren’t Westerns, it could be anything else. Of course, as pointed out elsewhere, we’ve also seen fraud in the labs of more senior folks. These scientists don’t need to publish to maintain their livelihoods – they choose to publish because being famous is more important than finding out the ‘truth’ (whatever that may be.) I hate seeing Retraction Watch piece after piece, and knowing it is fraud undone by poor manipulation of Westerns, but there are so many ways to cheat, it is hard to think of a ‘built in check’ for every one.
For all, especially Pymoladdict, Dave, and WB: how about a step by step alternative to the Western Blot?
Pymoladdict, at least, I think, is saying that we can do better with alternative techniques… so, tell me how, in general, you get the same (or better) information from other, newer, procedures? It sounds to me as if there is a road to replace the Western Blot but it is not as clearly defined as I would like (I mean, from the monkey (and the littlegreyrabbit) point of view.)
It is becoming clear to me that fooling with a WB is easier to detect than messing around with other techniques, which is a good thing; but, to be honest, it’s a messy technique, isn’t it? I mean, you put one antibody on top of another, and cross your fingers (or spend a lot of time double checking, then reordering from another company or making it yourself) that the third antibody performs as well as the first.
A “simple example” would be when you do a synthesis; every step, even if it’s 95% yield, cuts into the final yield some more, and after the fifth step, you’re down to a 5% yield. At Protocol Monkey, I counted three separate, consecutive antibodies that have to be used to obtain the chemiluminescence (or autoradiography) picture. Don’t all those steps cut into the final yield significantly?
Just skip over that part if it sounds stupid or ignorant to you… just stick to the question (there are no stupid questions, right?) of what you can do, specifically, that will work better than WB.
77% overall yield for a 5-step synthesis with 95% yield per step. But this is actually a very good example. In modern synthetic chemistry, reported yields are, in general, not reliable. “Organic Synthesis” is probably the only consistently trustworthy source of synthetic procedures as far as the yield is concerned. Synthetic chemists in the 1950’s reported actual yields, so their procedures are reproducible. The current culture, nonsensically, equates high yields with good synthetic skills, so even if nature does not cooperate, people pull yields out of their behinds. Why? Because if they do not, they do not appear to match those who do. The same applies to forged Western blots. In old days people collected data to convince THEMSELVES that their hypothesis was sound. Now many people collect and “produce” data to convince OTHERS that their research is publishable. Having faith in one’s own research is optional.
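The overall-yield arithmetic in the comment above is easy to check: per-step yields multiply, they don't subtract. A minimal sketch (the function name is my own, just for illustration):

```python
# Overall yield of a multi-step synthesis: per-step fractional yields multiply.
def overall_yield(step_yield: float, n_steps: int) -> float:
    return step_yield ** n_steps

# Five steps at 95% each leave about 77% overall, not 5%.
print(round(overall_yield(0.95, 5) * 100, 1))  # prints 77.4
```

This is why the earlier "down to 5%" guess was off: yield at 95% per step decays geometrically, not linearly.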
You can tell I got a B in organic chemistry.
You make some valid points. As a frequent WB user I will try to give you my perspective.
1) You need two antibodies to detect a signal (Primary against your protein, e.g. an antibody against ERK1/2 raised in rabbit) and secondary against the primary (in this instance an antibody that recognises only rabbit proteins, conjugated to HRP).
This gets more complicated if you are using antibodies against phospho proteins. However, you probe first for the phospho protein, then strip (use a solution to get rid of the first antibody) and reprobe (use a different antibody – on the same “clean” membrane).
The secondary antibodies are rarely the problem, since they are standard and used by thousands of labs.
2) Now regarding the primary antibodies.
With western blot you can detect protein levels and, more importantly, post-translational modifications (e.g. phosphorylation, but also nitrosylation and ubiquitination) with various degrees of confidence (depending on the target protein). For example, phospho/total-ERK1/2, phospho/total-AKT and phospho/total-S6 can be detected with primary antibodies (from a specific company – I won’t tell here) that have been used by thousands of independent labs, with almost 100% confidence in the specificity of the Ab (in cells and tissues).
On the other side of the spectrum, commercial TRPC3 antibodies are notoriously unreliable (I would personally give them 0%); however, studies using them are published nearly every other day.
With experience accumulated over years, a lab performing WBs routinely knows which Abs to trust and how to determine which new antibodies are most probably reliable.
3) Now to your not-stupid-at-all question. I think in order to reply properly I need to explain what the advantages of WB are and why it is abused so often, always in my humble opinion.
Very briefly, in comparison to alternatives (such as MS), WB does not need very specialised/expensive equipment, it is fast and relatively cheap to perform (it takes 2 days to get your result, including loading controls, for around $40), and if an expert is teaching you and you are fairly good with your hands, it will take an undergrad student 2-3 weeks to do it properly (it takes longer to learn how to interpret your results and to include the proper controls). If the primary antibody is decent, you can also use cells that do not overexpress your target protein (a much more physiologically relevant condition). Also, very importantly, WB has its own internal control built in (loading control, and total protein in the case of post-translational modifications). Finally, you have a physical medium (film) that you need to physically store for a minimum period of time post publication (where I work, for 10 years) and be able to produce if requested (e.g. by concerned editors).
For all these reasons WB is one of the most widely available techniques in life science labs and the method of choice for SEMI-QUANTITATIVE determination of proteins or post-translational modifications. In my opinion, because of that, simple statistics dictate that it will also be one of the most abused techniques. As someone already mentioned above, because in any decent publication you need to report your loading control(s), unlike techniques where raw data are not reported or are so massive that validation would require a huge effort, any reader with some experience can detect “manipulations” and raise questions. Of course, “clever” fraudsters can always cover their tracks, but in my opinion that requires effort equal to doing the experiment properly. And if they are reporting completely made-up results, then eventually they will be caught.
Alternatives such as MS have many advantages, but they require very specialised and expensive equipment and months or years of training, are very laborious, and take forever to produce a result that, in my opinion, can be equally or even more dubious (because of the complexity of the instrumentation) than a poorly performed WB. Also, because you end up with massive data sets that are laborious to validate independently and require complex, computer-assisted statistical “manipulations” in order to make any kind of sense, in my opinion they are even more prone to abuse (see the Anil Potti saga with his massive microarray data).
Now, is there a solution to make western blots more reliable? Unfortunately, in my opinion there is no magic bullet. Attaching whole gels in the supplement could make faking more difficult, but again, a determined fraudster can do it. Every step of the scientific process (teaching, designing experiments, performing experiments, reporting said experiments) relies on trust between a number of participants (PhD students, postdocs, supervisors, editors, reviewers). In my opinion, we either need to slow down the process considerably (and also increase the cost) or rely on science itself to self-correct by exposing fraud through failure to replicate.
Apologies for the wall of text
I think this reply is very clear, thank you for writing it. I am interested by the point you made, that you keep film for 10 years… we have been taking electronic pictures of our blots for several years now. There is no ‘film’ anymore in our lab – everything is electronic. This also means that between the time the student collects the images and it reaches me, there could be any number of electronic manipulations. This is one reason I hate westerns… it is not that I don’t trust my colleagues or my students, I do. But I am aware of the pressure they are under, and what that can do to some people. I have found that there needs to be even MORE oversight with advanced technology…
Thank you for the long and revealing answer. Monkey is better informed now.
Well trained, but still a monkey.
Yeah, I don’t have much more to add to this; I think WB did a great job with his/her reply.
To respond more directly to Monkey (lol!), it is a shame that you have the impression that it is a messy and unreliable technique. It just is not. However, it is a technique that wears its heart on its sleeve and puts itself out there like few modern techniques do these days. You have to admire that about the Western blot.
It will stand up to this test as it has to every other attempt to dethrone it.
I’m sorry, I didn’t mean to imply that I thought WB is unreliable. I’ve been depending on Western Blot HIV antibodies for, uh, 25 yr.
It IS a bit messy, I mean, there’s water and chemicals involved…
signed, the Monkey
I think the doubts that you had regarding the technique have been very well answered. Just to clarify the comparison between chemical synthesis yield and WB: in chemical synthesis your total (final) yield depends upon each reaction, and therefore it decreases with the number of steps you have. In WB and other antibody-based techniques, a two-step process actually gives better “yield” than a single step. This is called signal amplification: a single antigen is bound by multiple antibodies, and each bound antibody molecule is in turn bound by multiple antibodies (the primary antibody being the antigen now), each of them carrying the enzyme tag. For some applications people do use antibodies that carry the tag directly on the primary antibody, but if you have a common secondary antibody then there is no need to tag each primary antibody.
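The contrast with synthesis yield can be sketched in a few lines: in indirect detection the binding ratios multiply the signal up rather than down. The specific numbers below are purely illustrative assumptions, not measured values:

```python
# Hypothetical illustration of indirect-detection signal amplification:
# each antigen is bound by several primary antibodies, and each primary
# is in turn bound by several enzyme-tagged secondary antibodies.
def tags_per_antigen(primaries_per_antigen: int, secondaries_per_primary: int) -> int:
    """Idealized count of enzyme tags reporting on one antigen molecule."""
    return primaries_per_antigen * secondaries_per_primary

# Direct detection (tagged primary only): 1 tag per antigen.
# Indirect detection with assumed ratios of 2 and 5: 10 tags per antigen.
print(tags_per_antigen(2, 5))  # prints 10
```

So, unlike the multi-step synthesis where each 95% step erodes the product, each antibody layer here (ideally) multiplies the reportable signal.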
@Ivan: Since you have not responded to my request to (at least) publish an erratum to your piece, perhaps you would be kind enough to suggest to the editors of LabTimes that they publish a counter to your article? I would be happy to write it in collaboration with anyone else on here who is interested.
Some points on your response:
1) “It’s not clear to us where we “trash” the technique. We were raising questions. That’s what the scientific method is about. Solid techniques will prove themselves. In fact, we write of Westerns:”
I would say the title itself makes it clear where you stand on the point and your article does not, despite your assertions to the contrary, present a balanced argument in any way shape or form. It is incredibly negative and one-sided and full of loaded statements, such as:
“Pictures of the gels that result might be highly valuable for researchers trying to demonstrate an effect but they also seem to be the method of choice for image manipulators trying to make their results look better than they really are.”
Method of choice? You say nothing regarding the fact that WB fraud, by nature, is much more likely to be detected over many other similar techniques.
And, my favorites:
“a Retraction Watch reader suggested that maybe the blots themselves are the punch lines: “Western blots have become a joke in published papers due to the failure of researchers to perform proper controls/ replicates and the failure of reviewers to demand this data”
“Our commenters, including the one who thought they had become a laughing stock of the lab”
Where are the quotes from the posters who defended the technique, just like they are doing here? Western blots are not a “punch line” in my lab and they are used every single day of the week. You can’t just ignore the other side of the story.
2) “A detailed comparison to every alternative would certainly be useful, and we’re happy to see that discussion continuing here, but we chose to write about Westerns because they have come up so often on Retraction Watch. Requiring that every criticism of a particular technique mention every criticism of every other technique is really only useful as a way to forestall discussion.”
But you didn’t even approach the topic of WHY they have come up so often. It was a major mistake on your end. It is not because they are more or less “trustworthy” than other techniques, but rather that fraud is easier to detect because of the nature of the technique and the way the data is presented. I feel like a broken record.
3) “The idea that our solutions are imperfect is a valid one, but it hardly seems like a reason not to push for them. If journals aren’t living up to their own requirements, then let’s hold their feet to the fire. We do that every day at Retraction Watch.”
Again, you didn’t even do the right research and highlight that the journals are NOT doing what they say they are. That should have been part of the story, and you would only need to look at this week’s Nature to know that. Everyone who reads this article will be thinking that Nature is awesome in its quest to protect the WB and that your solutions are practical. That is just not true, and it should be clarified.
4) “Ivan ran them in labs starting the summer he went to college. But we don’t agree that it’s necessary to have performed a technique in order to report on it”
So neither of you really has day-to-day experience running the technique. Got it. I agree that you don’t need to be an expert to REPORT on it, but you didn’t REPORT on it, you presented a one-sided, negative and incredibly dogmatic opinion piece.
So, step up to the plate and publish a correction, or let some of us publish the correction ourselves in LabTimes. Make it happen.
Dave, thanks for your continued comments. We did respond to your request and other messages, as you make clear by quoting our comments.
Where we seem to differ is on whether there are factual errors in the piece that require a correction, let alone a retraction. You’ve made your case for what you would have included in the piece, and where you disagreed with it, but those aren’t grounds for either an erratum or a withdrawal.
I’m sure the editors of LabTimes would be pleased to have a letter to the editor from you — http://www.labtimes.org/labtimes/contact/index.html — and we’d welcome more discussion of these issues in their pages.
Fair enough, I have made my point. I just think you should have exercised much more caution with your article and I am disappointed that you chose to go this route. I hope at the very least that you guys will do your homework a little more diligently next time you decide to write such an article.
An email has been sent to LabTimes.
Don’t you have some students to mentor?
I’m sorry, did you have something to add to the debate?
Not so much the debate, but your overreaction and absurd request for an erratum or a correction. From:
http://www.labtimes.org/labtimes/about/index.html
“Readers appreciate its magazine style and high-level journalism which is marked by independent and investigative reporting, profound and critical analyses. Written in a lively and entertaining language, it possesses a human touch that is rounded off with a good deal of humour.”
NOTE THE FINAL SENTENCE!
The piece they wrote was perfectly appropriate for the context (lively and entertaining language) of the journal, and you had a meltdown, demanding an erratum or a correction. The tone of your response was completely unwarranted.
@NMH – “High-level journalism” does not entail verbatim reproduction, which is what was done: if you read the part about the high moral ground adopted by Nature, it falls under the category of propagating a false pretense by Nature. “Independent and investigative reporting” does not mean basing the article on one uninformed blog comment. Where was any investigation before putting together that article? I did not find it, either in terms of discussing the technique or the practices followed by the journals.
This is not humor; it is completely misleading, especially for people who have little knowledge of the technique. It comes across as if the technique in itself has issues that make it more prone to manipulation, when the opposite is true, i.e. manipulations with it are easier to catch than with other techniques.
Those who claim to hold moral high ground and criticize factually incorrect or manipulated publications should be held just as accountable when they do the same.
““a Retraction Watch reader suggested that maybe the blots themselves are the punch lines: “Western blots have become a joke in published papers due to the failure of researchers to perform proper controls/ replicates and the failure of reviewers to demand this data”
“Our commenters, including the one who thought they had become a laughing stock of the lab”
Where are the quotes from the posters who defended the technique, just like they are doing here?”
I’ve read the article a few times and am still looking for this condemnation of the technique. I think it reads exactly the opposite. In fact, the problem is clearly stated as being a result of:
…. the failure of researchers to perform proper controls/ replicates and the failure of reviewers to demand this data.
This seems to be the general agreement in these comments. Papers are being published in which the researchers clearly do not understand the limitations of the technique, and reviewers are letting them get away with publishing poorly done and over-interpreted data.
Wow. People get so touchy. It must be difficult being objective through all that emotion.
Very little is real. Every experimental setup has caveats. Show the same effect using two different experimental setups where possible and we will be a happy scientific community forever and ever.
….and Mc Gill, why bother?
I have to say that I’m much more on Dave’s side of this. The article is very one-sided and doesn’t go into the key reason western blots are more likely to be retracted – that is…you can see the data directly. No graph can do this. If one wants to be manipulative, I would think they would manipulate their graphs as there’s no way to detect this.
And please, a summer before college doing some westerns?!? What was this, like 25 years ago? I sold shoes the summer before college, but I’m not writing some sensationalistic piece trashing the latest line of Bruno Maglis. The fact that you would even bring this up shows how out of touch the authors of the piece are. It’s actually a bit insulting to most signaling labs to suggest a summer before college equates to a clear understanding of the reasons western blots can be manipulated.
In fact, from reading all the press this blog has received and the relative fame of the bloggers, one wonders if it has gone to their heads and they don’t even realize when they overstep their bounds.
@NMH – Ah, I see, it was a joke then, right? Perhaps Ivan will confirm this.
In any case, since you started it, NOTE THE FIRST SENTENCE (I’m using capitals also!). I believe it reads:
“Readers appreciate its magazine style and high-level journalism which is marked by independent and investigative reporting, profound and critical analyses”
I’m pretty sure the article failed miserably on all counts, particularly in the “investigative” and “critical analyses” sections. Funny or not, writing irresponsibly about a technique which many of us use daily and misleading readers about the usefulness of it is not hilarious to me. These “opinions” have implications for those of us publishing Western data. Sorry if that is an “over-reaction” to you, but if you have nothing else to add, please feel free to ignore this discussion.
I must say I think the article was very well made and is quite interesting where it is, and has added a lot in terms of discussing this theme. In fact RW was not the first source to question the validity of WBs nowadays because of Photoshop. This has been done for instance by Abnormal Science Blog readers.
Also, I personally feel engaged by the article, as I am fighting off some collaborators trying to seriously manipulate a WB in a paper on which I am a coauthor. I am sure the image has been tampered with, it is very scruffily done, and they still act as if no one would notice it and as if I am some idiot. Pathetic.
The article was neither well made nor interesting, and that is the reason why it is being discussed. The problem is not with the validity of WBs but with the people who think they can fudge them because Photoshop is available to them.
Consider your current problem with your collaborators: don’t you see how, because of the nature of the data, you can spot the manipulation in the WB? But with such collaborators, can you place trust in their other forms of data, presented as bar/line graphs or as selected sections of an image? Can you catch those unless you repeat the experiments yourself?
Unnecessarily, the blame is being passed on to the technique, which has in fact been the most helpful in exposing the people discussed on this and other blogs for their data manipulations. Catching these people and convincing the universities and journals of their frauds would have needed a lot more time and effort if not for these very blots.
Rafa-
The issue is not whether it is correct to manipulate an image. It is not and people who do so should be punished. The issue is whether a technique that every signaling lab in the world requires to study their pathways should be singled out for criticism without a balanced, two-sided report. This was not a balanced report.
Indeed. Yet I feel the article is quite effective exactly in raising this issue and enraging people who do WB in their daily routine (and also like doing it and want to go on with it), as it is a fact that nowadays it is so easy and trivial to fake WBs that the technique is no longer trusted by many experienced scientists.
And it is also a fact that the article touched me personally. I like the ensuing discussion, and I appreciate people condemning image manipulation. I wish my coauthors would read this and mend their ways. I want to publish my results in a clean, proud way.
What is a balanced report? Do we publish any balanced research article? Definition please.
If I read a newspaper article (not an editorial) of a Presidential Debate, I don’t expect to see a report of one candidate’s gaffes while seeing only the other candidate’s triumphs.
If I watch a news broadcast on CNN featuring a political argument, I expect to see both a liberal and a conservative commentator.
Give me a break, we all know what balanced is and any attempt to say otherwise is simply parsing words. These guys don’t even understand the scientific fields that require westerns for their research and don’t acknowledge the 99% of careful scientists that are doing their best to make sure the science is as correct as it can be. Quite simply, this is NOT a flawed technique…it is simply easier to detect those to try to perpetrate fraud when using this technique.
A balanced research article would be one where the data/facts support the author’s theory/claims. Not everyone, but good scientists, publish “balanced research articles”…
How about a paper on a particular gene? Usually we argue that the gene or protein I am working on is the one controlling the life of an organism, and all others are trivial. How do you judge this? There are thousands of genes, right? What about gene networks? “In our hands this works”… here it does not matter whether it is a good scientist or someone else, right?
Dave, Physician scientist and WB very eloquently pointed out what is wrong with the Lab Times article.
Unfortunately, Ivan takes the view “we stand by our report and there is nothing wrong with our conclusions” – I wonder where I heard that before.
Since, in my opinion, the views expressed in that article are very unbalanced and mostly based on the somewhat misguided opinion of an anonymous retraction watch poster (very sloppy and lazy journalism in my opinion), a correction is warranted presenting the other side of the story.
The main reason for this, as is evident from RW posts, is that non-scientists with a genuine interest in the scientific process, and professional scientists outside the life sciences, conclude from that article that WBs are somehow inherently wrong and the method of choice of fraudsters. Thus, the “logical” conclusion is that since the majority of studies in the life sciences use WBs in some shape or form, a big percentage of the scientific literature is somewhat “suspicious”.
This, in my opinion, is an extremely dangerous (and wrong) message to send and damages science in general equally to a fraudulent scientific paper.
I hope Ivan reconsiders the possibility of a correction or a follow-up article. In any case, the good work that RW is doing in defence of scientific integrity is somehow tarnished in my eyes.
Thanks for the hospitality.
In response to LNV.
We still have a dark room and a film developer (hence the films). I guess your lab has moved to a phosphorimager setup.
Maybe asking your staff to capture an image of the whole membrane before any cropping or re-orientation could help. These images would need to be stored for, let’s say, a few years, and each person who leaves your lab would be required to give you a CD with these images along with their lab books.
In your lab meetings or one-on-ones you could randomly ask to see the whole image once in a while. If you explain to your staff why you do this, I do not think they will feel “suspected”. Like you, I mostly see images, but every now and then I ask to see the whole film (especially if something feels wrong).
I hope that helps
Yes, WB Biochemist, that does help. And it is quite similar to the system I have in place with my students. Images are captured on a common computer (backed up to a server), so there is always a record of the initial blot (or gel, depending on the experiment). That computer does not have Photoshop or other imaging software, so any cutting, adjusting, etc. is done on individual computers. It is not foolproof – just sitting here, I have thought of a way someone could falsify a blot/gel and make it seem like it was collected that way. But that, in my opinion, is much less likely than someone purposefully running the wrong samples to get the result they want. I like what you say: trust, but verify. And in the one instance where I was quite certain we had a habitual liar in the lab (not about results, but about other things), I trusted nothing.
Modern Part 11 compliant software makes it impossible to modify files without creating a documented trail of all changes. Initial investment is not cheap, but IMO the payoffs down the road make it worth the up front cost.
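One idea behind such audit trails is easy to demonstrate: record a cryptographic hash of each raw image at capture time, so any later modification of the file is detectable. This is only an illustrative sketch, not a description of any particular Part 11 product, which would also maintain signed, timestamped change logs and access controls:

```python
# Sketch: fingerprint raw image bytes at capture time; any later edit to
# the file produces a different fingerprint, flagging the modification.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

raw = b"raw blot image bytes"          # stand-in for the captured image file
recorded = fingerprint(raw)            # stored in the audit log at capture
tampered = raw + b" brightness tweak"  # a later, undocumented edit
print(fingerprint(tampered) == recorded)  # prints False: edit is detectable
```

The same comparison, run routinely against the stored log, is what makes undocumented edits visible without anyone having to eyeball every image.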
Glad you found my comment helpful.
Just a minor point. I think it is useful to insist that your students/staff capture the whole membrane. Sometimes an overzealous student just focuses on the part of the gel where they “think” the primary band is, without any malicious intent. This can censor useful information, like non-specific bands, but also information about homologous proteins also recognised by the Ab. I think the story of how the Sabbatini lab discovered mTORC1 is very instructive.
Over and above the issues raised in the (IMHO nicely written) article by Ivan and Adam, there are some other important issues regarding western blotting, which are routinely ignored by authors.
1) The dynamic range of most traditional western blot methods (i.e., ECL plus film) is about 10 fold. In reality it is more like 5-7 fold. It is simply impossible to measure a greater than a 5-7 fold change in the level of a protein using such methods. Some of the newer quantitation methods such as phosphorimager or LiCor offer a slightly wider dynamic range, if used properly (which is frequently not the case).
2) Saturation and quantitation. If the center of the band is black, the blot is oversaturated. Quantitation cannot be performed on saturated blots. The plot of density down the length of a WB lane (densitometry plot) must yield Gaussian type peaks, not flattened at the top, in order to be linear.
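The flat-top criterion described above can be turned into a rough automated check: if several consecutive points of a lane's densitometry profile sit at or near the maximum density, the band is likely saturated and unfit for quantitation. The function, the sample profiles, and the thresholds below are all illustrative assumptions, not a validated tool:

```python
# Rough saturation check for a densitometry lane profile: a Gaussian-like
# peak has a single point near its maximum; a saturated band plateaus there.
def looks_saturated(profile, rel_tol=0.01, max_plateau=2):
    peak = max(profile)
    plateau = sum(1 for v in profile if v >= peak * (1 - rel_tol))
    return plateau > max_plateau

gaussian_like = [1, 4, 10, 22, 30, 22, 10, 4, 1]   # quantifiable
flat_topped   = [1, 8, 29, 30, 30, 30, 29, 8, 1]   # clipped at the top
print(looks_saturated(gaussian_like), looks_saturated(flat_topped))
# prints: False True
```

Note that the PowerPoint brightness trick described below would not fool this check: scaling every value down leaves the plateau in place, exactly as the reviewer points out.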
It constantly amazes me when reviewing manuscripts, to see totally black filled in bands, often several millimeters fat, which are then used for quantitation. In one case I alerted the authors to this problem, and their response was simply to dial back all the blots in powerpoint (using brightness/contrast), resulting in gray fat bands instead of black fat bands. Their bands would still be flat-topped in densitometry profiles, just not flat-topped at 100% on the y-axis. Despite my protests on a second round of review, the article got published anyway, indicating the editor didn’t really understand the problem either.
So, it’s not just that western blots are open to abuse, it’s that the method is often not practiced in a manner appreciative of its limitations.
Great post and I totally agree with all of your points. Your comment about saturation is very important. When I was a grad student, my mentor drilled this into me every day. Black is black is black. You cannot quantify the difference between black and black.
Now, often this is negligence rather than intentional fraud, but this raises a really important question I think – what is the difference between negligence and outright fraud when it comes to techniques? Seems easy to answer – intent is the key – but at the end of the day, does it really matter? Believe me, if you could see the raw qRT-PCR data in most manuscripts, you would be absolutely horrified by what you saw. Is it fraud? In most cases probably not. Does it matter if it is fraud or not in terms of retractions etc? I doubt it.
What a lot of us are saying here is that if you are going to write an article about Western blotting, these kinds of discussions should be given consideration. Not one person here denies that WB fraud occurs, or that it is sometimes intentional. Nobody is denying, either, that WB lends itself to easy fraud because of the nature of the data. What we are saying, however, is that it is no more or less “trustworthy” than any other technique out there. It is a very subtle but very, very important distinction to make, and it should have been made. Period.
Can we pin this post up to the door of every single laboratory in the world please?
I once had the job of listening to a host of student talks. I seem to recall one chap who was adamant that, because he had reproduced the same results as the lab’s new post-doc, his Western blot densitometry work was correct. All his Western blot bands were saturated, and when the post-doc was challenged it was quite clear he had no idea about basic densitometry.
All too common I’m afraid.
@MM – I did actually read that article and I know where you are going with it. Of course pre-clinical research needs to be performed more rigorously, that’s a given, but it is not as simple as people messing up or faking a few WBs here and there. There are many more reasons why lab findings do not translate to the clinic and, coming from someone who sits right between the two fields, I would argue that mistakes in the lab are clearly not helping, but are not the main culprit. What are some of the other issues?
1) Journals – too willing to publish preliminary or incomplete stories
2) Massive publication bias, usually fueled by the big journals, overzealous editorials, comment articles etc
3) Big pharma moving too quickly based on scant, but trendy research findings. They should wait for more robust replication. Often this takes years and their bottom line does not like that.
4) Huge differences between model systems and limitations in human studies.
I could go on and on. In sum, poor lab standards definitely play a role, but there are so many other factors that come into play. I thought Nature at least did a decent job of covering all the angles. I would have liked to see an article from some basic lab people, but nevertheless it was an interesting set of papers.
Dave, I’m curious about your comment #4. I just arrived home from my first AACR national meeting, where the only thing more prevalent than sloppy analytics and academic wishful thinking was the mandated pre-talk self-flagellation required of anyone with industry ties. Given that experience, I’m compelled to ask whether you have ever spent any time in “Big Pharma”?
Are you aware of any of the regulatory requirements/industry best practices for quantitative immunoassays and biomarker methods? It appears not, so I pasted some links below. The underlying principles described therein could easily be applied to academic research, but most academic labs seem to be so hell-bent on publication that they ignore pesky controls and opportunities for orthogonal validation on a regular basis.
Given the speed with which it is possible to develop and characterize antibody pairs for IP-WB (and thus for ELISA or other immunoassays), there is really no excuse for using 1980s-era western blot/enzymatic detection methods in submissions to top-line journals. No excuse, that is, except the need to quickly publish sexed-up, bogus stories to lure those doddering industry fools into opening their checkbooks.
http://www.springerlink.com/content/p15124657g231008/
http://www.ncbi.nlm.nih.gov/pubmed/22415613
http://www.aapsj.org/view.asp?art=aapsj0902017
chirality’s comment hits the spot:
http://www.retractionwatch.com/2012/04/03/can-we-trust-western-blots/#comment-12606
i.e. For a half-way decent scientist the experimental data should be sufficiently robust that the scientist him/herself is convinced that it justifies the interpretation. Then it should be straightforward to convince others.
Why should a scientist wish to ensure that s/he is convinced by the data?:
(a) because s/he has decent standards of quality
(b) because s/he recognises that experimental science is incremental, and publishing work that is wrong effectively puts a brake on progress towards more ultimate goals.
I totally agree with Dave, WB Biochemist, Physician scientist and other similar comments on this thread. Any technique is susceptible to abuse. The people best able to assess this are the individuals who use the technique themselves. It’s very easy for individuals outside the subject to see examples of wrongdoing and to conclude that the problem is not so much with the cheats (or the easily misled) but with the technique…
..and that’s a problem with blogs. Need to be careful that the focus on a particular aspect of science (retractions that result from cheating or honest mistakes) doesn’t drift into zealotry.
On a positive note, this thread has been very instructive….
Exactly! This is my point. I loved the post.
@Ivan and Adam
I wonder if you will apply the same standards when you report on some paper in e.g. Nature news and views or on Lancet Newsdesk in the future.
I am happy to announce that, after much wrangling, I was able to prevent a collaborator from submitting a faked WB. I was sure he had manipulated the image, and I showed him clear indications of the fact. Most probably, I lost a collaborator. But science did not gain another faked WB for the pile.
I would advise that the above incident be reported to your institution. I had a similar situation recently (found faked data in a paper I was a co-author on, before it was submitted), and went straight to the chair of the department. The post-doc (lead author on the paper) admitted wrongdoing and is now unemployed.
My advice is it would depend on your status and the status of the collaborator who submitted the fake data.
If you have some seniority and the other scientist does not, then you might risk it. If you are a PhD student or junior post-doc and that other person is head of a group or well established, you are taking a big risk. Even if you escape any immediate blowback you will still end up being very unpopular where you are and may also find getting another job not so easy. Don’t like to be negative, but these are the realities of the situation.
If the collaborator is a foreigner or disliked, your chances might be better.
Thanks for the advice! Yes, I am in a lower position and the person responsible is actually the head of a group. They had very crudely faked an SDS-PAGE gel image into looking like a WB, so as to make the two a good “confirmatory pair” for their results. I never directly accused them of faking it. I merely found another unaccounted-for problem with the image and insisted on getting that changed. After much moaning and insistence I got them to use another, reliable SDS-PAGE image and to forget about using any confirmatory WB. The head of the group tried several excuses and explanations, but my behavior probably made him realize that it would eventually become evident that the image was faked, and now he surely sees that I did not fall for the blunt forgery.
I am sorry, but I cannot report him to the local institution. This is South America. A quick glimpse over this blog will illustrate how our institutions deal with such accusations. And yes, the institution is behind cases shown in RW posts, in more than one, actually. For a start, the institution’s responsible officer will have no idea whatsoever what a WB is, and will not lift a finger to learn the technical details. Everyone would claim I don’t know what I am talking about, that the results I saw were not final, that there was some silly mistake, that I was jealous of their prestige, etc., etc., and then strive to prosecute and persecute me in return.
I know now that this group has made similar maneuvers in the past. They hit high-profile journals. They get lots of funding in return.
I knew it was about time I left this country…
Bad science is bad science, and fraud is fraud, no matter where it is committed. Location should not matter (even though unfortunately it does in some cases). Regardless, there are ways around this, such as agreeing to be on a paper and then independently contacting the editor to alert them to the fraud. Maybe the investigator will be a little more intimidated if the message comes from an “authority” such as a journal editor, rather than a lowly colleague. When in doubt about your own strength, use the power of others to do the work for you.
I agree that flat out accusations can sometimes be a bit offensive. Careful use of language can be useful in such cases… “Hey, you might want to double check that figure, because if the journal thinks it is troublesome, they might report you to the funding agencies. You can thank me later for saving you this trouble”.
At least I prevented them from committing this one fraud, and from having me as a coauthor on it. For that, I am very happy.
@Vhedwig – “Post-doc’ (lead author on the paper) admitted wrongdoing and is now unemployed.” This is a good result. You’re not from South America, I assume? I would prefer to be where you are.
@Vhedwig Thanks for the tips. Yes, you are right on both counts. A friend used your first idea, and managed to earn one of the very retractions featured on this blog. Still, everyone here says the poor first author did not know plagiarism was wrong, and that the others were not aware of the problem (plagiarism of some 90% of the paper) when they signed off on it. The retraction came out only to satisfy the offended foreign party. Also, my friend told me that he has tried this at other times, but in most cases journal editors will not answer anonymous emails. Thus, the system is still quite resistant to remedying fraud, especially in South American institutions and young, small journals.
Concerning using gentle language, sure, this makes things much smoother. Still, I think in my situation it would not have helped, as the fraud could only be detected if the editor knew exactly what to ask and where to look (peer review is usually not THAT efficient), so it would have carried the message that I was going to blow the whistle. It is a very delicate position, trying to stop someone from committing straight-out fraud without telling them off. I hope never to have to go through this again, and will try to be more careful about collaborations. And I will surely move somewhere else; the question is how.
I don’t know anything about western blots, but as a diagnostic anatomic pathologist I interpret IHC on FFPE tissue on a daily basis. I am also involved in translational cancer research using tissue microarrays for rapid IHC screening of novel biomarkers. My experience has been that IHC is done well in diagnostic labs but almost always poorly by researchers. Industry validation has to attain diagnostic-proficiency levels of quality assurance, and I can say from personal experience that it tends to be superior to any kind of scientific validation. I am tired of researchers who complain about the unreliability of IHC when diagnostic labs can produce consistently high-quality staining, which we use for cancer diagnosis and prognosis on a daily basis.
My question is about western blots. I am an MSc student from Sri Lanka, trying to repeat a western blot for a protein of interest in my lab. It was done in the past by a senior post-doc, but when I repeat the same experiment I do not get the same result. I was wondering how to detect the fault in the previous experiment. If someone deliberately loaded extra or less protein sample to show up-regulation or down-regulation of a protein, how do you detect such errors? Of course, if the loading control is not from the stripped blot, how can we detect such errors? I don’t think all labs take the loading control from the same stripped lanes. Thanks in advance for your suggestions.
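One partial check for the uneven-loading question above is to always normalise each band to a same-lane loading control; deliberate over- or under-loading then shows up as a normalised fold change that disagrees with the raw one. A minimal Python sketch (numpy assumed; all intensity numbers are made up for illustration):

```python
import numpy as np

def normalized_signal(band, loading_control):
    """Divide each band intensity by its same-lane loading control.

    If lanes were loaded unevenly (deliberately or not), the loading
    control absorbs the difference and the ratios stay comparable.
    """
    band = np.asarray(band, dtype=float)
    loading_control = np.asarray(loading_control, dtype=float)
    return band / loading_control

# Illustrative example: lane 2 received roughly twice the total protein.
target = np.array([100.0, 220.0])  # raw target-band intensities, lanes 1 and 2
actin  = np.array([50.0, 100.0])   # loading control, same lanes

ratios = normalized_signal(target, actin)          # [2.0, 2.2]
raw_fold = target[1] / target[0]                   # 2.2x, looks dramatic
true_fold = ratios[1] / ratios[0]                  # ~1.1x after normalisation
print(raw_fold, true_fold)
```

This only works, of course, if the loading control genuinely comes from the same lanes (e.g., the stripped and reprobed membrane), which is exactly the caveat raised in the question; a control spliced in from a different gel cannot rescue the comparison.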
It is a plague we have observed in our capacity as chairman of a promotion committee. I have also observed it as an editor of the Egyptian Journal of Biochemistry and Molecular Biology (EJBMB). We insist on having a photo of the whole gel if the paper is to be accepted for publication.
We must take real action to prevent all sorts of misconduct. It is spreading like wildfire in developing countries, and no real action is being taken by the universities.