Researchers retract breast cancer study after realizing they were using the wrong antibody

A group of researchers at Istanbul University has swiftly retracted a paper they published in March in the British Journal of Cancer once it became clear that they were using the wrong antibody.

Here’s the notice for “Clinical significance of p95HER2 overexpression, PTEN loss and PI3K expression in p185HER2-positive metastatic breast cancer patients treated with trastuzumab-based therapies:”

It has been brought to our attention that, as a result of a miscommunication, the antibody used in this study in order to determine the expression of p95 HER2 in metastatic breast cancer patients is in fact directed against p95 NBS1, a component of the MRN complex, and is completely unrelated to p95 HER2. Therefore, a relationship between p95 HER2 overexpression and outcome cannot be established based on the results described and we wish to retract our paper.

The authors, the editors of British Journal of Cancer, and the referees of this paper are grateful to colleagues in the field who have brought this problem to our attention and we apologise for any confusion that has, inadvertently, been caused.

The case — the handling of which we applaud — reminds us of another retraction involving the wrong mice.

20 thoughts on “Researchers retract breast cancer study after realizing they were using the wrong antibody”

  1. Why should we applaud this type of blatant failure?

    We have a tendency to equate substandard review with certain types of publishers which are better left unnamed. Here we have a genuine impact-factor-over-5 Nature Publishing Group journal which supposedly has a well-run and efficient, if typically a bit slow, review system, and the reviewers did not notice that an antibody to a totally different antigen was used. This was quickly enough noticed by a number of readers.

    This should be a case where the Chief Editor makes a statement on behalf of the journal and explains what kind of action he will take to ensure a clean scientific record without retractions.

    And what do we learn? The anonymous editors and reviewers are just grateful to the readers who did what they themselves should have done. No remorse. The only lame apology is for INADVERTENT confusion. The editors are learning quickly from the retractionists, who always compose their layouts from the wrong blots inadvertently. The business then continues as usual.

    No applause from me for this one. The responsible editor is hiding like a schoolboy who has broken a window.

    1. The way I read the retraction notice is that there was a misunderstanding about what antibody they had acquired from someone else (the “miscommunication”). It was either this colleague or someone else with intimate knowledge who pointed out the mistake after publication.

      I’m sure you know that it is impossible to tell one antibody from another by eye. And if they both bind a protein of about 95 kDa, you will never discover the mistake. It takes someone with specific insider knowledge to point out the error.

      1. I haven’t read the paper, but assuming that it was readers who picked up on the error, it would be logical to suggest that aberrant immunolocalisation in the immunohistochemistry for HER2 was presented in the published figures. The NBS1 antibody is a nuclear stain, while HER2 is assessed based on completeness of membrane staining. These two cellular compartments are easily distinguished on routine slides. In fact, pathologists rely on correct immunolocalisation of signals as quality control.

        If this was the case, then it’s a gross error (akin to using the wrong gel for the wrong experiment) which should have been picked up at any stage of the project.

          1. The paper is paywalled to me, but I can see your link to the figure. How strange.

            Anyway, I notice that the retraction statement refers to “colleagues” rather than “readers”. But I was assuming Westerns rather than immunolocalization.

      2. Dear Dan,

        I value your insight. With great respect, I still feel the mistake definitely should have been avoided. In the field of surgical pathology, you must be certain of the specificity of your antibody. No exceptions.

        Thus:

        1. You refer to the paper in which the specificity of your antibody is established, whether it is commercial or obtained home-grown from a colleague, and you check the packaging and data sheet if it is commercial.

        2. If there is no such paper, you will test it yourself and publish the evidence along with your primary results. You will also test it if you get an aliquot without packaging from another lab.

        3. If you review a manuscript, you will check that #1 or #2 is fulfilled; if not, you will ask for revision to that effect, and if the data are still not provided, you will reject the revision.

        I did not read the notice the way that you did. They could always be more specific, as we likely agree, so that everyone would get the message in the same way.

        It turns out your reading may not have been better than mine.

        This is what they state:

        “After titration, primary antibody was added manually: rabbit monoclonal [Y112] to p95 NBS1 (Abcam Inc., Cambridge, MA, USA. Cat no: ab32074, lot no: GR50908-1, 1 : 100 dilution rate).”

        So it was a commercial antibody and no acknowledgements to outside parties. The miscommunication may have been internal, e.g. when ordering the antibody or when instructing the technicians.

        In my sincere opinion, the reviewers should have observed that NBS1 is not HER2.

        Also, one would expect that a reviewer qualified to review this paper would know that HER2, one of the most widely studied antigens in surgical pathology, is a membrane antigen, not a nuclear one. The p95 variant is additionally found in the cytoplasm, but not in the nucleus. What the paper shows is EXCLUSIVELY nuclear localization (p95 NBS1 happens to be a nuclear antigen).

        Even if they did not know this about one of the most common antigens used in surgical pathology, had they insisted on a citation for its specificity, the authors would have found out their mistake immediately. Simply reading the manufacturer’s product data sheet to look for the appropriate reference would have given the answer. Of course, they should have read it in the first place.

        Thus, the process failed both on #1 and #3.

        Even if the reviewers for some reason ignored both the discrepancy between the antigen studied and the antigen against which the antibody reacts and the labelling pattern that was the opposite of the expected one, they should have spotted something amiss in the legend to the corresponding figure: “p95 expression positive: tumours were scored when less than or equal to 50% cells showed nucleolus and cytoplasmic staining detected with the anti-p95 antibody”, for at least two reasons: primarily because there is nothing like this to be seen in the image, which shows uniform labelling of only nuclei, and secondly because of the nonsensical legend (nucleolus vs. nucleus, and positive if LESS than).

        They also had the red flag of the authors trying to bend over backwards to justify their findings: “Application of p95 antibody caused nucleus staining as well as cytoplasmic. This nucleus staining did not impede the scoring”. In fact, as stated, their image shows exclusively nuclear staining. You should not submit a paper if your results are not what you expect; there are no exceptions.

        Many other details tell a tale of sloppy research and writing and of sloppy review, such as misspelling one of the most commonly used statistical tests as “Fischer’s exact test”, repeated use of sentences like “Thirty-three patients were found to be positive for p95 expression (33%)” for a series of exactly 100 specimens, and a discussion that does not contain a word reflecting on the strong and weak points, none of which seems to have bothered the reviewers and editors.

        Thus, in spite of the thumbs down, I stand by my opinion that the editors should bear some responsibility for the blatant mistake of “inadvertently” accepting this paper.

        1. So it was a commercial antibody and no acknowledgements to outside parties. The miscommunication may have been internal, e.g. when ordering the antibody or when instructing the technicians.

          Hmph. Compare here (PDF).

          1. Your point being?

            It’s another Turkish group using the same NBS antibody as a proxy for p95-HER2. I was just musing about the nature of the “miscommunication.” There is a proprietary p95-HER2 antibody, but as I’m sure you can tell, I’m out of my depth.

          2. Well spotted. Their methods and results were so vague that I was not sure on first reading, and the images are almost uninterpretable. However, on second look, just at the end of the discussion, it is unequivocal.

            I am not blaming the editors or reviewers of JBUON the way I was blaming those of BJC. They probably just did what they were supposed to do: accept the paper for the 380-euro flat rate.

            I agree that the Istanbul group may have got a hint from this paper.

      3. It’s easy enough to order siRNA and knock down a target to show antibody specificity in IFM studies. I ran that control just yesterday.

        1. Is it really that easy?
          Does siRNA always give you >90% knockdown? If so, I wouldn’t mind a ref and would give it a go myself.

    2. We should be exceedingly careful about being too harsh when people admit mistakes. It’s the expectation of perfection that causes people to hesitate to admit such flaws. Everybody can make a mistake, no matter how careful. And if admitting to a mistake can end your career, even the most moral researcher will hesitate to step into the spotlight.

      Mistakes are fine, false results are fine, we can weed them out if everybody is open and focuses on enabling scrutiny and reproduction of results. If single mistakes (even stupid ones) are not survivable, we are encouraging fraud, not fighting it.

      Part of the problem is that we expect papers to be scientific truth. They aren’t. At best, they are an open and honest report of something that you did, a result you observed, and a _tentative_ conclusion. The problem starts when the system encourages you to tie your reputation to the result rather than the report.

      Finally (and more to the point), peer review is not a perfect system. It’s a predictor and an aid to the editor, but it’s effectively a test with a very small sample size. There’s always a chance that a flawed paper will get through. The real test is the scrutiny that happens after publication. And again, if a result doesn’t hold up, that should not be an indelible tarnish on the reputation of the researcher.

      The only addition I’d like to the retraction is that the authors will review their procedures to ensure that such mistakes are avoided in the future (and tell us what their conclusions are, so we can all learn from them).

      1. I am not criticizing the authors for admitting and correcting their mistake.

        I am aiming my criticism at the editors, who in the retraction notice are avoiding responsibility for publishing a paper they should never have published and are hiding behind the backs of the authors.

        I am trying to extend what you say:

        “Mistakes are fine, false results are fine, we can weed them out if everybody is open and focuses on enabling scrutiny and reproduction of results.”

        and

        “…will review their procedures to ensure that such mistakes are avoided in the future (and tell us what their conclusions are, so we can all learn from them).”

        to the editors who did exactly the opposite!

        1. To be honest, my post ended up more as a rant in general than a response to your comment. I think we agree in principle.

          This is not my field, so I can’t tell whether the mistake should have been caught by peer review, but I still think expecting too much from peer review is dangerous for similar reasons. It’s a flawed system (with some value regardless), so we should treat it as such, and still expect flawed papers to be published even if everybody does their part perfectly. I can’t judge here whether the editors and reviewers are at fault or whether this is just one of the papers that slipped through the net.

    3. Similar things could happen even in well-managed research labs, and they are not always the sole responsibility of the scientists. An example: we recently had trouble calibrating our pH meters despite all kinds of futile attempts and rigorous observance of the protocol. Finally, in desperation, we contacted the supplier (a respected company). It turned out that they had made a colossal error during the manufacturing process and shipped unusable calibration solutions to hundreds of labs worldwide… This problem was of course blatant, but it is easy to imagine situations where the differences may be undetectable. If we ostracise researchers who openly admit their failures, we will definitely create a negative atmosphere.

    1. “A miscommunication”? Communication always takes place between two or more parties. The excuse given (and thus the retraction notice) is unclear. Who miscommunicated with whom? One has to carefully check these things BEFORE an experiment is even conducted, not weeks or months later, after a paper is published. Other work by these authors involving antibodies should be carefully checked now. For all we know, they may have used bananas instead of apples elsewhere.

  2. Look on the positive side: they appear to have discovered a wholly unexpected negative correlation between over-expression of the MRN complex and response to trastuzumab.

    Who would have thought?
