Tsuji explains why JBC paper was retracted: Western blot problems

Last week, we reported on the retraction of a paper in the Journal of Biological Chemistry (JBC) that had one of the journal’s typically inscrutable retraction notices:

This article has been withdrawn by the authors.

Late last week, we heard back from corresponding author Takashi Tsuji by email. It turns out one of our commenters had basically figured out what was wrong. It was, as so often seems to be the case lately in retractions we cover, trouble with Western blots:

In September of 2011, an editor have informed me that the JBC office have received complaints concerning our published paper in JBC from a reader.  We have checked the indicated figures in compared to those source files according to the complaints carefully and found our mistakes.

We have displayed the data of Western blot analysis for beta-actin and we had used the same data in several figures by our mistake.  After the Western blot analysis for many samples, only the image for target protein was transferred to figure file by the trimming of the source file.  In these processes, those mistakes would be occurred.  We checked original source files and experimental notebook carefully, and confirmed the existence of those original data with reproducibility.  Furthermore, the meanings and explanations about our results in our paper are correct and need not to change the explanation in response to the replacements of those figures, even if the incorrect WB is replaced with correct one in those figures.

However, we should sincerely apologize to being our careless to prepare those figures and troubling on readers of JBC.  Thus, we have withdrawn our paper by ourselves.  We do not  plan to withdraw another our published paper in near future.

16 thoughts on “Tsuji explains why JBC paper was retracted: Western blot problems”

  1. I’m unfamiliar with normal practice in academic publishing. If the authors wish to retract their paper, is it more or less a done deal? Reading this explanation, I kind of want to hear that the editors encouraged the authors to publish a correction note, rather than making a full retraction, as the explanation seems to point to an illustration error, rather than fraud or misleading results. Given some of the less credible “corrections” in lieu of retraction that have been documented here, this seems to be the other extreme. Or is my naivety leading me astray?

    1. I’ve never been part of a retraction, but my understanding is that once a PI decides to retract, it’s largely a done deal. Retractions are generally embarrassing for a research group. It indicates that there is something so flawed about the work that it should not be part of the literature.

      The error is not simply “illustration.” Western blots are not drawn. They are images taken of the results. These images are supposed to be a concise presentation of the raw data, so any manipulation that affects their reliability makes them suspect. JCB had a rather well-done article about this a few years ago:
      http://jcb.rupress.org/content/166/1/11.full

      It’s a little like a crime scene lab using photoshop to combine an image of a ruler with a footprint. The scale of the footprint may not match the ruler, making the footprint seem like it belongs to a larger or smaller person than it actually does. The interpretation of the footprint becomes unreliable, and the behavior is quite unethical.

      I don’t know if this paper could have been corrected, but it seemed like more than one figure was affected. In a science paper, figures are selected to convey the most important data, so this probably indicates that several key results would require correction. Many groups will choose to retract in this case.

  2. Malcolm Tredinnick,

    I do take your point in general.

    In this case this might be the important part:

    “we had used the same data in several figures by our mistake”.

    How many is “several”?

  3. Whenever I see Westerns spliced together from bits and pieces, I wonder what was wrong with the gel… Extra bands? Non-specific signals? Degradation? And how much longer will various Western blot aficionados invite readers to eyeball their figures? We have been quantitating gels for as long as phosphorimagers have been available, but I still see folks running around with Kodak film… Sorry for the rant, but it seems like every other time there is a retraction, it’s a Western blot, either mislabeled or photoshopped wrong, deliberately or not.

    1. I think that uncropped, full images of Westerns should be required as supplemental material. I have seen people use cropped Western blots and then found out later that they were showing a nonspecific band and not the actual protein of interest!

      1. That wouldn’t be so bad, at least when the conclusions of the paper are based on a single Western blot or just a couple of them.

    2. Western blots have become a joke in published papers due to the failure of researchers to perform proper controls/replicates and the failure of reviewers to demand this data. At a minimum, relevant bands should be quantitated (e.g. LI-COR or phosphorimaging) with n=3, normalized to a reference protein, and reported as mean and SD. No conclusions without p values. Entire gels need to be shown and should include controls for antibody specificity. All specifics about the detection method should be reported (time exposed to secondary antibody, antibody concentrations, time exposed to developer, type of film, etc.).
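
      To make that concrete, here is a minimal sketch of what such an analysis might look like in Python; the band intensities and condition names are invented for illustration, and scipy’s two-sample t-test supplies the p value:

```python
# Hypothetical densitometry values; numbers and sample names are made up.
import numpy as np
from scipy import stats

# Three biological replicates per condition: target band and loading control.
target_ctrl   = np.array([1250.0, 1180.0, 1320.0])   # control lanes
loading_ctrl  = np.array([980.0, 1010.0, 995.0])      # e.g. beta-actin
target_treat  = np.array([2100.0, 1950.0, 2230.0])   # treated lanes
loading_treat = np.array([1005.0, 990.0, 1020.0])

# Normalize each target band to its own loading control.
norm_ctrl = target_ctrl / loading_ctrl
norm_treat = target_treat / loading_treat

# Report mean +/- SD (n=3) and a two-sample t-test p value.
print(f"control: {norm_ctrl.mean():.2f} +/- {norm_ctrl.std(ddof=1):.2f}")
print(f"treated: {norm_treat.mean():.2f} +/- {norm_treat.std(ddof=1):.2f}")
t, p = stats.ttest_ind(norm_ctrl, norm_treat)
print(f"p = {p:.4f}")
```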

    3. I would agree with most of that. Westerns have been spliced together for decades now; I guess it was, and is, done to make everything look nicer and to save space. With online space nearly unlimited, the online supplement is the place for this. I believe it is Nature Cell Biology that requires the original Western blots to be added to the online supplement: http://www.nature.com/ncb/journal/v6/n4/pdf/ncb0404-275.pdf I think that is a very good idea.

      As to the film, at least an original Kodak film is more difficult to tinker with. With all-digital phosphorimager data, you don’t have any “physical evidence” unless the lab installs tight IT security. Ideally, the lab would save primary electronic data from the phosphorimager, real-time PCR and whatnot in an unchangeable primary database, letting everyone (including the PI) work only with copies of these files. All this doesn’t help initially (unless you make everyone submit all raw data and notebooks together with the manuscript), but it would help clear up fraud allegations later.
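
      Even without a dedicated database, something as simple as hashing every raw file at acquisition and keeping read-only archive copies would give you a record to check later. A minimal sketch, assuming a hypothetical archive folder and manifest file (a real system would also need access control and off-site backups):

```python
# Sketch of tamper-evident archiving of primary data files.
# ARCHIVE and MANIFEST names are made up for illustration.
import hashlib
import shutil
from pathlib import Path

ARCHIVE = Path("primary_data_archive")
MANIFEST = ARCHIVE / "sha256_manifest.txt"

def archive_raw_file(raw_path: str) -> str:
    """Copy a raw instrument file into the archive and record its SHA-256."""
    src = Path(raw_path)
    ARCHIVE.mkdir(exist_ok=True)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    dest = ARCHIVE / src.name
    shutil.copy2(src, dest)
    dest.chmod(0o444)  # read-only archive copy; everyone works on duplicates
    with MANIFEST.open("a") as fh:
        fh.write(f"{digest}  {src.name}\n")
    return digest

def verify_archive() -> bool:
    """Recompute hashes and compare against the manifest."""
    ok = True
    for line in MANIFEST.read_text().splitlines():
        digest, name = line.split(maxsplit=1)
        current = hashlib.sha256((ARCHIVE / name).read_bytes()).hexdigest()
        if current != digest:
            print(f"MISMATCH: {name}")
            ok = False
    return ok
```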

  4. The question of whether honest errors should be corrected or should lead to a retraction continues to be an open issue. Contrary to what some here on this blog might say, there is no clearly defined line.

    If you correctly performed and reproduced an experiment (let’s say as shown in Western blot A, including correctly performed re-probing of the blot for the loading control), correctly reported the results in the paper and correctly discussed them, but erroneously added the loading control from Western blot B to your figure, should the error be corrected in a corrigendum or should the paper be retracted?

    If you take it to the extreme: what if that paper described the definitive cure for all cancers? Would you still want to retract it? Ethics are never free of context, even though principle ethicists don’t like to acknowledge that…
    So you say: it doesn’t matter, it still needs a retraction. OK, so the Nature paper is retracted. What to do with the findings? The next group has just reproduced the results, and just before submitting them to journal XYZ, they see the retraction notice. Now what, do they have original unpublished data? Technically, the scientific record has been cleared, so they should be able to report their groundbreaking “new” results in Nature. Who will be the Nobel laureate, the PI of the second team?

    I know this is really pushing it, and in reality most cases are a lot clearer. Nevertheless, I just wanted to show that it is not always pure black and white; there is often plenty of grey around, too.

    1. A singular “definitive cure for all cancers” is extremely unlikely, and even more unlikely to be disclosed in a paper based on a couple of Western blots. But of course you are speaking hypothetically, right? Not really. It is not a hypothetical argument, it’s demagoguery. Nobel Prize distribution is often debatable, but I can’t remember anybody losing out on one because of a wrong control on a Western blot. As an aside, I might add that it might do a lot of good for the quality of experimental science if people were scared of losing the Nobel due to sloppy work. Alas, that is not the case.
      The issue of errors and retractions is quite simple, actually. If the errors invalidate the conclusions, then the paper has to be retracted. If the errors are so pervasive that the overall technical level of the paper is substandard according to journal policy or the consensus in the field, the paper has to be retracted. If the impact of errors is marginal, then the errata etc. come into play. Any editor worth half his/her salary can deal with that. And their job is not to preserve sloppy scientists’ future chances of getting a Nobel, but to protect the literature from the flood of slop.

      1. @Pymloaddict:
        “A singular “definitive cure for all cancers” is extremely unlikely, and even more unlikely to be disclosed in a paper based on a couple of Western blots.”

        Sure. Replace “cure for cancer” with something less grandiose. Perhaps… “clear marker for Alzheimer’s that shows up twenty years before symptoms.” genetic’s point was that retractions are not free of context. If the work was solidly done and clearly important, but sloppily presented, the line may be a little blurrier.

        “The issue of errors and retractions is quite simple, actually.”
        “If the impact of errors is marginal, then the errata etc. come into play.”
        “Any editor worth half his/her salary can deal with that.”

        If these statements were clear-cut, we wouldn’t be seeing or debating mega-corrections in the literature. What’s marginal for you is not marginal for someone else, and vice versa.

        I’m also not sure that “marginal” is the right criterion for this. If the mistakes were strictly in the presentation, and not the original research, I think readers would want to see the corrected figures. Whenever scientists read a paper, they judge its quality; the fact that the work had to be significantly corrected would certainly factor into that. If the paper is retracted, the corrected work may never be shown to the public.

  5. Let’s try not to blur the lines unnecessarily. Why in Valhalla would solid work be sloppily presented? Solid work isn’t just sound design and expert pipetting; it is also the relevant controls, gels that don’t require photoshopping, mass specs that can be shown in public, sequencing all the constructs, etc. If the work is done right, there is no reason (outside the cases described in DSM-IV) why its presentation should be littered with errors. And to give you something to ponder: for some, missing out on a Nobel, a patent, or an interview with a NYT science reporter is the worst-case scenario. What about women getting mastectomies based on erroneously assigned markers for hereditary breast cancer? Kids pumped with anti-psychotics that turn them into fat suicidal zombies? If you want careers – easier to relate to for some – what about graduate students and postdocs spending years in blurred-lines, making-mutants-that-“make no sense” limbo because of a crappy structure published in a high-profile journal?

    1. Everyone is prone to error. If you never err, you don’t work. Errors are made in the design of a study, in the procedures, and of course in the presentation of the results. And then there is fraud, which is a completely different story. Both honest errors and fraud can have detrimental effects, no doubt. In that respect, science is no different than the rest of life.

      Why is solid work sometimes presented sloppily? The more competitive the work in the lab is, the more pressure there is to finally get the stuff out. Have you never had a PI yelling at you for still not having the manuscript out? Or have you been the head of a lab for too long to remember how that was? While experiments cannot really be sped up very much, the writing of the manuscript is often done under massive pressure. I can fully understand where these errors come from.

      Yes, the cropping and assembling of Western blots and other primary data is error-prone. From all I can see, it does appear that the loading controls are more often affected by errors in this process. And one obvious reason for this could be that they just appear to be the least important part of the figure (although I agree they are usually just as important as the rest…).
      And yes, it would help to require submitting images of the uncropped blots, with little frames drawn around the region that appears in the final figure; a minimal sketch of how that could be done follows at the end of this comment. That would force the writer to provide a unique blot for every lane/lane set.

      In conclusion, I agree that the vast majority of errors in figures with reused controls etc. are very fishy, and my first instinct is also to distrust such a publication. Nevertheless, if the story, the results and the conclusions of the work are true and honest but there has been a mix-up in the presentation, I still believe a correction is a viable approach. In fact, if you work in the same field, you would want to know from the authors whether the story is still true or not. Whether you believe them is then your personal judgment call.
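
      As for the frames around the cropped region, here is a minimal sketch of how an author might mark the published crop on the uncropped scan for a supplemental figure; the file name and crop coordinates are made up for illustration:

```python
# Sketch: draw the published crop region on an uncropped blot scan.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg

img = mpimg.imread("full_blot_scan.png")   # hypothetical uncropped scan
x, y, w, h = 120, 340, 400, 60             # region shown in the final figure

fig, ax = plt.subplots(figsize=(6, 4))
ax.imshow(img, cmap="gray")
ax.add_patch(patches.Rectangle((x, y), w, h,
                               edgecolor="red", facecolor="none", linewidth=2))
ax.set_axis_off()
fig.savefig("full_blot_with_crop_box.png", dpi=300, bbox_inches="tight")
```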

  6. Science is self-correcting to a certain extent, and thankfully so. Not only do the errors of commission need rectifying, but so do the many badly designed experiments that permeate the literature. I can’t count the number of times I have seen data published with no controls at all for loading. Is this not as bad as the mistake of putting the wrong loading controls in the figure? One has the potential to be fraudulent, but both are equally wrong and dilute the literature. I see that as a much more pervasive problem: just badly designed experiments. Can’t someone just publish a guide that says “this is how you do a controlled experiment for determining the amount of a protein by Western blotting or a transcript by RT-PCR” and make it mandatory reading for students? That would be a big contribution. Some sort of guide for students on how to do good science.

    1. There are so many books and reviews and whatnot out there describing how to do this or that… There is no universal protection against human stupidity, and peer review is also of only limited use against it. The shortcomings of the system are well known and discussed. None of this is a new phenomenon, although increased “publish-or-perish” pressure plus the explosion in the number of journals certainly hasn’t increased quality…

      The best protection is a basic distrust of anything you read and a clear, sharp mind to put everything into perspective.

  7. In reply to Jane’s Addiction February 7, 2012 at 9:02 am

    Dear Jane,

    When you write “I can’t count the number of times I have seen data published with no controls at all for loading”, could you give some examples from the literature?
