
Retraction Watch

Tracking retractions as a window into the scientific process

Scientist whose work is “not fully supported by the available laboratory records” to retract 8 more papers

with 13 comments

SK Manna

Yesterday, we reported that Sunil Kumar Manna, the head of immunology at India’s Centre for DNA Fingerprinting and Diagnostics, had retracted two papers for image problems.

Turns out Manna will be retracting eight more, he told us today. Here they are:

From the Journal of Biological Chemistry:

From Breast Cancer Research and Treatment:

From the Journal of Cellular Biochemistry:

From Apoptosis:

From the Journal of Clinical Immunology:

From Clinical Cancer Research:

The corresponding author of that last paper — whose NIH grant partially funded the work — is Bharat Aggarwal. Aggarwal, as we reported last year, is a highly cited researcher being investigated by his institution, MD Anderson in Houston.

13 Responses

  1. I think that there should exist somewhere a secret competition in which some scientists, selected among the more creative ones, are invited to beat the world record of retraction! This is the only explanation for all of these retractions! :)


    March 5, 2013 at 10:47 am

  2. Do we know who screwed up here? Is Dr. Manna courageously taking responsibility for things that happened on his watch, or is he admitting that he himself made errors (in fact and/or in judgement)?


    March 5, 2013 at 12:16 pm

    • Haven’t looked in detail at all of the figures from the papers to be retracted. However, why is it that only “representative” blots are shown? I would think that a better approach would be to repeat these experiments multiple times and then include a blot that best represents the quantified data. Perhaps I am just being naive, but wouldn’t this mitigate many of the issues seen with image manipulation?


      March 5, 2013 at 12:25 pm

      • If I understand your question, such a blot sometimes can’t be created without some kind of inappropriate manipulation, either of the image or of the sample loading. For example, let’s say you do the experiment 3 times and the induction of a certain protein is 2x, 3x and 4x over background. The average is 3x and you can use the middle experiment as your figure. But suppose instead the inductions are 1.25x, 4x and 4x. The average is 3x (with rather large error bars, but hey, sometimes that’s science) but there is no 3x blot to show.


        March 5, 2013 at 12:43 pm
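The arithmetic in that comment can be made concrete with a quick sketch (the fold-induction values are the hypothetical ones from the example above; Python's statistics module supplies the mean and sample SD):

```python
import statistics

# Hypothetical fold-induction values from the example above
run_a = [2.0, 3.0, 4.0]    # mean 3x, and a "middle" 3x blot exists to show
run_b = [1.25, 4.0, 4.0]   # mean ~3x, but no single blot shows 3x

for runs in (run_a, run_b):
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)  # sample SD: the "rather large error bars"
    print(f"inductions {runs}: mean = {mean:.2f}x, SD = {sd:.2f}")
```

Both sets of runs average to roughly 3x, but only the first contains an individual blot that actually looks like the mean.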

        • Then I would say… and this is how we do it in my lab… that you present the quantified data along with the best representative blot. As for the error-bar issue (at least if we are dealing with SE and not SD), repeat it more than 3 times. Our work is physiology-based and we typically have Ns of 8-12. Why you don’t see this much in cell/molecular biology I have no idea. I feel a rant coming on, so time for me to sign off. Thanks


          March 5, 2013 at 12:53 pm
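The SE-vs-SD point above can be illustrated with a tiny sketch (the per-sample SD here is made up; the point is only the SE = SD/√n relationship, which is why larger Ns shrink SE-based error bars):

```python
import math

sd = 1.5  # hypothetical per-sample standard deviation, held fixed
for n in (3, 8, 12):
    se = sd / math.sqrt(n)  # standard error of the mean shrinks as sqrt(n) grows
    print(f"n = {n:2d}: SE = {se:.3f}")
```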

          • Further to my previous point: if someone shows a blot from an experiment with no quantified data or measure of variability, I have a hard time buying the result, no matter how nice the blot. Should you not be required to demonstrate repeatability? How are we to know that this experiment wasn’t run 10 times and the data presented came from the only experiment that “worked”? Perhaps I’m being a touch jaded. Would like to hear others’ viewpoints, though.


            March 5, 2013 at 12:58 pm

          • “Then I would say… and this is how we do it in my lab… that you present the quantified data along with the best representative blot.”

            I think this is what most people try to do. The problems seem to be some combination of:
            (1) someone doesn’t like the blot (dirty, bubbles, not quite right) and, rather than re-run the samples, pretties it up, or
            (2) the whole thing is made up.

            #2 I can understand better. You spend years developing and testing a hypothesis, raising grants, writing papers and a thesis, you are convinced you are right, but the data never seems to line up quite right. You talk yourself into blaming random glitches in the experiments rather than faulting the hypothesis, and clean up the data to show that your hypothesis is right. It’s wrong, of course, but I understand it.

            I (personally speaking) do not understand why someone would risk their career over #1. Just re-run the experiment or show the dirty blot.

            As far as *showing* repeatability, science relies on trust. I trust that your bar graph ± SD accurately reflects 10 real measurements you made, and you trust me that my “representative blot” out of 5 independent experiments really is representative and that I really did it 5 times. I can understand why you might want to see all 5 blots (as a supplement or on an archive web site) but then maybe you need to be prepared to show me all 10 measurements.

            I don’t know that there is any way to guarantee fraud prevention and the more time I have to spend documenting yesterday’s work, the less new work I will get done tomorrow. So is there a right answer?


            March 5, 2013 at 1:12 pm

          • Hi,
            Physiology experiments offer easier and cheaper means of quantifying data; in molecular and cell biology, cost and time are constraints. Many experiments are too long, and the techniques are such that quantification is difficult, so scientists tend to avoid reporting SE or SD. In such cases, however, scientists perform corroborative experiments and demonstrate the same phenomenon with many different kinds of experiment. Say Oleandrin kills tumor cells. You can run several different kinds of experiment: microscopy of treated cells to look for features of apoptosis, western blotting to detect apoptosis markers, dye-based viability assays such as MTT, or simple growth curves from cell counts. So if we just want to deduce that Oleandrin kills cancer cells, we can in fact do that. The problem arises when we want to know the extent of cell death at a specific dose of Oleandrin.
            Quantification in molecular/cell biology is always useful and provides more extensive information; however, many scientists who perform microscopy tend to provide the ‘best photograph’ or a ‘representative blot’ instead of actually quantifying things.


            March 6, 2013 at 1:10 pm

  3. StrongDreams: excellent points, especially the last two. Someone (not me!) needs to write a methods paper or editorial piece on what constitutes acceptable methodology for blotting.


    March 5, 2013 at 1:16 pm

    • In reply to StrongDreams March 5, 2013 at 1:12 pm

      “I don’t know that there is any way to guarantee fraud prevention and the more time I have to spend documenting yesterday’s work, the less new work I will get done tomorrow.”

      No criticism meant. I am sure you do this anyway. Document as you go. For example, keep hardback lab books, and keep your results. Autorads with enough detail written on them (date, probe, washings) at the time and stored in those old box files. They do not go off. You can use punches to make holes near the edge and store them as pages in a real file. I do not see the trade off, or extra time.

      fernando pessoa

      March 5, 2013 at 6:14 pm

      • Notebooks, obviously. I guess I was thinking about the additional requirements if I had to prepare an online archive of my experiments to prove to reviewers or readers of my papers that I actually ran every replicate of every experiment I claim I did.


        March 5, 2013 at 9:03 pm

        • You might be interested in the idea of an electronic labbook – of which there are various implementations.

          These guys are not just keeping electronic lab books; they are making them open access.


          Personally, I would find it a bit irritating to scan every single thing – which is what they say they are doing; they claim to have no other lab books – and I would miss the doodling and scrawling of plans for experiments, as opposed to a blog post. But talk to the people who are doing this and they are fanatical about the approach.

          Who was Professor Platypus by the way? Is the example fair?

          little grey rabbit

          March 6, 2013 at 8:52 am

  4. @StrongDreams: “Is Dr. Manna courageously taking responsibility for things that happened on his watch, or is he admitting that he himself made errors (in fact and/or in judgement)?” – I hope there is no hidden agenda here. Is this his way of keeping his job intact? Only people from his institute know. Is he taking responsibility for other papers from his previous work?

    Ressci Integrity

    March 6, 2013 at 7:57 am
