Authors retract prostate cancer-grape seed compound paper for figure presentation error

University of Alabama researchers have retracted a paper claiming that a grape seed compound might have anti-prostate cancer effects.

Here’s the notice for “Proanthocyanidins from grape seeds inhibit expression of matrix metalloproteinases in human prostate carcinoma cells, which is associated with the inhibition of activation of MAPK and NFκB”:

Due to an error of the presentation of data in figure 2 of this article, the authors have requested it be retracted from the journal.

The abstract, as originally published online, read:

Prostate cancer (PCA) is the second most frequently diagnosed and leading cause of cancer-related deaths in men in the USA. The recognition that matrix metalloproteinases (MMPs) facilitate tumor cell invasion and metastasis of PCA has led to the development of MMP inhibitors as cancer therapeutic agents. As part of our efforts to develop newer and effective chemopreventive agents for PCA, we evaluated the effect of proanthocyanidins from grape seeds (GSP) on metastasis-specific MMP-2 and -9 in human prostate carcinoma DU145 cells by employing western blot and gelatinolytic zymography. Treatment of GSP dose-dependently inhibited cell proliferation (15–100% by 5–80 mg/ml of GSP), viability (30–80% by 20–80 mg/ml of GSP) and fibroblast conditioned medium (FCM)-induced expression of MMP-2 and -9 in DU145 cells. Since the signaling cascade of mitogen-activated protein kinases (MAPK) have been shown to regulate the expression of MMPs in tumor cells, we found that the treatment of DU145 cells with GSP (20–80 mg/ml) resulted in marked inhibition of FCM-induced phosphorylation of extracellular signal regulated kinase (ERK)1/2 and p38 but had little effect on c-Jun N-terminal kinase under similar experimental conditions. GSP treatment (20–80 mg/ml) to DU145 cells also dose-dependently inhibited FCM-induced activation of NFkB concomitantly with inhibition of MMP-2 and -9 expression in the same system. Additionally, the treatment of inhibitors of MEK (PD98059) and p38 (SB203580) to DU145 cells resulted in the reduction of FCM-induced phosphorylation of ERK1/2 and p38 concomitantly marked reduction in MMP-2 and -9 expressions. In further studies, treatment of androgen-sensitive LNCaP cells with a synthetic androgen R1881, resulted in an increase of MMP-2 and -9, which were completely abrogated in the presence of GSP (20–60 mg/ml). These data suggest that inhibition of metastasis-specific MMPs in tumor cells by GSP is associated with the inhibition of activation of MAPK and NFkB pathways, and thus provides the molecular basis for the development of GSP as a novel chemopreventive agent for both androgen-sensitive and -insensitive prostate cancer therapies.

Curtis C. Harris

Editor-in-Chief

The paper has been cited 69 times, according to Thomson Scientific’s Web of Knowledge.

Oxford University Press (OUP), the publisher of Carcinogenesis, tells Retraction Watch that a reader alerted Harris to potential issues with Figure 2 of the paper. Harris and two other scientists looked at the evidence and decided there was enough to ask the authors to explain what looked like potential image manipulation. The journal then followed Committee on Publication Ethics (COPE) recommendations and contacted the corresponding author, Santosh Katiyar of the University of Alabama.

Katiyar

…responded in reasonable detail agreeing that there had been ‘an error in presentation of data [in figure 2]’ and requested that the paper be retracted. Carcinogenesis retracted the paper.

The publisher

felt the answer was satisfactory and the Journal did not contact the author’s institution to request any further investigation.

OUP said it is not aware of any investigation by the University of Alabama. We’ve contacted Katiyar for comment, and will update with anything we hear back.

27 thoughts on “Authors retract prostate cancer-grape seed compound paper for figure presentation error”

  1. The gels in question clearly involved fabrication (there are virtually as many splice artefacts as lanes in Figure 2a panel 2, and Figure 4 also has an obvious splice artefact), and so the authors presumably hope to avoid a misconduct investigation by retracting the paper ‘on the quiet’.

    The editors did indeed follow COPE recommendations, but failed completely to follow common sense, which is to consider if other papers in their own journal also contain fabricated images. A 1-minute PubMed search finds:

    Carcinogenesis. 2003 May;24(5):927-36.
    Treatment of green tea polyphenols in hydrophilic cream prevents UVB-induced oxidation of lipids and proteins, depletion of antioxidant enzymes and phosphorylation of MAPK proteins in SKH-1 hairless mouse skin.
    Vayalil PK, Elmets CA, Katiyar SK.

    http://carcin.oxfordjournals.org/content/24/5/927.full.pdf

    Figure 3C and Figure 4C also contain obvious splice artefacts.

    I also suggest people look at the error bars in the histograms across these two papers. These appear to be a fixed proportion of the signal in every case – too good to be true – this does not happen in normal science. (A quick numerical check of this pattern is sketched at the end of this comment.)

    Katiyar has at least 200 papers… no wonder he wanted a quiet retraction…
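
    For anyone who wants to try that check themselves, here is a minimal sketch in Python; the bar heights and error bars below are invented placeholders, not values taken from the papers:

    ```python
    import numpy as np

    # Bar heights and error bars as one might read them off a published
    # histogram. These numbers are invented for illustration only.
    bar_heights = np.array([100.0, 82.0, 61.0, 37.0, 15.0])  # e.g. % of control
    error_bars  = np.array([10.1,  8.1,  6.2,  3.7,  1.5])   # SD or SEM

    ratio = error_bars / bar_heights
    print("error bar / signal:", np.round(ratio, 3))
    print("spread of that ratio:", float(np.ptp(ratio)))
    # Real replicate data normally make this ratio bounce around; a
    # near-constant ratio across every figure is the "too good to be
    # true" pattern described above.
    ```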

    1. Replying to myself (are you all asleep out there?) – 10 minutes more searching reveals:

      Fig. 1A
      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3258217/pdf/1472-6882-11-134.pdf

      claims to show the A431 line (according to Wikipedia this was derived from an epidermoid carcinoma in the vulva of an 85-year-old female patient).

      But the SAME IMAGE appears in
      Fig. 1A
      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3267770/pdf/pone.0031093.pdf

      where it is referred to as FaDu (pharyngeal).

      So image manipulation in 2004. Image re-use in 2011. Over 200 papers – the scale of fraud here could potentially be massive… it would be great if someone else could contribute to the list of papers involved (now standing at 4…)

    2. Good call on the error bars (SDs as well…) – ridiculously uniform throughout. This guy is funded up the kazoo as well…

      1. The re-use of images with different cell source attribution as pointed out by amw here is obviously egregious, and requires further follow-up.

        The issue of error bars requires more discussion, as uniform error bars in and of themselves do not necessarily indicate dodgy statistics. Uniform error bars will result from a balanced experiment with several factors exhibiting similar degrees of variability, analyzed together using a single statistical model such as an analysis of variance (ANOVA) – as the sketch below illustrates.
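
        To make that concrete, here is a minimal sketch (in Python, with simulated data – none of these numbers come from the papers) of why a single ANOVA model legitimately produces identical error bars for every group: the pooled residual variance gives each group mean the same standard error when group sizes are equal.

        ```python
        import numpy as np
        from scipy import stats

        # Simulated balanced experiment: 4 groups, 6 replicates each.
        rng = np.random.default_rng(0)
        groups = [rng.normal(loc=mu, scale=1.0, size=6) for mu in (10, 12, 15, 9)]

        f, p = stats.f_oneway(*groups)

        # ANOVA pools the within-group variance across all groups...
        n, k = 6, len(groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        ms_within = ss_within / (k * (n - 1))   # residual mean square

        # ...so every group mean gets the same standard error.
        se_mean = np.sqrt(ms_within / n)
        print(f"F = {f:.2f}, p = {p:.4f}, pooled SE per group mean = {se_mean:.3f}")
        ```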

        Indeed the PLoS ONE paper states

        “For statistical analysis of cell invasion assays, the control, gefitinib, erlotinib or GSPs treatment groups or combined treatment groups separately were compared using one-way analysis of variance (ANOVA) followed by the post hoc Dunn’s test”

        which in English means that several pairs of treatments and/or the control were compared, and Dunn's method for adjusting for multiple comparisons was used – the sketch below shows what such an analysis looks like. (I see no discussion of details such as how many comparisons were made, so readers – especially statistically savvy ones – cannot assess whether Dunn's method was carried out reasonably for these data, but this is a minor quibble relative to those raised about the image issues.)
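
        For readers unfamiliar with the procedure, this is roughly what that kind of analysis looks like – a hedged sketch with invented data, using the third-party scikit-posthocs package for Dunn's test (an assumption on my part; this is not the authors' actual code or data):

        ```python
        import numpy as np
        import pandas as pd
        import scikit_posthocs as sp   # third-party package for post hoc tests
        from scipy import stats

        # Invented example data: four treatment groups, eight wells each.
        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "invasion": np.concatenate(
                [rng.normal(m, 5, 8) for m in (100, 70, 65, 40)]),
            "treatment": np.repeat(
                ["control", "gefitinib", "erlotinib", "GSPs"], 8),
        })

        # Global one-way ANOVA across all groups first...
        f, p = stats.f_oneway(
            *(g["invasion"].values for _, g in df.groupby("treatment")))
        print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")

        # ...then Dunn's post hoc test, adjusted for multiple comparisons.
        pvals = sp.posthoc_dunn(df, val_col="invasion",
                                group_col="treatment", p_adjust="bonferroni")
        print(pvals.round(4))
        ```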

        Indeed I would argue that more papers should present results analyzed with better statistical techniques such as ANOVA instead of the endless sets of unpaired and paired t-tests that I all too often see from groups that do not consult with statisticians about appropriate analysis methods.

        One thing that Katiyar et al. do get right from a statistical point of view is that they do not put an error bar on the control bar when the analysis has been adapted to show differences with the control level ‘normalized’ to 1.0 or 100% (such as in Fig. 4B and 4D of the PLoS ONE paper cited). Putting an error bar on such a normalized control is a common problem in many papers, and one that indicates to me that the data have been poorly analyzed (not generally due to any wish to deceive – just due to ignorance of statistical issues).

        Poorly analyzed data does not indicate malicious intent, but does nevertheless leave the literature littered with excessive false positive and false negative conclusions, as has been well documented by researchers such as John Ioannidis.

        amw states above “I also suggest people look at the error bars in the histograms across these two papers. These appear to be a fixed proportion of the signal in every case – too good to be true – this does not happen in normal science.” Indeed it has become normal for non-statisticians to push data through simple t-test software such as that found in Excel, and this is a bad state of affairs. Normal science should involve the proper handling, analysis and interpretation of statistical findings.

        Once all the poor researchers featured on the pages of Retraction Watch are weeded out for good, and remaining researchers consult with statisticians to get more efficient and accurate experimental design and analysis methods into papers, you will see more and more plots with uniform error bars, so focus here on the obvious image problems rather than the statistics. Data will either have to be obtained from Katiyar and colleagues, or re-created from data tables and figures, to assess whether the statistics were manipulated or misrepresented, as the images obviously were.

    3. amw, Carcinogenesis clearly takes reader concerns seriously, so why don’t you contact them about this? It’s outside my area of expertise, so I cannot judge the validity of your claims, but Carcinogenesis clearly has some experts available.

      1. Marco: well, Carcinogenesis may take readers’ concerns seriously, but its editorial policy is not that straightforward. I had first-hand experience with them – they don’t even send some manuscripts out for review, saying the work is not up to the mark for Carcinogenesis – and now look what has happened.

      2. Not sending manuscripts for review is not uncommon. Many journals do so if the Editor, after skimming the paper, considers it highly unlikely that it fits the journal or would pass peer review. I’ve had it happen at several journals. Nature and Science have a pre-screening process, for example.

      3. I can understand it with Nature and Science, but not with journals in Carcinogenesis’s range… everyone is getting a bit stingy these days… this is what I call a biased approach.

      4. Ressci, why are you whining about things that save you time and effort? Since the editor is the one to make the final decision anyhow, isn’t it better to get rejected immediately instead of going through a lengthy review process?

      5. Agreed. It is good to know the decision sooner. However, when I see that similar papers are getting published in the same journal, I feel bad about the biased approach… that is all. No whining any more.

  2. In response to chirality (August 27, 2012 at 9:58):
    I believe that these small errors are to test if we are awake.

    Such cases show that what sounds like an innocuous retraction statement about one figure may have more to it.
    An index of retractions in a field divided by the number of publications in that field might be useful.
    Recently I read on this blog about the Getzenberg retractions, which also concerned prostate cancer.
    I know that some will say that letting people know about something will make people hide things more, but one can always say that.

  3. More examples of image reuse
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3192733/pdf/pone.0025224.pdf
    Fig. 5, duplicated images in panel D

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3145779/pdf/pone.0023000.pdf
    Fig. 5, panel E: the bottom right of the second image (silymarin 10 µg/ml) is the same as the top-left of the fourth image (silymarin 40 µg/ml)

    http://www.ncbi.nlm.nih.gov/pubmed/21225228
    Fig. 2A: the 3rd image on the top row is the same as the 3rd image on the bottom row (although shifted along slightly) – but the paper claims these are different cell lines.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3066413/pdf/bgq282.pdf
    Panel reuse across completely different conditions – the panel at the bottom right of 5B appears a total of 4 times!

    This last one is a Carcinogenesis 2011 paper.

    Conclusion: it took about an hour’s superficial skim-reading to find 8 papers containing clear image fraud… goodness knows what else is out there from this group. They are a factory…

    1. Wow, just wow! It just shows that one has to fight the very first retraction tooth and nail. A failure to prevent it draws a lot of unwanted attention.

  4. So, perhaps another practitioner of serial fill-in-the-blanks papers entitled: [family of chemicals] from [plant] reduces the expression of [gene/symptom] in [animal] [organ] [disease]. This really calls for the re-introduction of an old game called Mad Libs. The idea was to go around the room with the appointed scribe calling out the generic terms in brackets, without revealing the context, then reading the result. Someone really ought to do this for the titles and abstracts of typical papers in several fields which pop up in RW with unreasonable frequency. For example, a Stapel Mad Lib:

    “[Corny aphorism]”: The effect of feelings of [mood] and inadequate [personal characteristic] on attitudes toward [activity] and [group of people] social dynamics.

    1. Agreed:
      Aggarwal (MD Anderson Cancer Center, Houston)
      Das (UConn)

      are already known to have committed fraud on a massive scale involving research into natural products.

      Katiyar (Alabama) now looks to be in the same category.

      It is reasonable to wonder if this is actually a network of fraud given the similarity in subject matter. These three lab heads are likely at some stage to have reviewed each other’s papers and more importantly their grants.

      Regarding writing to the journals, as people have posted before, no one wants to be the whistleblower. It’s partly the career risk. And maybe more practically, for sanity – these are complex sagas which don’t always end the way one imagines they will. Doing it anonymously doesn’t work either. Having said that, I am an Academic Editor for one of the journals the Katiyar lab has published in, and part of me feels that the journal is devalued by what this lab has done. Food for thought…

      It would also help if RW could follow up such ‘unsorted’ cases with the journals involved (plus the Roman-Gomez / Gazdar lab massive fraud case which opened up a couple of months ago).

      1. AMW, first I want to commend you on your sharp eyes. You caught some patterns I don’t think I would ever notice! Second, I think you’ve raised an important point – a point that deserves more attention from this community. Being the whistleblower carries serious repercussions. Should junior scientists who find fraud in the literature report it? (I don’t mean fraud you see your advisor committing, but fraud that doesn’t involve you at all.) It seems to me that it carries too much risk. Could those of you who have reported suspected instances of fraud tell us about the potential fallout?

  5. @amw, speaking of Aggarwal, he just had a correction in PLoS One…
    http://www.plosone.org/annotation/listThread.action?root=52835

    What’s odd is that this was not on the list of papers reported as suspicious by 11Jigen or anyone else. What’s even more incredible is the explanation offered in the correction/comment:

    The protein extracts derived from cells treated with different concentrations of AKBA (0, 10, 25 and 50 µmol/L) were analyzed by gel electrophoresis. Each sample was electrophoresed in triplicate. At the end of electrophoresis, the proteins were transferred onto the nitrocellulose membrane and membrane was sliced vertically into three pieces:
    -Slice # 1: The membrane was sliced horizontally into two pieces; one probed for Bcl-2 and the other for survivin. The blot for Bcl-2 was stripped and reprobed for Bcl-xL.
    -Slice # 2: The membrane was probed for c-Myc, then stripped and reprobed first for cyclin D1, and then with COX-2.
    -Slice # 3: The membrane was sliced horizontally into two pieces, one was probed for MMP9 and the other for CXCR4. The latter was stripped and reprobed for β-actin.
    Since all the samples were run in the same gel, only one β-actin was used.

    This is not credible at all! If the blots originated from the same gel, as claimed, then the approximate sizes/shapes/slopes of the bands should all be the same. This is clearly not the case for any of the 3 grouped gels that Aggarwal is talking about. For example, in the Bcl2/BclXL/Survivin gel, the 2 bands on the right slope upwards in the Bcl2 gel, but downwards in the survivin gel. It is a downright lie to claim these things originated on the same gel! Same goes for the other groups of gels – the spaces between the bands are uneven.

    Who in the hell do these people think they are trying to fool?

  6. Somebody might like to take a closer look at the work coming out of the Department of Radiological Sciences, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Japan. I can’t access some of the papers (behind paywalls) but I will start the ball rolling with a panel duplication (Figure 8, control 1h and 3h):
    http://www.ncbi.nlm.nih.gov/pubmed/22311471

    Figure recycling (Figures 3 in both papers, control and HT in both papers – but notice the nice crops in the 2008 paper):
    http://www.ncbi.nlm.nih.gov/pubmed/17458712
    http://www.ncbi.nlm.nih.gov/pubmed/18224486

    Figure recycling with contrast adjustment and possible splices (Figure 5 in both papers) – the first four lanes look identical, while the last two look different – look at the distinctive first lane bcl2 as well as the cytochrome c SB lanes (bottom panel):

    http://www.ncbi.nlm.nih.gov/pubmed/18630528
    (note that figure 4 in this paper also contains a good old splice!)

    http://www.ncbi.nlm.nih.gov/pubmed/18636204

    I have a feeling there is a lot more out there.

    1. That’s very well documented and a good resource for doing something about this… very hard to ignore when laid out in this manner. Apparently the University of Alabama Dean is unresponsive, but that may change.

      I also see that Katiyar’s lab authored an Erratum in PLoS One in June 2012 regarding a 2011 paper poaching an image from two other published papers (i.e. adding three additional papers to the list…).

      The correction (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3368969/) reads:

      ‘In the published article, the beta-actin blot under Figure 2A (cytosolic) was duplicated unknowingly by error with other published papers (Figure 6B; Sharma et al., Mol. Cancer Ther, 9(3); 569, 2010; and Fig. 2B; Sharma and Katiyar, Pharm Res., 27: 1092, 2010). The authors would like to apologize to readers and to the editors for this error.

      The experiments were repeated in A549 cells with and without treatment with grape seed proanthocyanidins (GSPs) under identical conditions. Cytosolic fractions were subjected to Western blot analysis, and new data was generated to confirm and verify equal protein loading on the gel using antibody against β-actin and to replace the duplicated one. The data obtained confirm the results originally reported in the article. The corrected figure is available here…’

      This is typical uselessness by the journal (in this case PLoS One). Firstly, an Erratum is generally assumed to address a small error (typically in the journal’s formatting of the manuscript). This at best should have been a Correction. Secondly, if this were an isolated case, one could accept it as a lapse in good practice and leave it at that. But as has become clear, this lab has at least three other papers in PLoS One showing image duplication.

      Finally, the notice does not make grammatical or scientific sense. An image cannot be ‘duplicated unknowingly by error with other published papers’. To me the journal just regurgitated a vague sentence from the authors without the issue getting past the brainstem of the journal editor.

      Pitiful.
