Anil Potti and colleagues retract ninth paper, this one in JCO

Former Duke oncology researcher Anil Potti has retracted another paper, marking his ninth withdrawal. The notice in the Journal of Clinical Oncology (JCO) reads:

“An Integrated Genomic-Based Approach to Individualized Treatment of Patients With Advanced-Stage Ovarian Cancer” by Holly K. Dressman, Andrew Berchuck, Gina Chan, Jun Zhai, Andrea Bild, Robyn Sayer, Janiel Cragun, Jennifer Clarke, Regina S. Whitaker, LiHua Li, Jonathan Gray, Jeffrey Marks, Geoffrey S. Ginsburg, Anil Potti, Mike West, Joseph R. Nevins, and Johnathan M. Lancaster (J Clin Oncol 25:517-525, 2007)

The majority of the authors wish to retract this article because they have identified several instances of misalignment of genomic and clinical outcome data. Although a reanalysis of correctly aligned data still demonstrated a capacity to predict patient response to platinum-based therapy, the accuracy of these predictions has been reduced from 77.8% to 72.2%, and as a result, the original conclusions have been compromised. The authors deeply regret the impact of this action on the work of other investigators.

The following authors agreed with this retraction decision: Andrew Berchuck, Gina Chan, Janiel Cragun, Holly K. Dressman, Geoffrey S. Ginsburg, Jonathan Gray, Johnathan M. Lancaster, Jeffrey Marks, Joseph R. Nevins, Anil Potti, Mike West, and Regina S. Whitaker.

The following authors disagreed with this retraction decision: Andrea Bild, Jennifer Clarke, LiHua Li, and Jun Zhai.

The following author could not be reached for comment: Robyn Sayer.

This article was retracted on January 27, 2012.

The paper has been cited 114 times, according to Thomson Scientific’s Web of Knowledge. The retraction was first reported by The Cancer Letter.

The retraction, one of about a dozen that Duke says are expected, along with another dozen or so partial retractions, is the second one from the JCO for the group. At the time of the first, in November 2010, the journal told us:

Prior to retracting any paper, JCO must receive a signed statement from each author saying that he or she agrees that the article should be retracted and that the wording of the retraction is satisfactory to him or her.

We’ve tried reaching a few of the authors who didn’t agree to the retraction, and will update with anything we hear back.

hat tip: Steve McKinney

38 thoughts on “Anil Potti and colleagues retract ninth paper, this one in JCO”

  1. Making up data on papers with clinical implications should be treated as a crime against humanity, in my opinion.

    1. Could not agree more. And yet this waste of O2 is not only spared a prison sentence but is allowed to practice medicine in South Carolina!

    2. The piece says nothing about making up data, and Potti has never been accused of that. So such comments are not accurate, and may be considered libelous. What he did do, and what has been clearly documented, is mishandle data, in that his group (and we do not know WHO SPECIFICALLY in the group) performed data manipulation that produced “misalignment of genomic and clinical outcome data”. That is, probably the famous “off-by-one” error documented clearly in Baggerly and Coombes.

      1. Whaaaaat!!!!
        Misalignment of genomic and clinical data = falsification of data to fit the hypothesis = making up data
        How difficult is it to see? It is the same as mislabeling the bands on a Western blot. The blot is real, the bands are real, but all I did was change the label. It is not fraud or falsification; rather, it is “just” mislabeling. My bad.

      2. You are totally wrong. Mishandling of data is not “making up data”. It is, possibly, making up relationships between data.

      3. Usually a publication list comprises published articles/reviews in journals and books – we don’t include conference presentations to jack up publication numbers. If you are the real Paul Thompson you will understand… if you include conference proceedings in your publication list, that is “misleading”. My opinion.

    3. Retractions of papers with significant errors or deliberate falsification should be considered a job well done. One could regard a retraction as something to be commended and rewarded, because at least those authors recognise the importance and value of correcting the literature. One possible concern is that more severe punishment and condemnation will not necessarily deter those who deliberately mislead, but will scare off those who would wish to do the right thing. Capital punishment has never worked.

      1. Of course it’s true that retracting papers that are false in significant ways is commendable, though this is rarely done without pressure from other scientists, editors, or the host institution. Retracting papers that result from errors made in good faith is commendable, but nothing less is expected of a serious scientist.

        It’s inevitable that retractions due to falsifications will result in further punishment. If the falsifications are serious one is likely to suffer some of: losing one’s funding; being banned from further grant applications for some period; losing one’s job; losing the respect of one’s peers…

        Yes, capital punishment doesn’t work (at least not as a deterrent). But I can’t think of any science cheat who has been executed for their misdemeanours! In sane societies punishments are appropriate to the “crime”. Some punishments (losing the respect of one’s peers) are unformalised but serious nevertheless.

        Of course the vast majority of scientists don’t need deterrents in order to work and publish honestly!

    1. Merriam-Webster defines robbery: “the act or practice of robbing; specifically : larceny from the person or presence of another by violence or threat,” where larceny is the act of theft. So, if I take a bag of Doritos from a store without paying, that’s theft. If I hold the cashier at gunpoint while taking the Doritos, that’s robbery. Because robbery is theft+violence, it is always more serious.

  2. Paul Thompson said “The piece says nothing about making up data, and Potti has never been accused of that.”

    I am surprised at this interpretation because the implication has always been that there has been fraud. Why would someone be forced out of their faculty position for a genuine mistake? When there is a long string of “mishandled data”, it has to arouse suspicion in anyone of average intelligence (and gullibility!).

    1. Have you read anything about the case? For instance, Baggerly and Coombes (2009) go over several of the papers, documenting in close detail the following errors:

      1) Errors in the composition of a test set of cases, in which numerous repeats of subjects are included, some with one label, some with another. Making up data? Fraud? Errors of simple carelessness? B&C do not place a label, merely note the fact.

      2) Errors in the labeling of microarray rows, in which the “off-by-one” error, or mis-synchronization of rows, seems to be the issue. This is not “making up data”, but mishandling data.

      3) Errors of the labeling of conditions, in which condition X is called “resistant” at one point and “sensitive” at another point.

      The issue is that serious, manifold, frequent, repeated errors occurred. But there is no suggestion in any of the Potti discussions that they “made up data”, in the sense that they filled in values in a spreadsheet that were not there.

      B&C state: “In particular, they illustrate that the most common problems are simple: e.g., confounding in the experimental design (all TET before all FEC), mixing up the gene labels (off-by-one errors), mixing up the group labels (sensitive/resistant); most of these mixups involve simple switches or offsets. These mistakes are easy to make, particularly if working with Excel or if working with 0/1 labels instead of names (as with binreg).” No “making up data”.
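      For readers unfamiliar with why an off-by-one offset is so destructive, here is a minimal sketch (using made-up gene names and values, not the actual study data) of how a single-row shift between a label column and a value column silently reassigns every measurement to the wrong gene:

      ```python
      # Hypothetical illustration: an "off-by-one" error shifts every gene
      # label onto its neighbor's value, as can happen when a header row is
      # mishandled in Excel. Gene names and expression values are invented.
      gene_labels = ["TP53", "BRCA1", "ERBB2", "MYC"]
      expression = [2.1, 0.4, 3.7, 1.2]

      # Correct pairing: each label matched to its own measured value.
      correct = dict(zip(gene_labels, expression))

      # Off-by-one mixup: labels shifted down one row relative to the values,
      # so every gene now carries the value measured for a different gene.
      shifted = dict(zip(gene_labels[1:], expression[:-1]))

      print(correct["BRCA1"])  # 0.4 -- the value actually measured for BRCA1
      print(shifted["BRCA1"])  # 2.1 -- TP53's value, silently misattributed
      ```

      Note that the shifted table still looks perfectly plausible on inspection — every label has a numeric value — which is why such errors are easy to make and hard to spot without the forensic reconstruction B&C performed.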

      Potti lost his job and may go to jail for a lot of reasons, and he will never again be in the position to run a large ticket cancer research lab. I don’t think it was because he “made up data”. It was for many good reasons, but that is not one.

      B&C also examined the basic idea that Potti’s group used. Using the data correctly, and performing the analyses without making the errors that Potti made, they concluded:
      “A broader question is whether this approach could work if applied correctly. We don’t think so. We have tried making predictions from the NCI60 cell lines when we step through the process without the errors noted above, and we get results no better than chance.” So, Potti used an ineffective method, and did it wrong.

      However, read the Baggerly and Coombes piece yourself. All scientists should read the paper.

      Keith A. Baggerly and Kevin R. Coombes (2009). Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Annals of Applied Statistics 3:1309–1334.

      1. I have followed the case – and with a somewhat jaundiced eye (more than you perhaps).

        You have chosen to interpret “made up the data” literally – as in creating data when there are none.

        I look at it differently. To me, “made up the data” includes omitting relevant data, creating fictitious data, mis-aligning rows/columns intentionally, and other sorts of manipulations.

        Isn’t it interesting that all these “innocuous” mistakes have been spread across several papers – and have all resulted in significant correlations emerging where there are perhaps none? I wonder why the simple mistakes did not result in significant correlations being hidden or being reported as non-significant. True random errors should not favor one outcome.

        Louis Heren could ask, eloquently, “why is this lying bastard lying to me?” – but unfortunately, in analytical journal articles, one has to pussyfoot. However, I draw the line at calling what seems to be fraud a mistake.

      2. @Jayesh Mehta:

        You bring up a good point. I’m not sure that we’ve seen the entire story, though. Published data are usually success-biased. It’s possible that Potti’s lab was filled with these kinds of errors, which led to as many negative results as positive results. To our eye, all the results are positive because only the positive results were published. Of course, it’s also possible that Potti’s lab pressured everyone to come up with positive results, even if they had to “fudge” the alignment to do so. I don’t think that Retraction Watch has enough data to distinguish the two, although I haven’t followed the Potti disaster very closely.

      3. “Make up data” means a specific thing. “Make up” means “create artificially”. “Data” are values.

        The misalignment and mislabeling errors are not of that sort. One might argue that the composition of the “training set”, with its duplicated observations, involves “making up data”. I prefer a more accurate term.

        I have no idea if he made the mistakes he made intentionally or carelessly, but none of these other mistakes involves “making up data”. It’s really important to be clear and accurate. They made 3-4 distinct mistakes, and carelessly and glibly confusing one mistake for another merely muddies the water. What is the value of mis-stating the facts as clearly described in Baggerly and Coombes? If you have read that, as you claim, you would not make the glib statement that they made up data.

        Accurate statements of what they did are important. By mis-stating and unclearly describing what they did, others might continue careless errors of the sort that Potti’s lab committed. We still do not know who specifically made errors, but by describing the errors correctly (which I am trying to do), we can define corrective measures which make such errors less possible.

      4. @paul thompson: Please read my comment above (February 8, 2012 at 10:29 am). If someone has only 200 publications and a few hundred conference abstracts – and he/she writes 700 as the total number of publications in his CV – what is this called?

      5. Ressci integrity: I have no idea what it is called. In this circumstance, it is called “irrelevant” and possibly “confusing one issue with another”. What specifically and precisely is your point with these hypothetical CV issues?

      6. I guess the point is that if someone has been found to deliberately lie / manufacture aspects of their resume (e.g. by inflating publication numbers, falsely claiming to be a Rhodes Scholar) it is common sense (and human nature?) to question the integrity of other aspects of their work. Especially if these other aspects are littered with discrepancies. Far more than 3-4 mistakes were made in this situation.

      7. You wrote to Schmuck: “Why don’t you post with your actual name? You are making up data with a name like that.” I thought I would check whether you use your actual name.

  3. I suppose claiming to be a Rhodes Scholar was a careless mistake too…

    Clearly, some people believe there was deliberate fraud – and some don’t. Let’s leave it at that.

    1. I’d agree that Anil Potti is definitely a narcissistic schmuck, but he’s not the only author on the papers. Given that so many reputations are at stake, I think it’s wrong to say that all of the mistakes are deliberate fraud until there’s proof of the fraud.

      1. I agree with that. Someone screwed up, but in today’s environment, who knows who? I do know that I asked a person at Duke a question, and the reaction was strong and not positive. Duke has lawyered up, Duke is getting sued, and no one is gonna volunteer anything. I really wonder if we will ever know important matters about this case.

        Note that on Sunday, 60 Min will have a feature on this case.

      2. The question is, why are you outing people and discussing their CVs on this list, and making an idiotic judgement about it? Pretty presumptuous of someone who is hiding behind a nom de post. What possible right do you have to make any comments on anyone’s CV, and on their choices, one way or the other? You are not a judge and have no right to make such comments. Post your name!

      3. Agreed. I am not a judge, and everyone has their own view of what to include and not include in their CV. The case being discussed in this post was also about what he indicated in his CV (Rhodes Scholar). As someone mentioned above, maybe it was a careless mistake. Who knows… The issue of anonymity has already been discussed many times on this blog. Again, apologies.

    1. That settles nothing. 60 Minutes is a news program. They gloss over stuff. The term “data manipulation” covers a huge area. Nevins, again, is a physician. He doesn’t know bioinformatics, biostatistics, or other stuff. I sent a note to Baggerly and asked him if he believes that the term “data manipulation” is accurate. He said that he was not sure, but that systematic, consistent, and always favorable data errors occurred.

      It’s a mistake to conclude anything from a news program. If they had a news program saying that some person in prison is innocent, are they really innocent? Who can say? They gloss over a bunch on TV.

      1. I think that if Nevins said it himself, then that says something. By your logic, Potti is also a physician, so he shouldn’t be presenting evidence about this type of research.

        We don’t really know what Baggerly thinks because the show didn’t ask him (or perhaps they did, but they didn’t show it). What it may mean is that Baggerly is a bit more apprehensive about characterizing this type of thing as manipulation whereas Nevins isn’t. The latter being far closer to the situation than the former.

        Dr Califf also said that there was ‘an asteroid’s chance of hitting earth’ possibility that the data were incorrectly switched to gain the positive result.

      2. 1. 60 Minutes glossed over stuff.

        2. Nevins is a physician.

        And what Paul Thompson says is right.

        I have to say I find PT’s ongoing posts (fairly aggressive and strident too!) trying to label the Potti affair a series of mistakes somewhat peculiar.

  4. This was serial fraud. In turn they received millions of dollars in funding and Duke received countless millions in support of a new Cancer Center and other related initiatives. A Ponzi scheme in essence. Sadly, Nevins was probably the biggest beneficiary. This was about money and fame – and has cast a renewed doubt about ethics in the scientific community in general. It also took money away from ethical investigators doing good science.

    1. @PJ: it has always been the case when a high-profile case appears. Several careers have been cut short because of lack of funding…

    2. Events like these, particularly in a highly respected medical institution, bring a negative thought to the human mind – about TRUST. The medical profession is supposed to be the most trustworthy. A key player in this episode is from my home state (A.P., India) and attended a well-respected high school. I watched the 60 Minutes segment and was surprised to hear the reporter mispronouncing his last name. It should (I think) be pronounced like “Poland” – “Potti”. The word “Potti” in the Telugu language means short.
