Doctor suspended in UK after faking co-authors, data

A doctor in Manchester, UK, has received a year’s suspension from the Medical Practitioners Tribunal Service.

Gemina Doolub admitted that she fabricated research data and submitted papers without the knowledge of her co-authors, including faking an email address for one co-author, a news story in the BMJ reports. The research in question was the subject of two retractions Doolub received in 2013, one of which we covered at the time.

Doolub’s research examined ways to treat and avoid microvascular obstruction — blockage of the heart’s smallest blood vessels. Doolub did the work while at Oxford.

“Intracoronary Adenosine versus Intravenous Adenosine during Primary PCI for ST-Elevation Myocardial Infarction: Which One Offers Better Outcomes in terms of Microvascular Obstruction?” was published in International Scholarly Research Notices Cardiology and has not yet been cited, according to Thomson Reuters Web of Science.

As the BMJ reports, in that paper,

she cited Erica Dall’Armellina, a clinical research fellow at the John Radcliffe Hospital, as a coauthor, fabricating an email address for her. Neither Dall’Armellina nor Oxford University had approved the paper.

In admissions made to the tribunal before the hearing, Doolub admitted fabricating research data for the paper and allowing flawed, erroneous, and confidential data to be published.

The retraction note, published in December 2013, confirms that account:

This article has been retracted upon the authors’ request, as it was found to include erroneous data that their findings and conclusions cannot be relied upon. Additionally, the article was submitted for publication by the author Gemina Doolub without the knowledge and approval of the other author Erica Dall’Armellina.

We covered the other retraction mentioned in the BMJ story — a conference abstract — which at the time seemed “like a good example of researchers doing the right thing.” However, the BMJ story reveals some extra information that was not mentioned in the retraction note.

In “Does Intracoronary Adenosine Injection During Primary PCI Reduce Microvascular Obstruction in Patients Admitted With STEMI?,” the BMJ reports, Doolub cited without permission,

Colin Forfar, a consultant cardiologist at the trust, as coauthor. Doolub admitted falsely stating that the study had been double blind, falsely naming Forfar as coauthor, and allowing the article to be published with flawed or erroneous data.

To refresh your memory, here is the 2013 note in full, published in the Journal of the American College of Cardiology:

This article has been retracted at the request of the author.

The prevalence of MVO was reduced in the adenosine-treated patients (45%) compared to 85% of control patients (P=0.0043). We found that the size of MVO in adenosine-treated patients was significantly reduced (0.35g) compared to 0.91 g in the control group (P=0.027). There was no statistically significant difference in TIMI flow and clinical outcomes after primary PCI.

Excel software was used to calculate the p-values. On recalculation using a newer version of the programme, the values are coming back different: Prevalence of MVO comparing adenosine to non-adenosine is now 0.15, therefore non-significant. Also the P-value for the mass of MVO in adenosine versus non-adenosine is 0.34, again non-significant.

According to the hearing chairman William Coppola, Doolub’s actions didn’t hurt patients, but were nonetheless concerning, the BMJ reports:

Even though the studies had posed little risk to patient health, being too small to change practice, “you nevertheless chose to place your interests before those of patients in general and the wider profession as a whole,” said Coppola.

The BMJ notes that Coppola apparently considered a harsher sentence, but took other factors into account:

Doolub, a native of Mauritius who qualified at Newcastle University in 2009, had shown insight and remorse, said Coppola, but had also continued to give false reasons for her actions as recently as last month, when she signed a witness statement which claimed that “I included Dr Dall’Armellina as coauthor of the study as I could not have completed the study without her assistance.” She later admitted that this was not true.

“The tribunal considered this to be a finely balanced case,” said Coppola, warning Doolub that her fate hovered between erasure and maximum suspension. Ultimately, he said, her frank testimony at the hearing and the supportive testimonials of more recent colleagues had told in her favour. Consultants with whom she was currently working, who knew of her misconduct, were willing to work with her again after any suspension, he added.

“The tribunal concluded that the mitigating factors set out above were just sufficient to indicate that the public interest could be met by a sanction of suspension,” he said. “In making this decision, the tribunal also took account of the public interest in keeping the services of a good doctor.”


13 thoughts on “Doctor suspended in UK after faking co-authors, data”

  1. I wonder about the “forces” that compelled her to falsify her data and drag down the names of her two colleagues. Why take this totally unnecessary risk? Is this a form of ‘identity theft’?

  2. Minor correction:
    I believe that International Scholarly Research Notices Cardiology was never indexed by Web of Science

  3. Is this what passes for “the public interest in keeping the services of a good doctor” nowadays in the UK? Sorry to hear that. They have been at the forefront of many issues, like the dangers of tobacco and asbestos. What happened?

    1. Having decided that this doctor’s fitness to practise medicine was impaired, the Panel were faced with a choice of placing (restrictive) conditions on her work, suspending her for up to 1 year or erasing her from the Medical Register. In making its choice, the Panel of three (with at least one physician and at least one lay person) had to balance the Public Interest with the doctor’s own interest and to decide whether or not she had shown insight into her wrongdoing and remedied her misconduct in the interim between its occurrence and the hearing. The Public Interest is defined as protecting the safety of patients, upholding proper standards of conduct and behavior and maintaining public confidence in the profession.
      For the doctor, the effects of a year’s suspension will be the loss of a year’s salary and the requirement to appear before a Review Panel in 11 months to prove she has gained insight, remedied her misconduct, kept up to date educationally, and that there is negligible risk of recurrence. If she is allowed to return to unrestricted practice (by no means guaranteed), it might prove hard to get back onto a training programme (hotly contested) or find other work with what is, in the UK, a virtual monopoly employer, the National Health Service.

      This seems to me quite a lot tougher than the somewhat anodyne agreements frequently announced between ORI and a perpetrator.

  4. That initial retraction doesn’t make sense. How could P-values change enormously when moving to a newer version of a program?

  5. When that program is Excel — caveat emptor. Problems with numerical calculations in Excel are many and legendary.

    In the R statistical software package:

    > -3^2
    [1] -9

    In Excel, put this expression in a cell

    =-3^2

    and you will see

    9
    That’s a spectacular failure. Mathematical operator precedence does not seem to be a high priority item in the coding departments at Microsoft.

    Scientists beware. Reliance on Excel may land you on one of these Retraction Watch pages – consult with a statistician who uses reliable software for better mileage.
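The precedence quirk the commenter describes is easy to check against languages that follow the standard mathematical convention. A minimal Python sketch (Python, like R, applies exponentiation before unary minus, whereas Excel’s formula language negates first):

```python
# Operator precedence: unary minus vs exponentiation.
# Python and R apply exponentiation first, so -3^2 means -(3^2) = -9.
# Excel's formula language negates first, so =-3^2 evaluates to (-3)^2 = 9.
print(-3 ** 2)    # -9, parsed as -(3 ** 2)
print((-3) ** 2)  # 9, what Excel's =-3^2 effectively computes
```

The divergence only bites with a leading unary minus; `1 - 3 ** 2` gives -8 in both Python and Excel, since binary subtraction has lower precedence everywhere.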

  6. With Excel, it is simply not possible to coherently see exactly what they are doing. They are probably using some formula linked to some cell for calculations. If more rows were added later, they might have forgotten to extend the range of cell inclusion. Of course, with Excel, cells that are empty are considered to have 0 values, which may or may not be appropriate – for most programs, a blank is considered a missing value, not a 0.

    Excel tends to be used only by people who have never heard of the program’s notorious inaccuracy. As Steve McKinney notes above, there are fundamental math issues with Excel as well. Don’t use it.

  7. In this case we can even recalculate the p-value for prevalence of MVO with Fisher’s exact test using the data shown in the now-retracted abstract. If I understood the data correctly and didn’t mess up otherwise, I find p=0.009351. And even just looking at the data (9/20 MVO with adenosine treatment, 17/20 without), the difference “feels” significant. For the size of MVO there isn’t enough data given to calculate a p-value, but the difference in the reported mean masses looks too large to yield a p-value as high as 0.34. Excel has its problems, but I don’t think you can chalk all this up to Microsoft.
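The commenter’s recalculation can be reproduced from the hypergeometric formula using only the standard library; a minimal sketch, assuming the same 2×2 table (9/20 MVO with adenosine, 17/20 in controls). Note the 0.009351 figure appears to be one-tailed; the conventional two-sided p-value is roughly double, and still far from both the originally reported 0.0043 and the “recalculated” 0.15:

```python
from math import comb

def fisher_exact_p(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table (with the same margins) that is
    no more likely than the observed one. Uses exact integer arithmetic
    for the tie comparison, so no floating-point tolerance is needed."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c              # fixed margins
    def weight(k):                         # unnormalised P(top-left == k)
        return comb(col1, k) * comb(n - col1, row1 - k)
    lo = max(0, row1 - (n - col1))         # feasible range for top-left cell
    hi = min(row1, col1)
    obs = weight(a)
    tail = sum(w for w in (weight(k) for k in range(lo, hi + 1)) if w <= obs)
    return tail / comb(n, row1)

# MVO prevalence from the retracted abstract: 9/20 with adenosine (45%),
# 17/20 in controls (85%)
p = fisher_exact_p([[9, 11], [17, 3]])
print(p)  # ~0.0187 two-sided; with equal group sizes each tail is ~0.00935,
          # matching the commenter's one-sided 0.009351
```

Whichever tail convention is used, the difference in prevalence is significant at the 5% level, supporting the commenter’s point that the discrepancy cannot be blamed on Excel alone.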

    1. Thanks for this note. What’s amazing is that this value is not even slightly close to either of the values that they had calculated. In addition to using a better program, the PERSON doing the calculation should have some idea what they are doing.

  8. It seems a false economy that organisations don’t buy multiple licences for Stata, which is a powerful statistical program, relatively easy to use, and only a few hundred dollars per copy when bought in bulk. Alternatively, this type of analysis is so simple that a good statistician could do it quickly, and would better understand the need for diagnostic checks of model adequacy. I wonder how many analyses like these are wrong but never identified.

  9. I also have huge problems with Excel; it is not robust enough and has many bugs. I work with large datasets, and when I turn one filter on (for example, sorting p-values) it works fine at first, but when I turn another filter on (for example, sorting ORs from smallest to largest) it confuses the rows, and the results for a given marker (in this case, a SNP) are wrong. Similar mistakes arise when I copy large amounts of data from one Excel file to another. And there are also frequent crashes… It’s driving me crazy.

  10. But a p going from P=0.027 to P=0.34 seems like quite a service pack update. It would be interesting to hear Microsoft’s take on this. It implies the software is a random number generator and the source does not seem totally credible.
