A psychiatry journal has retracted a 2015 meta-analysis on the effectiveness of ketamine for depression after readers found that the article double-counted patients in some studies, thereby inflating the apparent benefits of the drug.
The article, “Efficacy of ketamine in bipolar depression: systematic review and meta-analysis,” was published in the Journal of Psychiatric Practice by a group from the United States and England. But a pair of researchers in Sweden noticed the duplication — and what seems to have been a rather slapdash approach to the work — and pushed to have the paper retracted.
According to the results section of the abstract:
Of the 721 articles that were screened, 5 studies that enrolled a total of 125 subjects with bipolar depression (mean age, 44.6±4.3 y and 65.6% females) were included in the systematic review; 3 randomized controlled trials (69 subjects) were included in the meta-analysis. The meta-analysis showed significant improvement in depression among patients receiving a single dose of intravenous ketamine compared with those who received placebo (SMD=-1.01; 95% confidence interval, -1.37, -0.66; P<0.0001).
But the retraction notice tells a different story.
The Editor received a letter from Joakim Ekstrand, PhD and Pouya Movahed, PhD, MD from Lund, Sweden, expressing concerns about an article in the November, 2015 issue of the Journal entitled “Efficacy of ketamine in bipolar depression: Systematic review and meta-analysis”. Drs. Ekstrand and Movahed pointed out that the Parsaik et al meta-analysis referred to 3 randomized controlled trials, in which a total of 69 patients were enrolled, yet careful review of these publications revealed that all but 3 of the participants in the third study were identical to the participants in the first and second studies. Hence the effect size reported was erroneous due to the incorrect number of participants reported. The Editor has conveyed this information to the first author of the 2015 publication, who replied that the error was unintended; he was informed that the paper would be retracted from the Journal.
We appreciate the diligence of the authors of the letter to the editor, and the Parsaik et al paper in the November, 2015 issue of the Journal of Psychiatric Practice is hereby retracted.
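To see why double counting matters, consider a fixed-effect, inverse-variance pooling of standardized mean differences. The sketch below uses made-up effect sizes and arm sizes (not the actual trial data): re-entering an overlapping cohort as a "third study" leaves the pooled estimate essentially unchanged but shrinks its standard error, so the confidence interval narrows and the result looks more significant than it is.

```python
import math

def smd_se(d, n1, n2):
    """Approximate standard error of a standardized mean difference."""
    n = n1 + n2
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * n))

def pooled(studies):
    """Fixed-effect inverse-variance pooling: returns (pooled SMD, pooled SE)."""
    weights = [1 / smd_se(d, n1, n2) ** 2 for d, n1, n2 in studies]
    est = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

# Two genuinely independent trials: (SMD, n_ketamine, n_placebo), hypothetical numbers.
independent = [(-1.0, 9, 9), (-1.0, 8, 7)]
# The same two trials plus a "third" publication re-reporting most of the same patients.
double_counted = independent + [(-1.0, 17, 16)]

est1, se1 = pooled(independent)
est2, se2 = pooled(double_counted)
# Same pooled effect, but the duplicated cohort deflates the standard error.
assert abs(est1 - est2) < 1e-9
assert se2 < se1
```

This is only an illustration of the arithmetic, not a recomputation of the retracted analysis.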
This isn’t the first retraction of a paper that looked at ketamine and depression. One such paper was the subject of this 2017 post, which reported on an investigation by Yale University.
The first author of the newly retracted meta-analysis, Ajai Parsaik, told us:
This was a systematic review and meta-analysis of all the available studies on “ketamine for bipolar depression.” In the systematic review, we included five studies. In the meta-analysis portion, we analyzed 3 available RCTs (Lally et al, Diazgranados et al, and the Zarate et al). Unfortunately, we were not sure from the manuscript by Lally et al whether the subjects were the same as included in their previous study. This was an inadvertent error which was missed by authors as well as during the critical and comprehensive review phase by the journal. It was an unintentional error; therefore, we decided to retract it (the decision was mutual by journal and authors). We did contact the authors of original studies including duplicate patients. We are currently working on redoing the whole paper which also includes any updated studies since then.
The paper has been cited 11 times, according to Clarivate Analytics’ Web of Knowledge.
Ekstrand gave us the backstory in an email:
We had an RCT running that compared the efficacy of ECT [electroconvulsive therapy] and ketamine in severely depressed inpatients, and I was keeping an eye on the development of the field, and a bit annoyed in general about the trend to do meta-analyses and reviews instead of proper studies.
Since I was preparing our next study with ketamine for bipolar patients (power calculations and such), I had an overview of which studies had included patients with that particular diagnosis and knew exactly how few RCTs had been done.
I also knew (from previous personal confusion) that those clinical trials (the RCTs) had generated a number of publications reporting secondary outcomes (mainly biomarker-related analyses), and that referencing the original publication was often done in a relaxed manner, at least sometimes without, for example, using the clinicaltrials.gov identifier.
As for the particular study “re-used” in the meta-analysis, it is explicitly stated that “effects of ketamine on general depressive symptoms in the majority of BD subjects presented here (33/36, 92%) were previously reported (44, 45)” (from Lally et al, Transl Psych 2014).
So, while including this third study in the meta-analysis was certainly done inadvertently I think it highlights a few problems:
The authors did not seem to know the literature [well enough] prior to writing the review / meta-analyses, and what they lacked in being up to date wasn’t compensated for by “hard work” (actually reading the studies, thoroughly…).
Whoever peer-reviewed their study didn’t do a decent job.
And, again, in a broader sense I think it indicates a general tendency that more and more reviews and meta-analyses are coming out simply because it is an easy way to get academic credit for work that is so much less cumbersome than actually contributing by doing clinical trials. And even if those three studies with a total of 69 study participants all were unique individuals, doing a meta-analysis of such a small population of studies, all from the same group, really doesn’t sound very appealing.
Fraught meta-analyses have reached epidemic proportions, according to John Ioannidis, who argues that not only are they frequently superfluous, they teem with undeclared conflicts of interest and misleading claims, particularly about medications.
As to how often other researchers cite those publications incorrectly, I don’t have a clue. I don’t really think that is a problem. The proliferation of one clinical trial into many publications – in which it is not very clearly stated that these are patients described in previous publications – might confuse readers into believing many more clinical trials have actually been done than is the case. A simple solution (or at least an attempt at one) would be to always include an NCT number, which would make cross-checking so much easier, right?
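The cross-check Ekstrand suggests is mechanical once every included publication records its registry identifier. The sketch below uses hypothetical citations and NCT numbers to show the idea: group publications by identifier and flag any identifier shared by more than one paper, since that signals the same cohort may be entering the analysis twice.

```python
from collections import defaultdict

# Hypothetical records for publications screened for a meta-analysis;
# "registry_id" holds the ClinicalTrials.gov NCT number, when reported.
studies = [
    {"citation": "Trial A (2010)", "registry_id": "NCT00000001"},
    {"citation": "Trial B (2012)", "registry_id": "NCT00000002"},
    {"citation": "Secondary analysis of Trial A (2014)", "registry_id": "NCT00000001"},
]

def overlapping_cohorts(records):
    """Group publications by registry identifier and return any identifier
    that appears more than once -- a sign the same cohort is being re-used."""
    by_id = defaultdict(list)
    for r in records:
        if r.get("registry_id"):
            by_id[r["registry_id"]].append(r["citation"])
    return {k: v for k, v in by_id.items() if len(v) > 1}

# Flags NCT00000001, shared by Trial A and its secondary analysis.
print(overlapping_cohorts(studies))
```

The catch, as Ekstrand notes, is that this only works when publications actually report the identifier, which is exactly what relaxed referencing omits.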
Ekstrand also said the journal didn’t exactly burn rubber on the retraction:
Time-wise, it took about half a year from submitting a letter to the editor to retraction. Going back and checking my e-mails, I see I actually bitched a bit about it taking some time:
“We have submitted a short commentary on a meta-analysis published in your journal. In our commentary, we point out that the results from that meta-analysis are not valid, as the authors inadvertently included several studies with the same patient cohort. As a consequence, the number of study subjects is inflated (doubled, in fact), and the result fallacious. We submitted our comment on [Dec. 31, 2017]. We sympathise with the amount of (pro bono) work that goes into editing a journal, but feel our submitted commentary is so straightforward and unproblematic that four months for a decision is overdue. I look forward to hearing from you soon to resolve this issue.”
Ekstrand sent that email in mid-April.