Authors have retracted a highly cited JAMA Psychiatry study about depression after failing to account for some patient recoveries, among other mistakes.
It’s a somewhat unusual notice — it explains that the paper has been retracted and replaced with a new, corrected version.
The study, which included 452 adults with major depressive disorder, concluded that cognitive therapy plus medication works better to treat depression than pills alone. But after it was published, a reader pointed out that some of the numbers in a table were incorrect. The authors reviewed the data and redid their analysis, and discovered “a number of pervasive errors.”
The notice (termed “notice of retraction and replacement”) explains the consequences of those errors:
These errors, once corrected, have not changed the final conclusion of this study—that cognitive therapy combined with antidepressant medication treatment enhanced rates of recovery relative to treatment with medication alone. However, the corrections do result in changes to numerous data in the Abstract, text, Table, and Figures. Three of the findings that were previously reported as statistically significantly different are no longer significantly different: the interaction between recovery rate and severity, the number of patients who dropped out of each group, and the number of serious adverse events in each group. Because of these errors, we have reconducted our analyses with the correct data, have corrected all findings and interpretations, and have requested that JAMA Psychiatry retract and replace the original article.
They explain that the biggest issue was that they miscounted the patients who had recovered:
The major problem with our original reported data was that the automated algorithm that we used to track patient progress sometimes failed to recognize remissions or recoveries that occurred or it recognized only later instances. This resulted in missing recoveries in 13 patients. There were 5 additional patients who recovered in the combined treatment group (the correct number is 170 not 165), and 8 additional patients who recovered in the medications only group (the correct number is 148 not 140). In addition, 282 patients recovered earlier and 16 patients recovered later than was initially recognized in our analyses. Differences between the treatment conditions were largely unaffected but the median time to recovery was reduced. In the process of hand-checking our data, we also found 2 patients who were credited with remissions that they did not achieve (1 in each group). Correcting these errors affected the cell sizes reported in Figures 2 and 3 and the exact values of the tests reported and also resulted in a change in the interaction between severity and treatment condition, which is no longer a statistically significant interaction.
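The notice does not describe the algorithm itself, but the failure mode it reports, missing some remissions or recoveries entirely and crediting others only at a later point, is a familiar one in longitudinal scoring code. Here is a minimal, purely hypothetical sketch in Python; the threshold, window length, and function names are assumptions made for illustration, not details from the study:

# Hypothetical illustration only; this is not the study's actual algorithm.
# Assume "recovery" means symptom scores staying at or below a remission
# threshold for a fixed number of consecutive assessments.

THRESHOLD = 8   # assumed remission cutoff on a symptom scale
WINDOW = 6      # assumed number of consecutive low-score assessments

def first_recovery(scores):
    """Return the index where recovery is first achieved, or None."""
    run = 0
    for i, s in enumerate(scores):
        run = run + 1 if s <= THRESHOLD else 0
        if run == WINDOW:
            return i - WINDOW + 1  # start of the first qualifying run
    return None

def last_recovery_buggy(scores):
    """Buggy variant: keeps scanning and reports only the last qualifying run,
    so a patient who recovers, relapses briefly, and re-recovers is dated late."""
    hit, run = None, 0
    for i, s in enumerate(scores):
        run = run + 1 if s <= THRESHOLD else 0
        if run == WINDOW:
            hit = i - WINDOW + 1   # overwrites the earlier, correct date
    return hit

scores = [20, 7, 6, 5, 6, 7, 8, 15, 6, 5, 4, 6, 7, 8]
print(first_recovery(scores))       # 1: recovery actually began early
print(last_recovery_buggy(scores))  # 8: recovery credited much later

Whatever the study's real criteria were, errors of this shape tend to surface only when individual trajectories are hand-checked, which is what the authors describe doing.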
The correct version of the article now appears in JAMA Psychiatry, with the retracted version attached as a supplement. “Effect of cognitive therapy with antidepressant medications vs antidepressants alone on the rate of recovery in major depressive disorder” has been cited 25 times since it was published in 2014, according to Thomson Reuters Web of Science (which has labeled it a “highly cited paper,” based on the expected rate of citations in that particular field).
The notice describes more errors in the paper, including a patient who was classified with the wrong diagnosis at intake, and a miscounting of patients who met the criteria for chronic depression and recurrent depression, were unemployed, and had previously used antidepressant medication.
The retraction notice explains how the errors came to light:
We apologize to the journal and its readers for the errors that were present in our original publication, and we appreciate the action taken by the reader who alerted us to the undercounts in the Table just described that prompted our complete review of the data, analyses, and conclusions. The relevant data and findings have now been corrected, and the Abstract, text, Table, and Figures 1, 2, and 3 in our article have been corrected and replaced online with a new supplement that includes a version of the original retracted article showing the original errors and a version of the replacement article showing what was corrected.
It’s not often we see such a transparent explanation of what went wrong with a paper — an example of “doing the right thing.”
The findings may have had an impact on clinical practice — a few months after the study appeared, Evidence-Based Mental Health published a commentary about it by Sharon C Sung at Duke-NUS, which discusses how it may have affected her treatment choices for patients:
These results suggest that high-quality medication management may be sufficient to obtain recovery for patients with less severe and/or chronic MDD. The more time- and cost-intensive ADM plus CT strategy may be best suited for patients with severe, non-chronic MDD. Pending replication of these findings, I would be more likely to refer patients with severe depression of less than 2 years duration to receive combined treatment as a first-line intervention. For those with less severe depression of longer duration, I would wait to see if they respond to ADM prior to augmenting with psychotherapy.
We’ve found three other corrections for last author Robert Gallop, who works at West Chester University. A correction note on “Preventing Depression among Early Adolescents in the Primary Care Setting,” published in Journal of Abnormal Child Psychology, explains “a paragraph describing intervention effects on explanatory style for negative events was accidentally left out.” The paper — on which Gallop is the last author — has been cited 89 times.
Gallop is the third of six authors on “Combined Medication and CBT for Generalized Anxiety Disorder With African American Participants: Reliability and Validity of Assessments and Preliminary Outcomes” in Behavior Therapy; according to the correction, there was an error in the statistical analysis, and in a few of the statistics provided on the sample. That paper has been cited four times. Gallop is the fourth of five authors on “A Tutorial on Count Regression and Zero-Altered Count Models for Longitudinal Substance Use Data,” a highly cited paper in Psychology of Addictive Behaviors with a relatively minor correction: an error in a URL in the paper. The paper has been cited 47 times.
We’ve reached out to first author Steven Hollon, who works at Vanderbilt University, to Gallop, and to the journal for more information on the retraction. We’ll update this post with anything else we learn.
Update, April 22 11:50 am:
Annette Flanagin, the Executive Managing Editor for The JAMA Network, told us more about why the article was both retracted and corrected:
As we stated when we announced our policy on use of retraction and replacement in an Editorial in 2015, “Retractions are typically reserved for articles that have resulted from scientific misconduct, such as fabrication, falsification, or plagiarism, or from pervasive error for which the results cannot be substantiated.” (See reference below.) In this case, inadvertent errors resulted in changes to some of the findings, although the general conclusions of the study are unchanged. As we also noted in that Editorial, errors do occur, and if the errors are pervasive and result in a major change in the direction or significance of the findings, interpretations, and conclusions, and the science is considered reliable, we will consider retraction and replacement as an effective approach to ensuring transparency and an accurate scientific record.
Here’s the citation and link to the Editorial noted above: Heckers S, Bauchner H, Flanagin A. Retracting, Replacing, and Correcting the Literature for Pervasive Error in Which the Results Change but the Underlying Science Is Still Reliable. JAMA Psychiatry. 2015;72(12):1170-1171. doi:10.1001/jamapsychiatry.2015.2278. http://archpsyc.jamanetwork.com/article.aspx?articleid=2466828
She reminded us that JAMA has invoked their retraction and replacement policy before — for a small study on using gamma rays to treat OCD that we reported on last year.
We spoke with Hollon by phone; he provided more thoughts on the retraction. He told us that the researchers had a data management service construct a system to track patients, which included the automated algorithm mentioned in the retraction notice. Hollon called it,
a beautiful machine that wasn’t handled with sufficient care. All of this is on me…I should have checked every algorithm.
He added:
It was kind of like turning a Maserati over to a teenager, and I drove it over the edge.
He said that he plans to be more careful in the future:
I’m personally embarrassed that this happened, but on the other hand, I’m enthusiastic and energized…I learned something in the process.
This is interesting, esp. that James Coyne has previously had some qualms about the study: http://blogs.plos.org/mindthebrain/2014/10/15/benefit-adding-psychotherapy-treatment-antidepressants/
BTW JAMA should do something about the link to the retraction notice added to the original study: it’s black, barely visible, and the entire paper remains below.
Agreed. It took some real hunting to find the actual notice.
retract and replace the original article
You need a stronger term than “mega-correction”.
I always wonder about these studies where half the numbers, tables, and figures change yet the conclusions purportedly remain unchanged. How fortuitous!
And not just their numbers. As in all such data corrections in RCTs, the knock-on effects on meta-analysis numbers, when the study is included (as this one is), are horrendous. Should they not be corrected?
The problems and mistakes in data collection in the retracted Hollon et al. paper are extremely concerning. However, the basic design of this study is so problematic that it can easily lead to erroneous data to begin with, quite apart from whether those data are collected properly.
In this article: http://f1000research.com/articles/4-639/v1
Berger D. Double blinding requirement for validity claims in cognitive-behavioral therapy intervention trials for major depressive disorder. Analysis of Hollon S, et al., Effect of cognitive therapy with antidepressant medications vs antidepressants alone on the rate of recovery in major depressive disorder: a randomized clinical trial [version 1; referees: 1 approved]. F1000Research 2015, 4:639 (doi: 10.12688/f1000research.6954.1)
I describe how the conclusions in Hollon et al. could easily have been due to bias in the cognitive therapy group, because the study was neither single- nor double-blinded, had no placebo control group, and handicapped the medication arms, which require double blinding to show efficacy (note: single-blind is defined as the subject being blind, not the rater):
You cannot have an unblinded study when the endpoints are SUBJECTIVE, as they are in a study of depression. A small amount of bias in the unblinded Hollon et al. study could easily add up, given the large N, to erroneous results of efficacy for cognitive therapy. Dr. Hollon himself was invited to and did make a rebuttal to this paper; I encourage readers of this blog to read his rebuttal to get a feel for his logic regarding the design and interpretation of a psychotherapy clinical trial for depression.
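For a rough sense of scale, here is a purely illustrative calculation; the bias size, group sizes, and alpha are assumptions made for the sketch, not numbers taken from Hollon et al.:

# Back-of-the-envelope sketch only; assumed numbers, not data from the study.
from math import sqrt
from statistics import NormalDist

n_per_group = 226           # roughly half of 452 patients, for illustration
bias_sd = 0.20              # assumed small systematic rater bias, in SD units
se = sqrt(2 / n_per_group)  # standard error of a standardized mean difference
z = bias_sd / se
p_spurious = 1 - NormalDist().cdf(1.96 - z)  # upper-tail chance of p < .05
print(f"z = {z:.2f}; chance the bias alone yields 'significance' ~ {p_spurious:.0%}")

Under those assumed numbers, a bias of a fifth of a standard deviation would by itself produce a “significant” group difference more often than not.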
The FDA would never approve an antidepressant that did not have confirmatory double-blind testing and placebo control, and neither would medical practitioners believe in such a drug. A drug study must have an exit analysis that shows the blind was maintained, and if it was not, the trial should be invalidated. And while some drugs may also carry a poor blind because of side effects, etc., the crucial point is that many drugs can feasibly be blinded, but no psychotherapy can.
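For readers unfamiliar with the idea, one simple form of such an exit analysis is to ask participants at the end of the trial which arm they think they were in and test whether their guesses beat chance. The sketch below uses made-up counts and an ordinary chi-square test; it illustrates the concept and is not any particular study's method:

# Minimal sketch of an exit analysis of blinding (illustrative counts only).
from scipy.stats import chi2_contingency

# Rows: actual arm (drug, placebo); columns: participant's guess (drug, placebo, don't know).
guesses = [[60, 25, 15],   # drug arm
           [30, 50, 20]]   # placebo arm
chi2, p, dof, _ = chi2_contingency(guesses)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # a small p suggests the blind was not maintained

More formal measures, such as James’ blinding index, exist, but the point is the same: the success of the blind has to be tested, not assumed.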
Clinical trial science cannot allow a double standard for different modalities with differing levels of control, nor mix these (i.e., unblinded drug arms, which usually require blinding to show superiority, and cognitive therapy, which is unblinded and thus poorly controlled) in the same study.
I looked at the relationships between the authors and Editors of the Hollon et al. article. I found a number of connections that would warrant a public notice from JAMA Psychiatry stating that they were aware of them and what mechanisms they employed to avoid conflicts of interest.
These are some of the relationships:
Dr. Stephan Heckers, the director of the Department of Psychiatry at Vanderbilt University, where Hollon is an Associate Professor of Psychiatry, also happens to be the Editor-in-Chief of JAMA Psychiatry now, and was on the Editorial Board of JAMA Psychiatry at the time of Hollon et al.’s submission.
In the same issue, JAMA Psychiatry also published an Editorial by Thase praising the Hollon study: Thase ME. Large-Scale Study Suggests Specific Indicators for Combined Cognitive Therapy and Pharmacotherapy in Major Depressive Disorder. JAMA Psychiatry. 2014;71(10):1101-1102.
Thase stated, “the article by Hollon and colleagues in this issue of JAMA Psychiatry describes the main findings of one of the most important studies ever undertaken to evaluate the merits of combining psychotherapy and pharmacotherapy for treatment of major depressive disorder.”
Thase and Hollon have previously worked together on the efficacy of medication and cognitive therapy in depression:
1. Thase ME, Friedman ES, Biggs MM, Wisniewski SR, Trivedi MH, Luther JF, Fava M, Nierenberg AA, McGrath PJ, Warden D, Niederehe G, Hollon SD, Rush AJ. Cognitive therapy versus medication in augmentation and switch strategies as second-step treatments: a STAR-D report. Am J Psychiatry. 2007 May;164(5):739-52.
2. Hollon SD, Jarrett RB, Nierenberg AA, Thase ME, Trivedi M, Rush AJ. Psychotherapy and medication in the treatment of adult and geriatric depression: which monotherapy or combined treatment? J Clin Psychiatry. 2005 Apr;66(4):455-68. Review.
Later, when Heckers was Editor-in-Chief, JAMA Psychiatry published an article, with Hollon as second author, on the same cognitive therapy vs medication question:
Erica S. Weitz, MA; Steven D. Hollon, PhD; et al. Baseline Depression Severity as Moderator of Depression Outcomes Between Cognitive Behavioral Therapy vs Pharmacotherapy: An Individual Patient Data Meta-analysis. JAMA Psychiatry. Published online September 23, 2015.
The logic of this meta-analysis, which looked at studies that compared antidepressants, with or without blinding, against unblinded CBT, is not scientifically valid. Blinding antidepressants handicaps them when compared with unblinded CBT, and administering antidepressants unblinded deprives them of the conditions under which they show efficacy (blinded, compared with placebo, and without a confounding unblinded CBT arm as in this study), which is another kind of handicap.
Getting back to the f1000 article of mine linked above: this was first submitted to JAMA Psychiatry, and the only Editor interfacing with me was Dr. Heckers. It seemed strange to me that the Editor-in-Chief needed to handle a short communication article that was only 1 page long. However, the article was critical of a published JAMA article and of a member of Dr. Heckers’ staff. It is clearly a conflict of interest for an Editor who is also the Department Chair of an author whose paper is being criticized to review the critique paper.
While the referees noted some minor problems with the paper, many of them also noted its importance and value. I will list some of the independent reviewers’ comments received from Dr. Heckers by e-mail on June 18, 2015 (the letter is on file at Retraction Watch):
1. “By and large the author’s emphasis on problems in psychotherapy studies with blinding seems justified.”
2. “The author points out that clinical trials for CBT cannot be carried-out under double blind conditions as would be required of pharmacotherapy or other somatic therapies and thus the rigor of CBT interventional studies would be quite different from those modalities that can be studied under double blinded conditions.”
3. “This is an interesting study questioning the validity of the research in psychiatric issues where doubling blinding is impossible.”
4. “The study is important since there is a tendency to interpret the studies done in this field without the proper cautiousness needed due to the methodological obstacles.”
5. “The present study gives a good example in how a recent study published in a respected journal could be overemphasizing the results obtained and how a large N in a biased study only emphasis the bias and not the true effect.”
6. “Since this seems to be a general problem in the field of psychiatric research there are good reasons to publish the present study.”
7. “The author criticizes (with good reason) that blinding of the assessor of symptoms is called ‘single-blind.’ ”
It is surprising to me that none of the multiple authors of Hollon et al., the JAMA Psychiatry editorial reviewers, or the peer reviewers were able to find the “pervasive” problems in the article resulting in retraction. I am concerned that something is not right in the network of relationships between researchers that might bias our publication base.
Doug Berger, M.D., Ph.D.
U.S. Board Certified Psychiatrist