Retraction Watch

Tracking retractions as a window into the scientific process

Error in one line of code sinks cancer study


Authors of a 2016 cancer paper have retracted it after finding an error in one line of code in the program used to calculate some of the results.

Sarah Darby, last author of the now-retracted paper from the University of Oxford, UK, told Retraction Watch that the mistake was made by a doctoral student. When the error was discovered, Darby said, she contacted the Journal of Clinical Oncology (JCO), explained the issue, and asked whether the journal would prefer a retraction or a correction. JCO wanted a retraction, and she complied.

The journal allowed the authors to publish a correspondence article outlining their new results.

Here’s the lengthy retraction notice, published online last month:

During further analyses of the data published in “Inferring the Effects of Cancer Treatment: Divergent Results From Early Breast Cancer Trialists’ Collaborative Group Meta-Analyses of Randomized Trials and Observational Data From SEER Registries” (J Clin Oncol 34:803-809, 2016), the authors discovered an error in one line of code in the computer program used to calculate the results presented in the right side of Tables 3 and A5.

Specifically, the error assigned the code “unknown” to the additional variables available in the SEER data set from 1990 onward for the majority of women. These extra variables, therefore, were not adequately taken into account in the columns labeled “Registrations Between 1990 and 2008 With Additional Stratification” in the right side of Tables 3 and A5 in the article. Results in the other tables in the article are not affected by this error.

Corrected versions of Tables 3 and A5 are presented in the accompanying PDF. The most important changes are for women with node-positive disease who were given mastectomy. Among these women, the breast cancer death rate ratio for those who were irradiated compared to those who were not was given in the original Table 3 as 1.32 (95% CI, 1.28 to 1.36), but in the corrected version it is now 0.89 (95% CI, 0.86 to 0.93), while the death rate ratio for all causes, which was given in the original Table 3 as 1.18 (95% CI, 1.15 to 1.22), is now 0.85 (95% CI, 0.81 to 0.88). Similar changes occur in the corrected version of Table A5. Other results in the right side of these two tables and in the related footnotes have also changed, but by smaller amounts.

In the corrected analyses, important differences between the randomized Early Breast Cancer Trialists’ Collaborative Group (EBCTCG) data and the observational SEER data still remain, even with the additional stratification that can be performed for women registered from 1990. For example, in footnote f of Table 3, mortality from breast cancer in women with one to three positive nodes who received mastectomy is significantly higher with radiotherapy in the observational SEER data, in direct contrast to the EBCTCG data in that subgroup. The differences are summarized elsewhere.(1)

Although the overall conclusion is still valid (ie, that the observational SEER data can be misleading regarding causal effects of treatment), the incorrect results from the analyses of the SEER data for 1990 to 2008 played a major role in the original article. To mitigate any confusion due to this unfortunate error, the authors have unanimously requested that the article be fully retracted and that the updated findings be published separately in the Correspondence section of the journal. The authors apologize to Journal of Clinical Oncology and to its readers and reviewers.
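The mechanism the notice describes — one line that wrongly assigns "unknown" to stratification variables, collapsing most records into a single stratum — is easy to reproduce. Here is a minimal, hypothetical Python sketch; the variable names and the inverted-comparison bug are assumptions for illustration, not the authors' actual program:

```python
# Hypothetical illustration only (not the retracted study's code).
# SEER added extra variables from 1990 onward; records before then
# genuinely lack them and should be coded "unknown".

records = [
    {"year": 1988, "grade": "II"},
    {"year": 1995, "grade": "III"},
    {"year": 2001, "grade": "I"},
]

def code_extra_variable_buggy(rec):
    # BUG: the comparison is inverted, so records from 1990 onward --
    # the majority -- are coded "unknown" instead of keeping their value.
    if rec["year"] >= 1990:   # should be: rec["year"] < 1990
        return "unknown"
    return rec["grade"]

def code_extra_variable_fixed(rec):
    # Correct: only pre-1990 records, where the variable is
    # unavailable, are coded "unknown".
    if rec["year"] < 1990:
        return "unknown"
    return rec["grade"]

print([code_extra_variable_buggy(r) for r in records])
# -> ['II', 'unknown', 'unknown']  (post-1990 strata collapsed)
print([code_extra_variable_fixed(r) for r in records])
# -> ['unknown', 'III', 'I']
```

With the buggy coding, a stratified analysis no longer adjusts for the variable at all for post-1990 registrations — which is how a single flipped comparison can move a rate ratio from 1.32 to 0.89, as in the corrected Table 3.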

The original paper, which was published online in January, has been cited once, according to Thomson Reuters Web of Science.

Darby and her colleagues used patient data from the “Surveillance, Epidemiology, and End Results” (SEER) cancer registry to compare estimates of the effects of radiotherapy after breast cancer derived from observational data with those from randomized trials.

Both the retracted study and the new analysis come to the same conclusion: Results from randomized and non-randomized studies can differ. As the authors note in their updated analysis:

We conclude, as have others, that nonrandomized comparisons are liable to provide misleading estimates of treatment effects. Therefore, they need careful justification every time they are used.

We’re giving Darby and colleagues a “doing the right thing” nod for issuing a lengthy notice and taking action after spotting the mistake.

Hat tip: Rolf Degen
