How should journals update papers when new findings come out?


When authors get new data that revise a previous report, what should they do?

In the case of a 2015 lung cancer drug study in the New England Journal of Medicine (NEJM), the journal published a letter to the editor with the updated findings.

Shortly after the paper was published, the drug’s manufacturer, Clovis Oncology, released new data showing the drug wasn’t quite as effective as it had seemed. Once the authors included the new data in their analysis, they adjusted their original response rate of 59% — hailed as one of a few “encouraging results” in an NEJM editorial at the time of publication — to 45%, as they write in the letter. One of the authors told us they published the 2015 paper using less “mature” data because the drug’s benefits appeared so promising, raising questions about when to publish “exciting but still evolving data.”

It’s not a correction, because the original paper has not been changed; the paper doesn’t even carry a flag noting that it has been updated. But among the online letters about the paper is one from the authors, “Update to Rociletinib Data with the RECIST Confirmed Response Rate,” which provides the new data and the backstory:

In our Journal article that was published on April 30, 2015,1 we described the activity of rociletinib, an epidermal growth factor receptor (EGFR) inhibitor with specificity for the T790M mutation, in patients with EGFR mutation–positive lung cancer in the phase 1 TIGER-X trial. The key finding was a response rate of 59% (95% confidence interval [CI], 45 to 73) among 46 patients with biopsy-proven T790M-mediated resistance to previously administered EGFR inhibitors. In November 2015, Clovis Oncology issued a press release that contained updated data from a pooled cohort of patients from TIGER-X and TIGER-2 (another phase 2 study of rociletinib), stating that the rate of confirmed response was 28 to 34%.2 Since these response rates differed substantially, the academic authors of the Journal article undertook an independent updated analysis that included the patients whose data were reported in that article.
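
For readers who want to sanity-check that interval, here is a minimal sketch in Python, assuming 27 responders out of 46 patients — a count we back-calculated from the reported 59%, since the letter gives only the percentage. A plain normal-approximation interval on that proportion lands within a point of the quoted 45 to 73; the letter does not say which method the authors actually used.

```python
# Back-of-the-envelope check of the reported response rate and 95% CI.
# ASSUMPTION: 27 responders out of 46 patients (27/46 ~ 59%); the letter
# reports only the percentage, so the count here is inferred, not quoted.
from math import sqrt

from scipy.stats import beta, norm

k, n = 27, 46                  # assumed responders / evaluable patients
p_hat = k / n                  # point estimate ~ 0.587

# Wald (normal-approximation) interval -- the simplest textbook choice
z = norm.ppf(0.975)
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - half_width, p_hat + half_width)

# Clopper-Pearson "exact" interval via the beta distribution
exact = (beta.ppf(0.025, k, n - k + 1), beta.ppf(0.975, k + 1, n - k))

print(f"point estimate: {p_hat:.0%}")                        # 59%
print(f"Wald 95% CI:    {wald[0]:.0%} to {wald[1]:.0%}")      # ~44% to 73%, close to the quoted 45 to 73
print(f"exact 95% CI:   {exact[0]:.0%} to {exact[1]:.0%}")    # ~43% to 73%
```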

The letter goes on to explain the large discrepancy between the response rates reported by the company and the one in the April 2015 NEJM paper, “Rociletinib in EGFR-Mutated Non–Small-Cell Lung Cancer”:

TIGER-X had enrolled 130 patients when we reported the initial results,1 but the trial ultimately included 612 patients. The median follow-up reported in the article was 10.5 weeks; given scan intervals of 6 weeks, at least half the patients had undergone only one response-evaluation scan at the time of data cutoff; therefore, many reported responses were unconfirmed partial responses.
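
As a rough illustration of why a single post-baseline scan is not enough, here is a simplified sketch of a RECIST-style confirmation check. This is our own toy code, not the trial’s analysis, and the four-week minimum gap between responding scans comes from RECIST 1.1 guidance rather than from the letter itself.

```python
# Simplified sketch of a RECIST-style "confirmed response" check.
# This is illustrative code, not the trial's analysis; the 4-week minimum
# gap between responding scans is taken from RECIST 1.1 guidance.
def is_confirmed_response(scans, min_gap_weeks=4):
    """scans: list of (week, responding) tuples in chronological order."""
    responding_weeks = [week for week, responding in scans if responding]
    if not responding_weeks:
        return False
    first = responding_weeks[0]
    # Confirmed only if a later responding scan occurs at least
    # min_gap_weeks after the first response.
    return any(week >= first + min_gap_weeks for week in responding_weeks[1:])

# With scans every 6 weeks and a median follow-up of ~10.5 weeks at data
# cutoff, many patients had only one post-baseline scan, so a response seen
# there could not yet be confirmed:
print(is_confirmed_response([(6, True)]))              # False: unconfirmed PR
print(is_confirmed_response([(6, True), (12, True)]))  # True: confirmed at week 12
```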

The paper has been cited 85 times according to Thomson Reuters Web of Science (which has labeled it a “highly cited paper” and a “hot paper,” based on the expected rate of citations in that particular field).

Taking the new data into account, the authors write in the letter:

we update our initially reported response rate of 59% to a mature confirmed response rate of 45% among patients with T790M-disease who had enrolled early in TIGER-X. This reduction in activity is less dramatic than that suggested in the Clovis Oncology press release,2 hence future analyses of the larger data set will be critical. The desire to bring helpful therapies to patients with terminal disease quickly creates considerable pressure to disseminate promising results early. However, this case offers a clear lesson regarding the importance of continued follow-up and supports a strong recommendation to publish rates of confirmed response according to RECIST, thus providing the most rigorous and mature data. Ultimately, additional data are needed to clarify the role of rociletinib in treating patients with EGFR mutation–positive lung cancer.

The letter is authored by the paper’s first author Lecia V. Sequist at Massachusetts General Hospital, second author Jean-Charles Soria at South-Paris University, and last author David Ross Camidge at the University of Colorado.

We’re classifying this as an “update” and not a correction because the authors did not change the response rate in the paper itself. Sequist told us:

The letter is an addition to the original manuscript. It contains updated, more mature data.

The editor in chief of NEJM, Jeffrey M. Drazen, told us why the journal chose not to issue a formal correction notice to the 2015 paper:

Because the data in the report were accurate at the time of the report, the new information with longer follow up represented important new information. It is not [a] correction of an error. Therefore, we published the updated results as a letter.

A spokesperson for NEJM told us the journal added a link to the letter in the “related articles” box next to the paper. She said:

Technically there is no flag on the article, because those are saved for corrections.

Critics have recently called out NEJM for being slow to correct papers.

Sequist further explained the authors’ reasoning:

[W]e wanted the record to show the final mature results of the patients initially published and discuss the issues around publishing before mature results are available. Choosing the best moment to publish exciting but still evolving data is tricky and involves consideration of many factors among which are scientific methodology, patient advocacy, competitive landscape and others…. We hope that our letter and this entire rociletinib story will shed some light regarding the dialogue and thoughtful conversation needed around the process of drug development among [academia] and pharma.

Please also note that our study was still rapidly accruing at the time of the initial NEJM publication, so our manuscript is a report about a subset of patients who enrolled early, and not a final report of the entire TIGER-X cohort.

Drazen told us:

It is the nature of early studies of promising cancer drugs that the patients are not cured. Rather, evidence is found (often in those with advanced disease) that the tumor responds, but only for a while. This paradigm was established in our pages in 1948 and remains true today.

At the time of the publication of the rociletinib study in April 2015, the data for the study had been analyzed as of June 18, 2014, and the reported data were accurate up to that date.  Unfortunately but not unexpectedly, the cancer progressed in many patients in this trial who initially responded to treatment; some had progressive disease and some died.

A small number of patients who had experienced tumor regression from the treatment had their tumor enlarge after the initial response assessment and the closure of the data in June 2014. In those patients, the response was not maintained for two evaluation visits. Had we waited to publish until the final results were known, patients may have died for lack of treatment.

We checked in with the author of the NEJM editorial that called the original result “encouraging” — Ramaswamy Govindan, an oncologist at Washington University School of Medicine. He told us:

I commend the academic authors for going back and thoroughly reviewing an updated data set. These results are still quite encouraging as the options for patients with EGFR T790M mutant non-small cell lung cancer (NSCLC) are still quite limited. I agree with the authors’ conclusion regarding the importance of reporting follow up mature data and the need for further work in this area to fully understand the role of rociletinib in EGFR T790M mutant NSCLC.


3 thoughts on “How should journals update papers when new findings come out?”

  1. Preliminary data, but novel. Nothing is more preliminary than a single data point, and most of the time it will point to some wonderful novelty. That isn’t science.

    So there is no need to update papers.

    There is a need for journals to publish papers, rather than preliminary results. In the old days, conference proceedings were for preliminary data, and some went on to become papers. Nowadays, a preprint will do nicely for preliminary data and for updates through to the end of the study. At this point authors can decide whether to go through publication or not.

    Anyone who submits data because they are promising, and any editor who publishes such data, demonstrates an interest in fashion, impact factor, and the gloss of publicity, but not in science.

  2. “Scientific published results” or “research record” are terms that suggest that, somehow, publication represents an immutable record of data and interpretive conclusions. There is always a follow-on to publications or presentations, and data analytics and citation allow that evolving research thread to be followed. Yes, there is value in publishing “preliminary results” because, if they are contrary to the previous results or conclusions, they serve as a flag that perhaps not all of the data-dependent variables have been identified. Assigning nefarious motives to communication between researchers, whatever the format, is nonsensical.

  3. Having further data contradict the results of the interim analysis is not a problem. After all, it is what happens generally in medicine: study 1 is run, then study 2, and so on, and after each study we can update the overall conclusions.

    What could be argued in this case is that the original results should never have been released: patient numbers were too small, and the study duration for many patients was too short. That the trial was uncontrolled just compounds the problem, as we don’t know how much the result depends on patient selection.

    It would be worthwhile guidance for future publications to include only subjects who have completed an appropriate time in the study, and to wait until enough subjects have completed it to give adequate precision on the estimates.
