In June, Gene Emery, a journalist for Reuters Health, was assigned to write a story about an upcoming paper in the Journal of the American College of Cardiology, set to come off embargo and be released to the public in a few days. Pretty quickly, he noticed something seemed off.
Emery saw that the data presented in the tables of the paper — about awareness of the problem of heart disease among women and their doctors — didn’t seem to match the authors’ conclusions. For instance, on a scale of 1 to 5 rating preparedness to assess female patients’ risk (with 5 being the most prepared), 64% of doctors answered 4 or 5; but the paper said “only a minority” of doctors felt well-prepared (findings echoed in an accompanying press release). On Monday, June 19, four days before the paper was set to publish, Emery told the corresponding author — C. Noel Bairey Merz, Medical Director of the Women’s Heart Center at Cedars-Sinai in Los Angeles — about the discrepancy; she told him to rely on the data in the table.
But the more Emery and his editors looked, the more problems they found with the paper. They alerted the journal hours before it was set to publish, hoping that was enough to halt the process. It wasn’t.
Here are more details about the timeline: After Emery submitted his draft to Reuters Health, an editor noticed another discrepancy. Again, the text of the paper seemed to downplay doctors’ perception of heart disease as a “top concern” among their female patients, which Emery also passed along to Merz on Wednesday, June 21. On the morning of Thursday, June 22, the day the paper was set to be released, Nancy Lapid, the editor of Reuters Health, contacted the journal, outlining the problems her team had identified with the paper. She concluded the message with:
We hope these issues can be addressed before the paper is released online.
Hours later, the uncorrected version of the paper was published. Although Reuters Health decided not to publish a story about the study, many others, including CBS News, reported the misleading data without noting the discrepancies. Recently, the journal issued an extensive correction to the paper, noting the problems Reuters Health identified in June. (An accompanying editorial had to make significant changes, as well.)
We contacted JACC about the timing of the changes, as well as whether it considered retracting the paper, given the extent of the changes. Editor Valentin Fuster at Mount Sinai sent us this statement:
For the manuscript, entitled “Knowledge, Attitudes, and Beliefs Regarding Cardiovascular Disease in Women”, the JACC Editors were concerned that the commentary describing the survey data exaggerated the findings—not that the survey findings themselves (found in the tables) were incorrect. This constitutes grounds for and led to the correction. While we always appreciate media inquiries, the Editors and the authors were the parties who agreed that this correction was necessary. The manuscript has been permanently and clearly updated to reflect the correction.
A spokesperson for the American College of Cardiology told us:
We are working to backtrack our steps on the press release and will be updating the release with an editor’s note linking to the errata and noting updated language in the press release.
She confirmed the journal would not issue a new release, just “adding the editor’s note on all the platforms on which the release was originally issued.”
Here are some excerpts from the long correction for “Knowledge, Attitudes, and Beliefs Regarding Cardiovascular Disease in Women,” in which “The authors of this paper acknowledge that the findings from the survey reported in this article were not fairly reflected in the presentation of the results or in the discussion of their implication”:
Page 123, Abstract, last 2 sentences in Results section:
CVD was a top concern for only 39% of PCPs, after weight and breast health. A minority of physicians (22% of PCPs and 42% of cardiologists) felt well prepared to assess women’s CVD risk and used guidelines infrequently.
should have read:
CVD was rated as the top concern by only 39% of PCPs, after weight and breast health. Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt extremely well prepared to assess CVD risk in women, while 42% and 40% felt well-prepared (p = NS), respectively. Few comprehensively implemented guidelines.
Page 127, left column, second section, first 2 sentences:
Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt well prepared to assess CVD risk in women. Forty-nine percent of PCPs and 59% of cardiologists (p = 0.1030) reported that their medical training prepared them to assess the CVD risk in their female patients (Table 3).
should have read:
Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt extremely well prepared to assess CVD risk in women, while 42% and 40% felt well-prepared (p = NS), respectively. Forty-nine percent of PCPs and 59% of cardiologists (p = 0.1030) reported that their medical training prepared them to assess female patients’ CVD risk (Table 3).
We also contacted Merz, who forwarded to us an email correspondence between the authors and the journal dated July 7, which she said showed there were “differences of opinion about how to summarize data in tables and figures in the text.” In the July 7 email, Merz writes:
Summary statements in the text are exactly that – summaries of data explicitly depicted in tables and figures – in general, we aim not to repeat specific data in the text that is in tables and figures, but summarize and refer to the tables and figures. Because this seems to be such a concern, it is best to have the text specifically describe the data…
Merz told us:
While scientific writing typically does not repeat specific data in the text that is already shown in tables and figures, the editors chose to detail the specific data findings in both the text and tables/figures, presumably in response to the Reuters editor….
…the data in the paper were correct and did not change in erratum.
Merz was a middle author on a 2014 JACC paper that issued 25 corrections soon after it was published, including changing mathematical symbols and adding text. She was also the second author on a 2015 retraction in The Journal of Clinical Endocrinology &amp; Metabolism, after the authors couldn’t reproduce the findings.
Regarding the latest JACC paper, Lapid told us:
We see mistakes in journal articles, but nothing like this.
Emery said he was “astonished” it took the journal so long to address the errors. And the entire correction process could have been avoided if the journal hadn’t released the original version, he added:
All they had to do is send out a notice saying we’re changing the embargo time so we can fix this stuff…We were betting that they would just retract it, and not let it get released…I expected a rewrite of the paper.
It wouldn’t have been the first time a journal withdrew a paper right before it was set to publish — in 2011, we reported that Archives of Internal Medicine took that action with only minutes to spare, “to allow time for review and statistical analysis of additional data not included in the original paper that the authors provided less than 24 hours before posting.” The editor of the journal is Rita Redberg of the University of California San Francisco Medical Center, also a co-author on the 2017 JACC paper.
This also isn’t the first time journalists have helped correct the scientific record — recently, the U.S. Centers for Disease Control and Prevention (CDC) corrected an article on Legionnaires’ disease after the Pittsburgh Post-Gazette revealed the researchers appeared to be trying to misrepresent their data.