Journal knew about problems in a high-profile study before it came out — and did nothing for over a month

In June, Gene Emery, a journalist for Reuters Health, was assigned to write a story about an upcoming paper in the Journal of the American College of Cardiology, set to come off embargo and be released to the public in a few days. Pretty quickly, he noticed something seemed off.

Emery saw that the data presented in the tables of the paper — about awareness of the problem of heart disease among women and their doctors — didn’t seem to match the authors’ conclusions. For instance, on a scale of 1 to 5 rating preparedness to assess female patients’ risk (with 5 being the most prepared), 64% of doctors answered 4 or 5, but the paper said “only a minority” of doctors felt well-prepared (findings echoed in an accompanying press release). On Monday, June 19, four days before the paper was set to publish, Emery told the corresponding author — C. Noel Bairey Merz, Medical Director of the Women’s Heart Center at Cedars-Sinai in Los Angeles — about the discrepancy; she told him to rely on the data in the table.
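
One possible reconciliation of those numbers, our reading rather than anything the journal confirmed: the corrected abstract (excerpted below) splits PCPs into 22% who felt “extremely well prepared” and 42% who felt “well-prepared.” If the 4s and 5s on the five-point scale map onto those two categories (an assumption, since the survey instrument isn’t reproduced here), they sum to exactly the 64% Emery computed from the table:

# A hypothetical check in Python: the two corrected "prepared" categories
# for PCPs sum to the share Emery saw answering 4 or 5 in the table.
print(f"{0.22 + 0.42:.0%}")  # prints 64%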

But the more Emery and his editors looked, the more problems they found with the paper. They alerted the journal hours before it was set to publish, hoping that was enough to halt the process. It wasn’t.

Here are more details about the timeline: After Emery submitted his draft to Reuters Health, an editor noticed another discrepancy: again, the text of the paper seemed to downplay the survey data, this time regarding doctors’ perception of heart disease as a “top concern” among their female patients. Emery passed this along to Merz on Wednesday, June 21. On the morning of Thursday, June 22, the day the paper was set to be released, Nancy Lapid, the editor of Reuters Health, contacted the journal, outlining the problems her team had identified with the paper. She concluded the message with:

We hope these issues can be addressed before the paper is released online.

Hours later, the uncorrected version of the paper was published. Although Reuters Health decided not to publish a story about the study, many others, including CBS News, reported the misleading data without noting the discrepancies. Recently, the journal issued an extensive correction to the paper, noting the problems Reuters Health identified in June. (An accompanying editorial required significant changes as well.)

We contacted JACC about the timing of the changes, as well as whether it considered retracting the paper, given the extent of the changes. The journal’s editor, Valentin Fuster of Mount Sinai, sent us this statement:

For the manuscript, entitled “Knowledge, Attitudes, and Beliefs Regarding Cardiovascular Disease in Women”, the JACC Editors were concerned that the commentary describing the survey data exaggerated the findings—not that the survey findings themselves (found in the tables) were incorrect. This constitutes grounds for and led to the correction. While we always appreciate media inquiries, the Editors and the authors were the parties who agreed that this correction was necessary. The manuscript has been permanently and clearly updated to reflect the correction.

A spokesperson for the American College of Cardiology told us:

We are working to backtrack our steps on the press release and will be updating the release with an editor’s note linking to the errata and noting updated language in the press release.

She confirmed the journal would not issue a new release, instead just “adding the editor’s note on all the platforms on which the release was originally issued.”

Here are some excerpts from the long correction for “Knowledge, Attitudes, and Beliefs Regarding Cardiovascular Disease in Women,” in which “The authors of this paper acknowledge that the findings from the survey reported in this article were not fairly reflected in the presentation of the results or in the discussion of their implication”:

Page 123, Abstract, last 2 sentences in Results section:

CVD was a top concern for only 39% of PCPs, after weight and breast health. A minority of physicians (22% of PCPs and 42% of cardiologists) felt well prepared to assess women’s CVD risk and used guidelines infrequently.

should have read:

CVD was rated as the top concern by only 39% of PCPs, after weight and breast health. Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt extremely well prepared to assess CVD risk in women, while 42% and 40% felt well-prepared (p = NS), respectively. Few comprehensively implemented guidelines.

And:

Page 127, left column, second section, first 2 sentences:

Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt well prepared to assess CVD risk in women. Forty-nine percent of PCPs and 59% of cardiologists (p = 0.1030) reported that their medical training prepared them to assess the CVD risk in their female patients (Table 3).

should have read:

Only 22% of PCPs and 42% of cardiologists (p = 0.0477) felt extremely well prepared to assess CVD risk in women, while 42% and 40% felt well-prepared (p = NS), respectively. Forty-nine percent of PCPs and 59% of cardiologists (p = 0.1030) reported that their medical training prepared them to assess female patients’ CVD risk (Table 3).
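
For readers curious how a comparison like 22% versus 42% yields a p-value, here is a minimal sketch of a two-proportion z-test in Python. The group sizes below are placeholders, since the excerpts do not report the survey’s actual Ns, and the paper may have used a different test (a chi-square or Fisher’s exact test, for example), so the sketch illustrates the mechanics rather than reproducing the published p = 0.0477.

# Minimal sketch: compare two proportions (the share of PCPs vs.
# cardiologists who felt "extremely well prepared"). The sample sizes
# are hypothetical; the correction does not report the survey's group Ns.
from statsmodels.stats.proportion import proportions_ztest

n_pcp, n_card = 100, 100           # hypothetical group sizes
successes = [round(0.22 * n_pcp),  # 22% of PCPs
             round(0.42 * n_card)] # 42% of cardiologists

z_stat, p_value = proportions_ztest(successes, [n_pcp, n_card])
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")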

We also contacted Merz, who forwarded us email correspondence between the authors and the journal dated July 7, which she said showed there were “differences of opinion about how to summarize data in tables and figures in the text.” In the July 7 email, Merz writes:

Summary statements in the text are exactly that – summaries of data explicitly depicted in tables and figures – in general, we aim not to repeat specific data in the text that is in tables and figures, but summarize and refer to the tables and figures. Because this seems to be such a concern, it is best to have the text specifically describe the data…

Merz told us:

While scientific writing typically does not repeat specific data in the text that is already shown in tables and figures, the editors chose to detail the specific data findings in both the text and tables/figures, presumably in response to the Reuters editor….

She added:

…the data in the paper were correct and did not change in erratum.  

In 2014, Merz was a middle author on a JACC paper that received 25 corrections soon after it was published, including changes to mathematical symbols and added text. She was also the second author on a paper retracted in 2015 from The Journal of Clinical Endocrinology & Metabolism, after the authors couldn’t reproduce the findings.

Regarding the latest JACC paper, Lapid told us:

We see mistakes in journal articles, but nothing like this.

Emery said he was “astonished” it took the journal so long to address the errors. The entire correction process could have been avoided, he added, if the journal hadn’t released the original version:

All they had to do is send out a notice saying we’re changing the embargo time so we can fix this stuff…We were betting that they would just retract it, and not let it get released…I expected a rewrite of the paper.

It wouldn’t have been the first time a journal withdrew a paper right before it was set to publish — in 2011, we reported that Archives of Internal Medicine took that action with only minutes to spare, “to allow time for review and statistical analysis of additional data not included in the original paper that the authors provided less than 24 hours before posting.” The editor of that journal, Rita Redberg of the University of California, San Francisco Medical Center, is also a co-author of the 2017 JACC paper.

This also isn’t the first time journalists have helped correct the scientific record — recently, the U.S. Centers for Disease Control and Prevention (CDC) corrected an article on Legionnaires’ disease after the Pittsburgh Post-Gazette revealed the researchers appeared to be trying to misrepresent their data.


10 thoughts on “Journal knew about problems in a high-profile study before it came out — and did nothing for over a month”

  1. These are serious misrepresentations of the data. The journal editors appear to want to have it both ways . . . a splashy, newsworthy release and a later correction of what the data showed. Kudos to the journalists who picked up the obvious discrepancies. BUT given the pattern, folks need to dig into the methodological details of the survey. What was the original sampling plan and response rate of the survey? What checks were made for possible sources of bias based on the characteristics of the responders and non-responders? I suspect a deeper probe will find additional reasons to be skeptical about the published report. There is a rebuttable presumption of dishonesty here for the authors and publisher to rebut.

  2. “Although Reuters Health decided not to publish a story about the study, many others, including CBS News, reported the misleading data without noting the discrepancies.”

    Will the CBS News report and the many other lay reports be corrected?

    Will the corrections – if any – get the same media play as the original article?

    If the lay articles are corrected, how many of the initially misled will read the correction?

    1. Good point, but do you expect media outlets, such as CBS, to later go back and correct the story? These media outlets only want to fill their daily program; the next day, they forget what was said the day before. More important than the CBS story is the citation of the retracted article in several new articles. How are those citations to a retracted paper handled?

      1. “Good point, but do you expect media outlets, such as CBS, to later go back and correct the story?”
        No, not for a filler piece on a nightly news program. Sometimes, for a story with a byline, they will correct. Most scientific reporters I know (the good ones, who will write an in-depth story) take their craft very seriously. No one likes to be misled. If the scientific story was hyped, then retracted, I’d expect some of the better science journalists to do another story.

  3. Why did only a journalist notice “something seemed off” in the paper? What were the findings of the peer reviewers? Did the reviewers notice this issue and others? It would be easier to judge the whole story if the journal made the reviewers’ comments accessible to the public. This story tarnishes the quality and value of the peer review process.

  4. I think all but the very smallest errors warrant a complete rewrite and binning of the original version. I have made corrections to papers published by the Osiris team that is part of the Rosetta mission to comet 67P/CG. One such correction prompted a corrigendum five pages in length. It had to do with mapping anomalies in which regional boundaries were placed up to 400 metres from their correct position on a 4 km comet. The maps were difficult enough for the Osiris team to interpret, let alone for scientists who were new to the detailed mapping of this particular comet.

    Imagine juggling the maps in the original paper with the new replacement maps in the corrigendum, plus written descriptions of where the errors in boundaries lie. Then imagine that every time you want to refer to a feature on the comet, you have to dig out both the original paper and the corrigendum and work out whether your location of interest falls within a correct original map or a corrected corrigendum map. It’s asking for trouble and inviting additional yet avoidable errors to creep into new work.

    My colleague has recently identified one such error, caused by not observing the map corrections in the corrigendum. The relevant author (who also co-authored the original incorrect map paper) acknowledged my colleague’s correction of her new paper but “decided not to update it since this study is published and it will be hard to correct.” It’s a complete mess, with the original mistakes trickling down through the papers that cite the original.

    We need complete rewrites for all but the very simplest errors, such as caption errors and typos.

  5. It is alarming that JACC didn’t handle the problem appropriately. That said, I don’t find the paper’s data very compelling. As cardiologists, most of my colleagues and I worry more about patients’ weight, lipid panels, diabetes, and other risk factors than about specific groups of labelled patients, i.e., males vs. females, and we treat everyone based on scientific data and drug trials. I do not worry about women’s CVD risk because I treat them with the same intensity and clarity as their male counterparts. Aggressive treatment to lower risk in ALL patients will have the same impact on women as on men.
    I feel that the 78% of PCPs who don’t feel competent treating these women should refer them to cardiologists or other lipid specialists, and stop worrying about their women patients any more than their male counterparts. Despite the article’s “data”, trends in acute MIs have continued to drop significantly in the last 5 years for both sexes. Politically correct journal articles with conflicting conclusions should not be published, especially when their so-called data were really just a collection of thoughts and opinions, not compared with those doctors’ utilization of testing modalities, use of risk-modification strategies, and prescription of specific available medications and dietary counseling; THAT WOULD BE SCIENTIFIC!

    1. Better peer review, more scrutiny, and early rejection would make for less half-baked work like this in my inbox. As a patient I’m disheartened; as a medical professional I’m disgusted.

