Archive for the ‘society journal retractions’ Category
The correction replaces an expression of concern on the Journal of the American Chemical Society (JACS) paper, which followed allegations of data manipulation. It provides some uncropped images and removes a co-author from the paper. However, it does not appear to address previous allegations of misconduct, nor a recent ruling from an investigation at the University of Hong Kong (HKU), which found that some of the data were “invalid.”
Citation omissions in an economics preprint have set off a wave of recrimination and speculation on a widely read economics discussion board.
Commenters accuse the authors of purposely omitting citations that would have undermined the paper’s claims to novelty and its contributions to the field, some leveling acrimonious personal attacks in the process. Economists Petra Persson at Stanford and Maya Rossin-Slater at the University of California, Santa Barbara told us they weren’t familiar with the omitted papers when they first posted their preprint, and that their work remains distinct from these previous studies. Nevertheless, the two quickly updated the preprint of their paper – accepted by the top-tier economics journal American Economic Review – to include the additional citations. An editor at the journal said it’s not unusual for authors to request such changes before publication, and dismissed the accusations made on the discussion board, calling the site “not a legitimate source of information.”
The study, “Family Ruptures, Stress, and the Mental Health of the Next Generation,” used data from Swedish national databases to compare mental health outcomes of people born to women who lost a relative while pregnant and of people born to women who lost a relative in the first year after giving birth.
When authors get new data that revise a previous report, what should they do?
In the case of a 2015 lung cancer drug study in the New England Journal of Medicine (NEJM), the journal published a letter to the editor with the updated findings.
Shortly after the paper was published, a pharmaceutical company released new data showing the drug wasn’t quite as effective as it had seemed. Once the authors included the new data in their analysis, they adjusted their original response rate of 59% — hailed as one of a few “encouraging results” in an NEJM editorial at the time of publication — to 45%, as they write in the letter. One of the authors told us they published the 2015 paper using less “mature” data because the drug’s benefits appeared so promising, raising questions about when to publish “exciting but still evolving data.”
It’s not a correction, as the original paper has not been changed; it doesn’t even contain a flag that it’s been updated. But among the online letters about the paper is one from the authors, “Update to Rociletinib Data with the RECIST Confirmed Response Rate,” which provides the new data and backstory:
A psychology journal is correcting a paper for reusing data. The editor told us the paper is a “piecemeal publication,” not a duplicate, and is distinct enough from the previous article that it is not “grounds for retraction.”
The authors tracked the health and mood of 65 patients over nine weeks. In one paper, they concluded that measures of physical well-being and psychosocial well-being positively predict one another; in the other (the now-corrected paper), they concluded that health and mood (along with positive emotions) influence each other in a self-sustaining dynamic.
New evidence suggests a retracted paper was felled not by intentional manipulation — as it first appeared — but by a software glitch.
In 2014, we reported that Biochemical Journal had retracted a paper on suspicion it contained “shoddy Photoshopping” — someone appeared to have blacked out a control lane in one figure. Now there’s evidence that it wasn’t done on purpose: An investigation at Duke into eight papers, including the Biochemical Journal paper, did not find evidence of misconduct; lead author Paul Kuo, currently chair of surgery at Loyola Medicine, told us that a glitch in the software caused the black box. Nevertheless, the journal does not plan to un-retract the paper.
A major medical journal has updated its instructions to authors, now requiring that they publish protocols of clinical trials, along with any changes made along the way.
We learned of this change via the COMPare project, which has been tracking trial protocol changes in major medical journals — and been critical of the Annals of Internal Medicine‘s response to those changes. However, Darren Taichman, the executive deputy editor of the journal, told us the journal’s decision to publish trial protocols was a long time coming:
It was one of the most difficult posts we’ve ever written: A researcher’s eagerness to publish a paper before asking all co-authors for their permission forced him to retract the article, wasting a postdoc’s time and destroying a professional relationship in the process.
This 2011 post wasn’t difficult to write because the facts were complex; they weren’t, particularly (although the science involved was intricate). Rather, the man responsible for the incident, Graham Ellis-Davies, was so clearly and sincerely distressed by the mistake he’d made that it was impossible not to feel sorry for him.
Well, we’re delighted to report that the tale has a happy ending. Ellis-Davies and his former postdoc have recently republished their once-retracted work with a new set of co-authors — and in the same journal that previously retracted it. What’s more, they have turned what was initially a proof-of-concept study into a much more robust article with exciting implications for the field.
We’ve found another retraction for Erin Potts-Kant, a former researcher at Duke, bringing her total to 15.
Yesterday we reported on two new retractions for Potts-Kant in PLoS ONE, which earned her a spot in the top 30 on our leaderboard. As with the others, the latest paper, in the Journal of Clinical Investigation, is marred by “unreliable” data.
A former postdoc at the University of Pittsburgh has admitted to committing research misconduct in published papers and in National Institutes of Health (NIH) grant applications.
Kang Cheng prepared the gels when he was a research fellow in last author Sanjeev Gupta’s lab at the Albert Einstein College of Medicine. Gupta told us he reviewed the original gels, and the errors didn’t affect the conclusions in the papers, which were reproducible. He noted he believes the problems are the result of honest mistakes:
The errors did not confer any benefits whatsoever either for the papers or for Dr. Cheng.
On PubPeer, commenters have raised questions about the now corrected papers — along with several others on which Gupta is the senior author, but Cheng is not a co-author.
Edward Burns, research integrity officer at Einstein, told us that the medical school looked into an allegation of misconduct against Gupta: