An external probe has concluded that a researcher based at the University of Gothenburg committed misconduct in multiple papers, all of which should be withdrawn.
An Expert Group concluded that eight of 10 papers by Suchitra Sumitran-Holgersson contained signs of scientific misconduct. The Expert Group, part of Sweden’s Central Ethical Review Board, also found evidence of problems within her laboratory environment.
In an email to Retraction Watch, Sumitran-Holgersson denied any “willful manipulation of data.”
According to the report (in Swedish, which we translated using Google):
After being “blindsided” a few months ago when she was told one of her 2005 papers was going to be retracted, a researcher scrambled to get information about why. And when she didn’t like the answers, she took to PubPeer.
Eight days ago, Shalon (Babbitt) Ledbetter, the first author of the 2005 paper published in Cell, posted a comment on the site announcing the paper was going to be retracted after the last author’s institution, Saint Louis University (SLU), determined that some figures had been manipulated by the last author, Dorota Skowyra. A letter dated September 2, 2015, sent by SLU to Cell describes the results of the investigation — namely, that the manipulations were “cosmetic,” and had no effect on the data or the conclusions. More than two years later, Ledbetter learned the journal was planning to retract the paper, and an initial draft of the notice wouldn’t identify who was responsible; she has since been pulled into a confusing web of blame-shifting and conflicting information that has been, in her words, “heartbreaking.”
In a recent editorial, the Journal of Neurochemistry declared it would no longer accept author-suggested reviewers. While other journals have done the same in order to prevent fake reviews, the Journal of Neurochemistry is basing its decision on a different logic. We spoke with editor Jörg Schulz about why he believes relying on reviewers picked by editors helps reduce bias in the peer-review process.
Retraction Watch: What prompted you to compare the outcomes of papers reviewed by experts suggested by authors versus experts selected by editors, or experts the authors “opposed”?
A cancer researcher based at The Ohio State University has retracted five papers from one journal, citing concerns about figures.
The notices for all five papers state the Journal of Biological Chemistry raised questions about some figures, and the authors were not able to supply raw data in all instances. Four of the notices say the authors offered to submit data from repeat experiments and corrected figures, which the journal declined.
According to Kaoru Sakabe, data integrity manager at JBC, the authors “agreed to withdraw these articles after we declined their offers.”
Retractions take too long, carry too much of a stigma, and often provide too little information about what went wrong. Many people agree there’s a problem, but often can’t concur on how to address it. In one attempt, a group of experts — including our co-founder Ivan Oransky — convened at Stanford University in December 2016 to discuss better ways to address problems in the scientific record. Specifically, they explored which formats journals should adopt when publishing article amendments — such as corrections or retractions. Although the group didn’t reach a consensus (what group does?), workshop leader Daniele Fanelli (now at the London School of Economics) and two co-authors (John Ioannidis and Steven Goodman at Stanford) published a new proposal for how to classify different types of retractions. We spoke to Fanelli about the new “taxonomy,” and why not everyone is on board.
Retraction Watch: What do you think are the biggest issues in how the publishing industry deals with article amendments?
Nowadays, there are many ways to access a paper — on the publisher’s website, on MEDLINE, PubMed, Web of Science, Scopus, and other outlets. So when a publisher retracts a paper, do these outlets consistently mark it as such? And if they don’t, what’s the impact? Researchers Caitlin Bakker and Amy Riegelman at the University of Minnesota surveyed more than one hundred retractions in mental health research to try to get at some answers, and published their findings in the Journal of Librarianship and Scholarly Communication. We spoke to Bakker about the potential harm to patients when clinicians don’t receive consistent notifications about retracted data.
Retraction Watch: You note: “Of the 144 articles studied, only 10 were represented as being retracted across all resources through which they were available. There was no platform that consistently met or failed to meet all of [the Committee on Publication Ethics (COPE)’s] guidelines.” Can you say more about these findings, and the challenges they may pose?
We reached out to many of the corresponding authors on papers in the January 26 issue, the seventh issue published in 2018. Many are based at leading institutions around the world; all had submitted their manuscripts months ago. Some noted that they were surprised by the decision, as the review process appeared quite rigorous; some told us that if they’d known the journal was going to be delisted, they would not have submitted their papers there.