‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

Andrew Grey

Retraction Watch readers may recall the names Jun Iwamoto and Yoshihiro Sato, who now sit in positions 3 and 4 of our leaderboard of retractions, Sato with more than 100. Readers may also recall the names Andrew Grey, Alison Avenell and Mark Bolland, whose sleuthing was responsible for those retractions. In a recent paper in Accountability in Research, the trio looked at the timeliness and content of the notices journals attached to those papers. We asked them some questions about their findings.

Retraction Watch (RW): Your paper focuses on the work of Yoshihiro Sato and Jun Iwamoto. Tell us a bit about this case.

Continue reading ‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

How can universities and journals work together better on misconduct allegations?

Elizabeth Wager

Retractions, expressions of concern, and corrections often arise from critiques sent in by readers, whether those readers are others in the field, sleuths, or other interested parties. In many of those cases, journals seek the input of authors’ employers, often universities. In a recent paper in Research Integrity and Peer Review, longtime scientific publishing consultant Elizabeth Wager and Lancet executive editor Sabine Kleinert, writing on behalf of the Cooperation & Liaison between Universities & Editors (CLUE) group, offer recommendations on best practice for these interactions. Here, they respond to several questions about the paper.

Retraction Watch (RW): Many would say that journals can take far too long to act on retractions and other signals to readers about problematic papers. Journals (as well as universities) often point to the need for due process. So what would a “prompt” response look like, as recommended by the paper?

Continue reading How can universities and journals work together better on misconduct allegations?

What happened when a group of sleuths flagged more than 30 papers with errors?

Jennifer Byrne

Retraction Watch readers may recall the name Jennifer Byrne, whose work as a scientific sleuth we first wrote about four years ago and have followed ever since. In a new paper in Scientometrics, Byrne, of New South Wales Health Pathology and the University of Sydney, working with researchers including Cyril Labbé, known for his work detecting computer-generated papers, and Amanda Capes-Davis, who works on cell line identification, describes what happened when they approached publishers about errors in 31 papers. We asked Byrne several questions about the work.

Retraction Watch (RW): You focused on 31 papers with a “specific reagent error.” Can you explain what the errors were?

Continue reading What happened when a group of sleuths flagged more than 30 papers with errors?

Journal editor breaks protocol to thank an anonymous whistleblower

As Retraction Watch readers may recall, we’ve been highlighting — and championing — the work of anonymous whistleblowers throughout the 10-year history of the blog. Our support for such anonymity, however, is not universally shared. 

In 2011, for example, in our column at Lab Times (unfortunately no longer online), we wrote:

Continue reading Journal editor breaks protocol to thank an anonymous whistleblower

“[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

We’re pleased to present an excerpt from Stuart Ritchie’s new book, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.

One of the best-known, and most absurd, scientific fraud cases of the twentieth century also concerned transplants – in this case, skin grafts. While working at the prestigious Sloan-Kettering Cancer Institute in New York City in 1974, the dermatologist William Summerlin presaged Paolo Macchiarini—an Italian surgeon who in 2008 published a (fraudulent) blockbuster paper in the top medical journal the Lancet on his successful transplant of a trachea—by claiming to have solved the transplant-rejection problem that Macchiarini encountered. Using a disarmingly straightforward new technique in which the donor skin was incubated and marinated in special nutrients prior to the operation, Summerlin had apparently grafted a section of the skin of a black mouse onto a white one, with no immune rejection. Except he hadn’t. On the way to show the head of his lab his exciting new findings, he’d coloured in a patch of the white mouse’s fur with a black felt-tip pen, a deception later revealed by a lab technician who, smelling a rat (or perhaps, in this case, a mouse), proceeded to use alcohol to rub off the ink. There never were any successful grafts on the mice, and Summerlin was quickly fired.

Continue reading “[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

Journals are failing to address duplication in the literature, says a new study

Mario Malički

How seriously are journals taking duplicated work that they publish? That was the question Mario Malički and colleagues set out to answer six years ago. And last month, they published their findings in Biochemia Medica.

The upshot? Journals have a lot of work to do. Continue reading Journals are failing to address duplication in the literature, says a new study

Which kind of peer review is best for catching fraud?

Serge Horbach

Is peer review a good way to weed out problematic papers? And if it is, which kinds of peer review? In a new paper in Scientometrics, Willem Halffman, of Radboud University, and Serge Horbach, of Radboud University and Leiden University, used our database of retractions to try to find out. We asked them several questions about the new work.

Retraction Watch (RW): You write that “journals’ use of peer review to identify fraudulent research is highly contentious.” Can you explain what you mean? Continue reading Which kind of peer review is best for catching fraud?

Want to tell if a paper has been retracted? Good luck

Caitlin Bakker

Nowadays, there are many ways to access a paper — on the publisher’s website, on MEDLINE, PubMed, Web of Science, Scopus, and other outlets. So when the publisher retracts a paper, do these outlets consistently mark it as such? And if they don’t, what’s the impact? Researchers Caitlin Bakker and Amy Riegelman at the University of Minnesota surveyed more than one hundred retractions in mental health research to try to get at some answers, and published their findings in the Journal of Librarianship and Scholarly Communication. We spoke to Bakker about the potential harm to patients when clinicians don’t receive consistent notifications about retracted data.

Retraction Watch: You note: “Of the 144 articles studied, only 10 were represented as being retracted across all resources through which they were available. There was no platform that consistently met or failed to meet all of [the Committee on Publication Ethics (COPE)’s] guidelines.” Can you say more about these findings, and the challenges they may pose?

Continue reading Want to tell if a paper has been retracted? Good luck

20 years of retractions in China: More of them, and more misconduct

Lei Lei

After reviewing nearly 20 years of retractions from researchers based in China, researchers came up with some somewhat unsurprising (yet still disheartening) findings: The number of retractions has increased (from zero in 1997 to more than 150 in 2016), and approximately 75% were due to some kind of misconduct. (You can read more details in the paper, published this month in Science and Engineering Ethics.) We spoke with first author Lei Lei, based in the School of Foreign Languages at Huazhong University of Science and Technology, about what he thinks can be done to improve research integrity in his country.

Retraction Watch: With “Lack of Improvement” right in the title (“Lack of Improvement in Scientific Integrity: An Analysis of WoS Retractions by Chinese Researchers (1997-2016)”), you sound disappointed with your findings. What findings did you expect — or at least hope — to find, and what are your reactions to the results you did uncover?

Continue reading 20 years of retractions in China: More of them, and more misconduct

Why do researchers commit misconduct? A new preprint offers some clues

Daniele Fanelli

“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of papers from PLOS ONE shown in earlier research to have included manipulated images to test what factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.

Retraction Watch (RW): This paper builds on a previous study by three of your co-authors, on the rate of inappropriate image manipulation in the literature. Can you explain how it took advantage of those findings, and why that was an important data set? Continue reading Why do researchers commit misconduct? A new preprint offers some clues