Do men or women retract more? A study found the answer is … complicated 


Longtime Retraction Watch readers know the scientists on our Leaderboard have changed over the years. But one characteristic has remained relatively constant: There are few women on that list – in fact, rarely more than one at a time.

So when a recent paper dove into whether retraction rates vary by the gender of the authors, we were curious what the researchers found.

The team, from the Sorbonne’s Study Group on Methods of Sociological Analysis (GEMASS) in Paris, sampled 1 million articles from the OpenAlex database, then cross-referenced that sample against the Retraction Watch database.
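
For readers curious how such a cross-check can work in practice, here is a minimal, hypothetical Python sketch (not the authors’ actual pipeline). It assumes two CSV exports, an OpenAlex sample and a Retraction Watch export; the file names and column names below are our own illustrative assumptions. It simply counts how many sampled DOIs appear in the retraction list.

```python
# Illustrative sketch only -- not the study's actual pipeline.
# Assumed inputs (hypothetical file and column names):
#   openalex_sample.csv   -- sampled OpenAlex records with a "doi" column
#   retraction_watch.csv  -- Retraction Watch export with an "OriginalPaperDOI" column
import csv

def load_dois(path, column):
    """Return a set of normalized DOIs from one column of a CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row[column].strip().lower()
            for row in csv.DictReader(f)
            if row.get(column)
        }

sample = load_dois("openalex_sample.csv", "doi")
retracted = load_dois("retraction_watch.csv", "OriginalPaperDOI")

# Sampled articles that also appear in the retraction database
overlap = sample & retracted
print(f"{len(overlap)} of {len(sample)} sampled articles have been retracted")
```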

Continue reading Do men or women retract more? A study found the answer is … complicated 

‘A threat to the integrity of scientific publishing’: How often are retracted papers marked that way?

Caitlin Bakker

How well do databases flag retracted articles?

There has been a lot of interest recently in the quality of retraction notices and notifications, including new guidelines from the National Information Standards Organization (NISO; our Ivan Oransky was a member of the committee) and a new study to which Ivan and our Alison Abritis contributed.

In another new paper, “Identification of Retracted Publications and Completeness of Retraction Notices in Public Health,” a group of researchers set out to study “how clearly and consistently retracted publications in public health are being presented to researchers.”

Spoiler alert: Not very.

We asked corresponding author Caitlin Bakker, of the University of Regina — who also chaired the NISO committee — some questions about the findings and their implications.

Continue reading ‘A threat to the integrity of scientific publishing’: How often are retracted papers marked that way?

‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

Andrew Grey

Retraction Watch readers may recall the names Jun Iwamoto and Yoshihiro Sato, who now sit in positions 3 and 4 of our leaderboard of retractions, Sato with more than 100. Readers may also recall the names Andrew Grey, Alison Avenell and Mark Bolland, whose sleuthing was responsible for those retractions. In a recent paper in Accountability in Research, the trio looked at the timeliness and content of the notices journals attached to those papers. We asked them some questions about their findings.

Retraction Watch (RW): Your paper focuses on the work of Yoshihiro Sato and Jun Iwamoto. Tell us a bit about this case.

Continue reading ‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

How can universities and journals work together better on misconduct allegations?

Elizabeth Wager

Retractions, expressions of concern, and corrections often arise from critiques sent by readers, whether those readers are others in the field, sleuths, or other interested parties. In many of those cases, journals seek the input of authors’ employers, often universities. In a recent paper in Research Integrity and Peer Review, longtime scientific publishing consultant Elizabeth Wager and Lancet executive editor Sabine Kleinert, writing on behalf of the Cooperation & Liaison between Universities & Editors (CLUE) group, offer recommendations on best practice for these interactions. Here, they respond to several questions about the paper.

Retraction Watch (RW): Many would say that journals can take far too long to act on retractions and other signaling to readers about problematic papers. Journals (as well as universities) often point to the need for due process. So what would a “prompt” response look like, as recommended by the paper?

Continue reading How can universities and journals work together better on misconduct allegations?

What happened when a group of sleuths flagged more than 30 papers with errors?

Jennifer Byrne

Retraction Watch readers may recall the name Jennifer Byrne, whose work as a scientific sleuth we first wrote about four years ago, and have followed ever since. In a new paper in Scientometrics, Byrne, of New South Wales Health Pathology and the University of Sydney, working with researchers including Cyril Labbé, known for his work detecting computer-generated papers, and Amanda Capes-Davis, who works on cell line identification, describes what happened when they approached publishers about errors in 31 papers. We asked Byrne several questions about the work.

Retraction Watch (RW): You focused on 31 papers with a “specific reagent error.” Can you explain what the errors were?

Continue reading What happened when a group of sleuths flagged more than 30 papers with errors?

Journal editor breaks protocol to thank an anonymous whistleblower

As Retraction Watch readers may recall, we’ve been highlighting — and championing — the work of anonymous whistleblowers throughout the 10-year history of the blog. Our support for such anonymity, however, is not universally shared. 

In 2011, for example, in our column at Lab Times (unfortunately no longer online), we wrote:

Continue reading Journal editor breaks protocol to thank an anonymous whistleblower

“[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

We’re pleased to present an excerpt from Stuart Ritchie’s new book, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.

One of the best-known, and most absurd, scientific fraud cases of the twentieth century also concerned transplants – in this case, skin grafts. While working at the prestigious Sloan-Kettering Cancer Institute in New York City in 1974, the dermatologist William Summerlin presaged Paolo Macchiarini—an Italian surgeon who in 2008 published a (fraudulent) blockbuster paper in the top medical journal the Lancet on his successful transplant of a trachea—by claiming to have solved the transplant-rejection problem that Macchiarini encountered. Using a disarmingly straightforward new technique in which the donor skin was incubated and marinated in special nutrients prior to the operation, Summerlin had apparently grafted a section of the skin of a black mouse onto a white one, with no immune rejection. Except he hadn’t. On the way to show the head of his lab his exciting new findings, he’d coloured in a patch of the white mouse’s fur with a black felt-tip pen, a deception later revealed by a lab technician who, smelling a rat (or perhaps, in this case, a mouse), proceeded to use alcohol to rub off the ink. There never were any successful grafts on the mice, and Summerlin was quickly fired.

Continue reading “[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

Journals are failing to address duplication in the literature, says a new study

Mario Malički

How seriously are journals taking duplicated work that they publish? That was the question Mario Malički and colleagues set out to answer six years ago. And last month, they published their findings in Biochemia Medica.

The upshot? Journals have a lot of work to do.

Continue reading Journals are failing to address duplication in the literature, says a new study

Which kind of peer review is best for catching fraud?

Serge Horbach

Is peer review a good way to weed out problematic papers? And if it is, which kinds of peer review? In a new paper in Scientometrics, Willem Halffman, of Radboud University, and Serge Horbach, of Radboud University and Leiden University, used our database of retractions to try to find out. We asked them several questions about the new work.

Retraction Watch (RW): You write that “journals’ use of peer review to identify fraudulent research is highly contentious.” Can you explain what you mean?

Continue reading Which kind of peer review is best for catching fraud?

Want to tell if a paper has been retracted? Good luck

Caitlin Bakker

Nowadays, there are many ways to access a paper — on the publisher’s website, on MEDLINE, PubMed, Web of Science, Scopus, and other outlets. So when the publisher retracts a paper, do these outlets consistently mark it as such? And if they don’t, what’s the impact? Researchers Caitlin Bakker and Amy Riegelman at the University of Minnesota surveyed more than one hundred retractions in mental health research to try to get at some answers, and published their findings in the Journal of Librarianship and Scholarly Communication. We spoke to Bakker about the potential harm to patients when clinicians don’t receive consistent notifications about retracted data.

Retraction Watch: You note: “Of the 144 articles studied, only 10 were represented as being retracted across all resources through which they were available. There was no platform that consistently met or failed to meet all of [the Committee on Publication Ethics (COPE)’s] guidelines.” Can you say more about these findings, and the challenges they may pose?

Continue reading Want to tell if a paper has been retracted? Good luck