Do men or women retract more often? A new study weighs in

The male/female retraction ratio in Zheng and colleagues’ dataset showed that male first authors have a higher retraction rate than female first authors. Source: E-T Zheng et al/J of Informetrics 2025

When you look at retracted papers, you find more men than women among the authors. But more papers are authored by men than by women overall. A recent study comparing retraction rates, not just absolute numbers, among first and corresponding authors confirms that men retract disproportionately more papers than women.

The paper, published May 20 in the Journal of Informetrics, is the first large-scale study using the ratio of men’s and women’s retraction rates, said study coauthor Er-Te Zheng, a data scientist at The University of Sheffield. The researchers also analyzed gender differences in retractions across scientific disciplines and countries.

Zheng and his colleagues examined papers from a database of over 25 million articles published from 2008 to 2023, about 22,000 of which were retracted. They collected the reasons for retraction from the Retraction Watch Database, and used several software tools to infer each author’s gender based on their name and country of affiliation.

Continue reading Do men or women retract more often? A new study weighs in

How do retractions impact researchers’ career paths and collaborations?

About 46% of authors leave their publishing careers around the time of a retraction, a new study has found.
Source: SA Memon et al/Nat Hum Behav 2025

Several studies have tackled the issue of what effect a retracted paper has on a scientist’s reputation and publication record. The answer is, by and large, it depends: The contribution the researcher made to the paper, their career stage, the field of study and the reason for the retraction all play a role.

Three researchers from New York University’s campus in Abu Dhabi wanted to better understand how a retraction affects a scientist’s career trajectory and future collaborations. Using the Retraction Watch Database, they looked at papers retracted between 1990 and 2015, and merged that data with Microsoft Academic Graph to generate information on researchers’ pre- and post-retraction publication patterns, as well as their collaboration networks. They also looked at Altmetric scores of retractions to factor in the attention a retraction got.

From that data, they extrapolated whether and when researchers with retracted papers left scientific publishing, and looked for trends in researchers’ collaboration networks before and after the retraction.

Continue reading How do retractions impact researchers’ career paths and collaborations?

Do men or women retract more? A study found the answer is … complicated 

A new study compares retraction rates between men and women.
Pexels

Longtime Retraction Watch readers know the scientists on our Leaderboard have changed over the years. But one characteristic has remained relatively constant: There are few women on that list – in fact, rarely more than one at a time.

So when a recent paper dove into whether retraction rates vary by the gender of the authors, we were curious what the authors found.

The team, from the Sorbonne Study Group on Methods of Sociological Analysis (GEMASS) in Paris, sampled 1 million articles from the OpenAlex database, then cross-referenced the sample against the Retraction Watch Database.

Continue reading Do men or women retract more? A study found the answer is … complicated 

‘A threat to the integrity of scientific publishing’: How often are retracted papers marked that way?

Caitlin Bakker

How well do databases flag retracted articles?

There has been a lot of interest recently in the quality of retraction notices and notifications, including new guidelines from the National Information Standards Organization (NISO; our Ivan Oransky was a member of the committee) and a new study to which Ivan and our Alison Abritis contributed.

In another new paper, “Identification of Retracted Publications and Completeness of Retraction Notices in Public Health,” a group of researchers set out to study “how clearly and consistently retracted publications in public health are being presented to researchers.”

Spoiler alert: Not very.

We asked corresponding author Caitlin Bakker, of the University of Regina — who also chaired the NISO committee — some questions about the findings and their implications.

Continue reading ‘A threat to the integrity of scientific publishing’: How often are retracted papers marked that way?

‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

Andrew Grey

Retraction Watch readers may recall the names Jun Iwamoto and Yoshihiro Sato, who now sit in positions 3 and 4 of our leaderboard of retractions, Sato with more than 100. Readers may also recall the names Andrew Grey, Alison Avenell and Mark Bolland, whose sleuthing was responsible for those retractions. In a recent paper in Accountability in Research, the trio looked at the timeliness and content of the notices journals attached to those papers. We asked them some questions about their findings.

Retraction Watch (RW): Your paper focuses on the work of Yoshihiro Sato and Jun Iwamoto. Tell us a bit about this case.

Continue reading ‘The notices are utterly unhelpful’: A look at how journals have handled allegations about hundreds of papers

How can universities and journals work together better on misconduct allegations?

Elizabeth Wager

Retractions, expressions of concern, and corrections often arise from critiques sent by readers, whether those readers are others in the field, sleuths, or other interested parties. In many of those cases, journals seek the input of authors’ employers, often universities. In a recent paper in Research Integrity and Peer Review, longtime scientific publishing consultant Elizabeth Wager and Lancet executive editor Sabine Kleinert, writing on behalf of the Cooperation & Liaison between Universities & Editors (CLUE) group, offer recommendations on best practice for these interactions. Here, they respond to several questions about the paper.

Retraction Watch (RW): Many would say that journals can take far too long to act on retractions and other signaling to readers about problematic papers. Journals (as well as universities) often point to the need for due process. So what would a “prompt” response look like, as recommended by the paper?

Continue reading How can universities and journals work together better on misconduct allegations?

What happened when a group of sleuths flagged more than 30 papers with errors?

Jennifer Byrne

Retraction Watch readers may recall the name Jennifer Byrne, whose work as a scientific sleuth we first wrote about four years ago and have followed ever since. In a new paper in Scientometrics, Byrne, of New South Wales Health Pathology and the University of Sydney, working with researchers including Cyril Labbé, known for his work detecting computer-generated papers, and Amanda Capes-Davis, who works on cell line identification, describes what happened when they approached publishers about errors in 31 papers. We asked Byrne several questions about the work.

Retraction Watch (RW): You focused on 31 papers with a “specific reagent error.” Can you explain what the errors were?

Continue reading What happened when a group of sleuths flagged more than 30 papers with errors?

Journal editor breaks protocol to thank an anonymous whistleblower

As Retraction Watch readers may recall, we’ve been highlighting — and championing — the work of anonymous whistleblowers throughout the 10-year history of the blog. Our support for such anonymity, however, is not universally shared. 

In 2011, for example, in our column at Lab Times (unfortunately no longer online), we wrote:

Continue reading Journal editor breaks protocol to thank an anonymous whistleblower

“[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

We’re pleased to present an excerpt from Stuart Ritchie’s new book, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.

One of the best-known, and most absurd, scientific fraud cases of the twentieth century also concerned transplants – in this case, skin grafts. While working at the prestigious Sloan-Kettering Cancer Institute in New York City in 1974, the dermatologist William Summerlin presaged Paolo Macchiarini—an Italian surgeon who in 2008 published a (fraudulent) blockbuster paper in the top medical journal the Lancet on his successful transplant of a trachea—by claiming to have solved the transplant-rejection problem that Macchiarini encountered. Using a disarmingly straightforward new technique in which the donor skin was incubated and marinated in special nutrients prior to the operation, Summerlin had apparently grafted a section of the skin of a black mouse onto a white one, with no immune rejection. Except he hadn’t. On the way to show the head of his lab his exciting new findings, he’d coloured in a patch of the white mouse’s fur with a black felt-tip pen, a deception later revealed by a lab technician who, smelling a rat (or perhaps, in this case, a mouse), proceeded to use alcohol to rub off the ink. There never were any successful grafts on the mice, and Summerlin was quickly fired.

Continue reading “[H]ow gullible reviewers and editors…can be”: An excerpt from Science Fictions

Journals are failing to address duplication in the literature, says a new study

Mario Malički

How seriously are journals taking duplicated work that they publish? That was the question Mario Malički and colleagues set out to answer six years ago. And last month, they published their findings in Biochemia Medica.

The upshot? Journals have a lot of work to do.

Continue reading Journals are failing to address duplication in the literature, says a new study