One in 277 PubMed-indexed papers in 2026 shows fabricated references, says analysis

Figure from correspondence to The Lancet by Maxim Topaz and colleagues.

Fabricated citations in the biomedical literature have increased 12-fold in two years, according to an audit of nearly 2.5 million papers, published today as a letter in The Lancet.

The analysis of articles indexed in PubMed found that about one in 277 papers published in the first seven weeks of 2026 referenced a paper that didn’t exist. That was a jump from 2025’s rate of one in 458 and 2023’s one in 2,828. The researchers, led by Maxim Topaz of Columbia University’s Data Science Institute, used AI to “distinguish genuine fabrications from formatting discrepancies such as informally abbreviated titles.”

Topaz’s group traced the sharpest increase in hallucinated references to mid-2024, which they note coincided with the rise of AI writing tools. The findings come as Nature reported last month that tens of thousands of publications from 2025 “might include invalid references generated by AI.” Retraction Watch has seen its fair share of reports of hallucinated citations generated by LLMs like ChatGPT.

NEJM retracts case study for AI-manipulated imagery

An “Images in Clinical Medicine” item in the New England Journal of Medicine has been retracted after the authors acknowledged using AI to alter the photo. Y. Wang, X. Mu/© The New England Journal of Medicine (2026).

The New England Journal of Medicine has retracted a clinical image after the authors admitted the picture was manipulated with artificial intelligence.

The short piece, published April 18, reported the case of an 87-year-old man who suffered lung damage after exposure to a forest fire. The report included a startling image of black “casts” taken from the man’s airways, the size of which can be gauged by a tape measure at the top of the picture.

The dramatic visual drew attention in the media (and one news outlet has already noted the retraction at the time of this writing). But the authors, Yuling Wang and Xiangdong Mu, of Daxing Teaching Hospital and Beijing Tsinghua Changgung Hospital, respectively, acknowledged having used AI to superimpose the tape ruler in the figure. 

Guest post: Forget pickles and ice cream. I published a fake paper on pregnancy cravings for prime numbers

Image generated by Google Gemini

I had grown weary of the constant stream of spam invitations to submit manuscripts to journals and to attend fake conferences on the other side of the world, a trend extensively studied in academia. The last straw: a solicitation from the Clinical Journal of Obstetrics and Gynecology, well outside my work in mathematics education.

Accepting the challenge, I decided to submit a deliberately nonsensical, AI-generated manuscript, to observe how the individuals behind these supposed journals operate.

In October 2025, I wrote to someone named Henry Jackson, who had sent the article invitation in August (despite the fact that no such person is listed on the journal’s website). I sent a manuscript generated entirely by ChatGPT to test how far a publication created with zero genuine effort could go and whether there was any filtering mechanism in place to prevent a meaningless article from being published. 

Springer Nature flags paper with fabricated reference to article (not) written by our cofounder

Update, Nov. 24, 2025, 5:48 p.m. UTC: This story was updated to add comment from Mohammad Abdollahi, the editor-in-chief of the journal and last author of the paper.


Tips we get about papers and books citing fake references have skyrocketed this year, tracking closely with the rise of ChatGPT and other generative large language models. One in particular hit close to home: A paper containing a reference to an article by our cofounder Ivan Oransky that he did not write.

The paper with the nonexistent reference, published November 13 in DARU Journal of Pharmaceutical Sciences, criticizes platforms for post-publication peer review — and PubPeer specifically — as being vulnerable to “misuse” and “hyper-skepticism.” Five of the paper’s 17 references do not appear to exist, three others have incorrect DOIs or links, and one has been retracted. 
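Malformed DOIs like these are often catchable before any database lookup, since every real DOI follows a fixed syntax: the prefix “10.”, a numeric registrant code, a slash, then a suffix. Below is a minimal sketch of such a first-pass syntax check; the reference format and field names are hypothetical illustrations, not the structure of the DARU paper’s reference list, and a syntactically valid DOI would still need to be resolved to confirm the cited work exists.

```python
import re

# Loose syntactic pattern for a DOI: "10.", a 4-9 digit registrant
# code, a slash, then a non-whitespace suffix. Passing this check
# does NOT prove the DOI resolves to a real article.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def flag_suspect_dois(references):
    """Return the references whose "doi" field fails the syntax check.

    `references` is a list of dicts with a "doi" key -- a hypothetical
    input format used only for this sketch.
    """
    return [ref for ref in references if not DOI_PATTERN.match(ref.get("doi", ""))]

# Hypothetical entries: one well-formed DOI, one garbled one.
refs = [
    {"title": "Plausible entry", "doi": "10.1186/s40199-025-00000-1"},
    {"title": "Garbled entry", "doi": "doi:10.1186 s40199"},
]
print([r["title"] for r in flag_suspect_dois(refs)])  # flags the garbled entry only
```

A second pass could then query a registry such as Crossref to check whether each surviving DOI actually resolves, which is the step that would catch wholly fabricated references with well-formed identifiers.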

One of the fabricated references credits our cofounder Ivan Oransky with a nonexistent article, “A new kind of watchdog is shaking up research,” purportedly published in Nature in 2019. 

AI unreliable in identifying retracted research papers, says study

LLMs don’t reliably identify retracted papers, a new study finds. (Image: DALL-E)

Large language models should not be used to weed out retracted literature, a study of 21 chatbots concludes. Not only were the chatbots unreliable at correctly identifying retracted papers, they also produced different results when given the same prompts.

The “very simple study,” as lead author Konradin Metze called it, used LLM chatbots such as ChatGPT, Copilot, and Gemini to see whether they would successfully identify retracted articles in a list of references.

Metze and colleagues compiled a list of 132 publications. The list comprised the 50 most-cited retracted papers by Joachim Boldt, a prolific German researcher who also sits at the top of the Retraction Watch Leaderboard. Another 50 were Boldt’s most cited non-retracted papers. The rest were works by other researchers with the last name “Boldt” and first initial “J.” The study authors prompted each chatbot to indicate which of the listed references had been retracted.

Research integrity conference hit with AI-generated abstracts

The first of three themes for next year’s World Conference on Research Integrity will be the risks and benefits of artificial intelligence for research integrity. In an ironic and possibly predictable turn of events, the conference has received “an unusually large proportion” of off-topic abstracts that show signs of being written by generative AI.

The call for abstracts for the conference, set for May in Vancouver, closed a month ago. Last week, peer reviewers received an email with “URGENT” in the subject line.

“If you haven’t already reviewed the 9th WCRI abstracts that have been allocated to you, please take note of the following,” the email read. “We’ve received several signals that an unusually large proportion of the abstracts are completely off-topic and might have been written by some form of generative AI.”

AI-Reddit study leader gets warning as ethics committee moves to ‘stricter review process’

University of Zurich

The university ethics committee that reviewed a controversial study that deployed AI-generated posts on a Reddit forum made recommendations the researchers did not heed, Retraction Watch has learned. 

The principal investigator on the study has received a formal warning, and the university’s ethics committees will implement a more rigorous review process for future studies, a university official said.

As we reported yesterday, researchers at the University of Zurich tested whether a large language model, or LLM, could persuade people to change their minds by posting messages on the Reddit subforum r/ChangeMyView (CMV). The moderators of the forum notified the subreddit about the study and their interactions with the researchers in a post published April 26.
