Springer Nature flags paper with fabricated reference to article (not) written by our cofounder

Update, Nov. 24, 2025, 5:48 p.m. UTC: This story was updated to add comment from Mohammad Abdollahi, the editor-in-chief of the journal and last author of the paper.

Tips we get about papers and books citing fake references have skyrocketed this year, tracking closely with the rise of ChatGPT and other large language models. One tip in particular hit close to home: a paper citing an article by our cofounder Ivan Oransky that he never wrote.

The paper with the nonexistent reference, published November 13 in DARU Journal of Pharmaceutical Sciences, criticizes platforms for post-publication peer review — and PubPeer specifically — as being vulnerable to “misuse” and “hyper-skepticism.” Five of the paper’s 17 references do not appear to exist, three others have incorrect DOIs or links, and one has been retracted. 

One of the fabricated references credits Oransky with a nonexistent article, “A new kind of watchdog is shaking up research,” purportedly published in Nature in 2019.
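
Catching references like these doesn’t require special tooling. As a minimal illustration (not the method Springer Nature or anyone involved actually used), a reference list can be spot-checked by resolving each DOI against the Crossref REST API, which returns HTTP 404 for identifiers it has no record of. The DOIs below are hypothetical placeholders, not the paper’s actual references:

```python
import requests

# Placeholder DOIs for illustration -- not the references from the paper.
dois = [
    "10.1234/placeholder.one",
    "10.1234/placeholder.two",
]

for doi in dois:
    # Crossref's REST API returns 404 for DOIs it has no record of.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if resp.status_code == 404:
        print(f"{doi}: no Crossref record -- a red flag worth checking by hand")
    elif resp.ok:
        meta = resp.json()["message"]
        title = (meta.get("title") or ["(no title)"])[0]
        print(f"{doi}: resolves to: {title}")
    else:
        print(f"{doi}: lookup failed with HTTP {resp.status_code}")
```

A missing Crossref record is a signal, not proof, of fabrication: not every journal registers its DOIs there, and a DOI that resolves can still be attached to the wrong bibliographic details, as with the three incorrect links in this paper.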

AI unreliable in identifying retracted research papers, says study

LLMs don’t reliably identify retracted papers, a new study finds. (Image: DALL-E)

Large language models should not be used to weed out retracted literature, a study of 21 chatbots concludes. Not only were the chatbots unreliable at correctly identifying retracted papers, they also spat out different results when given the same prompts.

The “very simple study,” as lead author Konradin Metze called it, prompted LLM chatbots, including ChatGPT, Copilot, and Gemini, to see whether they could correctly identify retracted articles in a list of references.

Metze and colleagues compiled a list of 132 publications: the 50 most-cited retracted papers by Joachim Boldt, a prolific German researcher who sits at the top of the Retraction Watch Leaderboard; Boldt’s 50 most-cited non-retracted papers; and 32 works by other researchers with the last name “Boldt” and the first initial “J.” The study authors prompted each chatbot to indicate which of the listed references had been retracted.
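
The authors queried the chatbots directly, and we have not seen their code; as a rough sketch of the protocol, assuming the OpenAI Python client as a stand-in for one of the 21 chatbots and hypothetical placeholder reference strings, the repeated-prompt test might look like this:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-ins for the study's 132 references.
references = [
    "Boldt J, et al. (placeholder reference 1)",
    "Boldt J, et al. (placeholder reference 2)",
]

prompt = (
    "Which of the following publications have been retracted? "
    "Answer with the numbers of the retracted items only.\n\n"
    + "\n".join(f"{i}. {ref}" for i, ref in enumerate(references, start=1))
)

# Sending the identical prompt several times exposes the study's second
# finding: the same question does not guarantee the same answer.
for run in range(1, 4):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the study spanned 21 chatbots
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"run {run}: {resp.choices[0].message.content}")
```

If the runs disagree with one another, or with the ground truth in a source like the Retraction Watch Database, that is exactly the unreliability the study describes.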

Research integrity conference hit with AI-generated abstracts

The first of three themes for next year’s World Conference on Research Integrity will be the risks and benefits of artificial intelligence for research integrity. In an ironic and possibly predictable turn of events, the conference has received “an unusually large proportion” of off-topic abstracts that show signs of being written by generative AI.

The call for abstracts for the conference, set for May in Vancouver, closed a month ago. Last week, peer reviewers received an email with “URGENT” in the subject line.

“If you haven’t already reviewed the 9th WCRI abstracts that have been allocated to you, please take note of the following,” the email read. “We’ve received several signals that an unusually large proportion of the abstracts are completely off-topic and might have been written by some form of generative AI.”

AI-Reddit study leader gets warning as ethics committee moves to ‘stricter review process’

University of Zurich

The university ethics committee that reviewed a controversial study that deployed AI-generated posts on a Reddit forum made recommendations the researchers did not heed, Retraction Watch has learned. 

The principal investigator on the study has received a formal warning, and the university’s ethics committees will implement a more rigorous review process for future studies, a university official said.

As we reported yesterday, researchers at the University of Zurich tested whether a large language model, or LLM, could persuade people to change their minds by posting messages on the Reddit subforum r/ChangeMyView (CMV). The moderators of the forum notified the subreddit about the study and their interactions with the researchers in a post published April 26.
