The first of three themes for next year’s World Conference on Research Integrity will be the risks and benefits of artificial intelligence for research integrity. In an ironic and possibly predictable turn of events, the conference has received “an unusually large proportion” of off-topic abstracts that show signs of being written by generative AI.
The call for abstracts for the conference, set for May in Vancouver, closed a month ago. Last week, peer reviewers received an email with “URGENT” in the subject line.
“If you haven’t already reviewed the 9th WCRI abstracts that have been allocated to you, please take note of the following,” the email read. “We’ve received several signals that an unusually large proportion of the abstracts are completely off-topic and might have been written by some form of generative AI.”
We reached out to the conference co-chairs to find out how many abstracts the conference received, how many seem to be AI-generated, and other details. Lex Bouter, professor emeritus of methodology and integrity at Vrije Universiteit Amsterdam, declined to answer specific questions while the team sorts out the issue. He provided a statement nearly identical to the text emailed to peer reviewers.
“Many of these abstracts seem to have single authors and unusual affiliations (perhaps fake),” the email stated. “This seems to be a new phenomenon we’re experiencing at WCRI, which may also partly explain why we received so many abstracts.”
The email noted that conference organizers have checked for plagiarism in abstracts since finding several cases of it in submissions to the 6th WCRI in 2019. Among abstracts for the 2026 conference, the plagiarism software Copyleaks found a few cases of plagiarism — and it indicated “a substantial amount” showed signs of generative AI (GAI) use.
“We believe many applicants probably used GAI for language and grammar polishing, and we believe that to be acceptable,” Bouter said by email. “But among the submissions there are a proportion that are clearly off-topic and low quality that look like they were generated by GAI.”
As with plagiarized abstracts in past years, almost all of the submissions flagged this year came from authors who also applied for travel grants, the email to reviewers stated.
“Consequently, we intend to further examine abstracts with AI scores exceeding 20% that will likely be accepted based on average review scores and are associated with travel grant applications,” Bouter said. “We will subsequently reject the abstracts (and the travel grant application) for which we believe that unacceptable GAI use has occurred.”
The organizers recommended that reviewers give the lowest score to abstracts that are completely off-topic. The review process is set to wrap up next week.
