Medical journal publishes a letter on AI with a fake reference to itself

We’ve seen all kinds of articles that got published despite having references that don’t exist. But this was a new one: a paper with a made-up reference to the journal in which it appears.

While nonexistent references can indicate the use of a large language model in generating text, the authors maintain they used AI according to the journal’s guidelines. 

The letter to the editor, published in December 2024 in Intensive Care Medicine, explored ways AI could help clinicians monitor blood circulation in patients in intensive care units. The 750-word letter included 15 references.

We were able to locate the cited papers for five of the references, although one had an error in the publication year, and another had a different author order, page numbers and slight variations in the title. For the remaining 10, we couldn’t find articles with matching titles, either in the journals cited or in any journal at all. 

Reference 11 was to a paper on integrating AI-driven hemodynamic monitoring in intensive care, published in Intensive Care Medicine. We could find no such article in the journal, nor any article in the journal by the listed authors. 

Intensive Care Medicine is the journal of the European Society of Intensive Care Medicine and is published by Springer Nature. On November 4, the publisher added an editor’s note to the article stating “concerns regarding the presence of nonexistent references have been raised.” 

On November 29, the editor-in-chief retracted the letter. “The authors have stated that these non-existent references resulted from the use of generative AI to convert the PubMed IDs of cited articles into a structured reference list,” the retraction notice states. “As a result, the Editor-in-Chief no longer has confidence in the reliability of the contents of the article.”
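
The notice doesn’t detail how the authors prompted the AI, but converting PubMed IDs into formatted references is a task that doesn’t need a language model at all: NCBI’s public E-utilities service returns the actual metadata on record for each ID. As a rough illustration — not the authors’ workflow; the endpoint and field names are standard E-utilities, while the reference formatting is our own arbitrary choice — a deterministic lookup might look like this:

```python
import requests

# NCBI E-utilities "esummary" endpoint: returns citation metadata for a PubMed ID.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pmid_to_reference(pmid: str) -> str:
    """Look up a PubMed ID and format the record as a plain-text reference."""
    resp = requests.get(
        EUTILS,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    rec = resp.json()["result"][pmid]  # metadata record for this ID
    authors = ", ".join(a["name"] for a in rec.get("authors", []))
    # The layout below is illustrative, not any particular journal's style.
    return (
        f"{authors}. {rec['title']} {rec['source']}. "
        f"{rec['pubdate']};{rec.get('volume', '')}:{rec.get('pages', '')}."
    )

# Hypothetical usage (any valid PMID works):
# print(pmid_to_reference("12345678"))
```

Because a lookup like this returns whatever record PubMed actually holds, it either yields the real citation or fails outright; it cannot invent an article, which is precisely the failure mode of asking a generative model to perform the same conversion.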

The retraction notice also states “the peer review process had not been carried out in accordance with the journal’s editorial policies.”

Jordan Schilling, publishing director for medicine at Springer Journals, told us the publisher first learned of concerns with the article in January 2025, but did not specify whether those concerns involved the references or the peer review process. “Whilst we endeavour to complete our investigations as swiftly and efficiently as possible, we do so with care to ensure the integrity of the scientific record,” Schilling said. “However, we appreciate that delays to investigations can be frustrating, and we apologise for the length of time taken in this case.”

Alexander Vlaar, professor of intensive care medicine at Amsterdam University Medical Center, and the corresponding author on the letter, referred us to his institution’s press office. “The content of the letter was original; no AI was used beyond what is allowed by the publisher,” press officer Edith Verheul told us by email. “In the retraction comment by the editor and publisher it is confirmed that these inaccuracies were the result of a formatting error caused by the permitted use of AI. For this reason, publishing a correction would have been a more appropriate response. However, this is a decision made by the editorial board, as they indicated that the review process was not properly followed. This, however, is beyond the authors’ control.”

The journal’s author guidelines do specify that large language models may be used for copy editing — and also note that the authors are ultimately responsible for the content:

The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.

While far from complete, the list of publications with fake references includes an article on whistleblowing in an ethics journal (which has now been retracted), an article with a reference to a nonexistent article by one of our cofounders, and an entire book full of fake citations. The source of the issue is often hallucinations generated by LLMs like ChatGPT.

