Paper rejected for AI, fake references published elsewhere with hardly anything changed

One journal’s trash is another’s treasure – until a former peer reviewer stumbles across it and sounds an alarm.

In April, communications professor Jacqueline Ewart got a Google Scholar notification about a paper published in World of Media that she had reviewed, and recommended rejecting, for another journal several months earlier.

At the time, she recommended against publishing the article, “Monitoring the development of community radio: A comprehensive bibliometric analysis,” in the Journal of Radio and Audio Media, or JRAM, because she had concerns the article was written by AI. She also noticed several references, including one she supposedly wrote, were fake. 

While authors often seek to publish rejected articles elsewhere, Ewart, of Griffith University in Queensland, Australia, said she was shocked because the version that appeared in World of Media was nearly identical to the manuscript she had seen. The authors changed one word in the title, swapping “progression” for “development.”

In an April 7 email seen by Retraction Watch, Ewart raised concerns about the paper to Anna Gladkova, the editor in chief of World of Media, which is published by the journalism faculty at Lomonosov Moscow State University in Russia. 

Ewart emphasized the authors were informed of why their submission was rejected at JRAM, and they still had not made the changes in the published version. The next day, Alexandra Bondarenko, an editorial assistant, responded saying the journal had “every reason to undertake an investigation and re-check the mentioned article” and said the text “is being reprocessed by the anti-plagiarism system.”

On May 2, Ewart followed up and noted the paper was still available online. After that, the journal removed the article and added a note on the website: “The article has been taken off-line, as it is currently the subject of an internal inquiry. A decision will be communicated later.” 

Amit Verma, an author on the paper and a professor at Manipal University Jaipur in India, said the authors “intended to revise and improve the paper based on the feedback from the World of Media review team,” but the changes amounted to one word in the title. However, he also admitted the “title change alone was insufficient in this instance, but we did it because progression was not suitable for the journey of community radio from 2000 to 2024, so I wrote the word ‘development.’” 

Verma also told us the researchers used AI tools “to assist with certain aspects of the manuscript, such as grammar and style only. We used standard bibliometric tools for data collection and management, as well as basic proofreading tools.”

As for the incorrect references Ewart flagged, Verma said the researchers used sources “obtained from Indian institutional repositories, such as Shodhganga, and Google Scholar, where indexing and persistent availability can vary over time.” 

This explanation doesn’t seem to address a citation to an apparent 2017 article attributed to Ewart, “The impact of digital convergence on community radio engagement,” purportedly published in the Media Studies Journal. The journal’s website lists no paper with that title, and Ewart told editors in her peer review that she has not published anything about community radio since 2012.

Verma emphasized the incorrect citations were “a matter of source stability and perhaps an over-reliance on less formal databases.”

He told us World of Media has allowed the authors to submit a revised version of the manuscript for publication. The journal did not respond to our request for comment. 



9 thoughts on “Paper rejected for AI, fake references published elsewhere with hardly anything changed”

  1. When I was an undergraduate researcher on my very first project, I got a stern talking-to from the lab head, which amounted to “Never cite something you have not read.”

    These folks need a similar talking-to. Evidently they did not read this article, since it *does not exist*: they had no business citing it.

    1. Nowadays, with LLMs, it is more like “never (try to) publish something you haven’t read”. During the past six months alone, I’ve reviewed at least three manuscripts with non-existent references and other pure nonsense. And not only that; I also see similar LLM-generated nonsense in some reports my fellow peer reviewers submit.

      1. Good grief — it’s bad enough that some authors are using AI/LLM to “research” their papers, but for a reviewer to do the same is inexcusable!

  2. Many years ago I wrote several pieces along the lines of Never Cite Sight Unseen. I totally failed to get this simple and obvious maxim included in the Instructions to Authors of major journals. The situation is far worse nowadays, since the careless, giveaway mistakes in citations are usually absent from major citation databases.

    1. One of my most cited papers (692 citations in Google Scholar) would have had at least 200 fewer if those who cited it had read it, mainly because they would then not have used the methodology they did, as we show it is not suitable.

      Reminds me of Britta Stordal’s paper “Citations, citations everywhere, but did anyone read the paper?”: her own paper was cited in favor of a claim, while she had written it to show that claim was wrong.

  3. Best way to cite papers? Read them first, then you know they are not a figment of either your imagination or that of confabulating LLM. Call me old-fashioned…

    1. I’m so glad to be at the tail end of my career. I’m convinced many researchers these days read only the abstract of articles. This means they are unable to fully evaluate findings or judge the validity of the methodology used. Of course, you’d expect the journal review process to have checked those aspects, but it seems that’s not as thorough as it used to be.
      There is also much copy/pasting of material. This is often obvious throughout submissions from the randomness of formatting, citation numbering systems, capitalization, and other text features.
      And there is a trend toward reliance on review articles rather than primary research reports. Frustratingly, I even get review papers that are basically reviews of reviews!
      The use of AI is now compounding all these problems. I recently had an article in which AI was used to process and analyze data, as well as to polish the text. The authors used elite AI versions and said they verified the findings themselves. Whilst the basic findings were consistent across revised submissions, the data shown in figures were not. Yet these expert users did not notice this in the stringent verification they claimed they did.
      In searching for citations for my own use and coming up with none relevant, I asked ChatGPT to do it. It found some but left out the authors’ names entirely. I asked it to provide them. It gave me names but for other articles. When pressed to provide the names for the original ones it found, it told me the articles in fact did not exist!
      If AI is doing this in the simple process of searching for and utilizing text, what will it be doing with actual data and data processing? I had the hope it would be able to analyze the massive gene-expression datasets being produced. But I currently have little trust in what it produces. It is also degrading my already rather shaky trust in submissions.

  4. Regarding the Indian professor’s comments: “a good tradesman never blames his or her tools.” People look to blame other sources rather than look within.
