Weekend reads: The world’s most cited cat; ‘Is peer review failing its peer review?’; Oxford prof accused of stealing research

Would you consider a donation to support Weekend Reads, and our daily work?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up past 400. There are more than 49,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 250 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our list of nearly 100 papers with evidence they were written by ChatGPT?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].


3 thoughts on “Weekend reads: The world’s most cited cat; ‘Is peer review failing its peer review?’; Oxford prof accused of stealing research”

  1. This is such a bizarre world. I clicked on the “worthwhile reads elsewhere” link to “Researchers say their ‘Research Transparency Index’…” and the very first reference in the article doesn’t appear to exist. Plausible-sounding, nonexistent references are a hallmark of AI bot writing.

    The article “The research transparency index” in the Elsevier journal Leadership Quarterly leads with a citation to D. Gelles (2023), which is “When professors cheat: A Harvard star is accused and a culture is questioned. The New York Times (2023).” I thought the citation was odd, and having followed that case closely, searched for it. The NYT does have a correspondent with that name, who writes on climate, but I could find no NYT article by that title, or anything on the topic by Gelles. (A sketch of how such a lookup might be automated appears after these comments.)

  2. Right, Chris. This article is supposed to be read alongside another recommended read further down the list:
    “‘This Reference Does Not Exist’: Are large language models accurate when generating citations?”
    For further reading, do check out the last item on the list:
    “New term: ‘Bothorship.’ And ‘botshit.’”

    Our favorite weekend reads are gradually being enriched by AI-written articles. The following is from the previous weekend:
    “Research integrity in the era of artificial intelligence: Challenges and responses.” (AI-74%, mix-26%, human-0%).

  3. Yes, the fact that LLMs come up with truthy-sounding, made-up references is a well-known problem. However, for Herman Aguinis and co-authors to apparently use LLMs to help write their “Research Transparency Index” article is both rich and, at minimum, misdemeanor-grade scientific misconduct. In the days before the widespread availability of LLMs, the analogous ethical violation would have been to lift references from someone else’s writings and insert them into one’s own article without bothering to read the original sources, or even to check their accuracy.

    But “Our favorite weekend reads are gradually being enriched by AI-written articles.” Enriched? Really? More like adulterated.
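Both comments above describe the same manual check: take a suspicious citation and see whether any real record matches it. For scholarly references, that lookup can be partly automated against the public Crossref REST API. The following is a minimal sketch, assuming the third-party requests library is installed; note that Crossref only indexes DOI-registered works, so a newspaper citation like the purported Gelles piece would still require searching the outlet’s own archive.

```python
# A minimal sketch of automating the citation check described in the comments,
# using the public Crossref REST API (https://api.crossref.org). It covers
# scholarly works with DOIs only; news articles need a separate search.
import requests

def crossref_matches(citation_text: str, rows: int = 3):
    """Return the top Crossref matches for a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["(untitled)"])[0],
            "doi": item.get("DOI"),
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
        }
        for item in items
    ]

if __name__ == "__main__":
    # If none of the top matches resembles the cited work, that is a red flag
    # worth manual follow-up -- absence here is suggestive, not conclusive.
    for match in crossref_matches(
        "Gelles, When professors cheat: A Harvard star is accused "
        "and a culture is questioned, New York Times, 2023"
    ):
        print(match)
```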
