Would you consider a donation to support Weekend Reads, and our daily work?
The week at Retraction Watch featured:
- Did Flint water crisis set kids back in school? Paper saying so is ‘severely flawed,’ say critics
- Exclusive: Kavli prize winner threatens to sue critic for defamation
- Scopus is broken – just look at its literature category
- Supplement maker sues critic for defamation, spurring removal of accepted abstract
- ‘Mistakes were made’: Paper by department chair earns expression of concern as more questioned
Our list of retracted or withdrawn COVID-19 papers is up past 400. There are more than 49,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 250 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our list of nearly 100 papers with evidence they were written by ChatGPT?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “Engineering the world’s highest cited cat, Larry.” And what about Bruce Le Catt?
- “Is peer review failing its peer review?”
- “Oxford scientist accuses boss of stealing peanut allergy research.”
- A case against a lawyer in Bulgaria accused of plagiarism is not moving forward.
- “How I turned seemingly ‘failed’ experiments into a successful Ph.D.”
- “Should There Be Peer Review After Publication?”
- “In a first, botanists vote to remove offensive plant names from hundreds of species.”
- Elsevier’s head of journal development wants to help editors avoid losing their impact factors.
- “The ‘Risky Research Review Act’ would do more harm than good.”
- Researchers say “nearly half” of lab-made plasmids, a “workhorse of biology,” have defects.
- “How to fight fake papers: a review on important information sources and steps towards solution of the problem.”
- “A Tentative Venture into” NIH’s funding of replication studies.
- Researchers say their “Research Transparency Index…saves authors, students, reviewers, and editors time.”
- “Retractions in academic publishing: insights from highly ranked global universities.”
- “A study in ethics: the Chinese science detectives hard on the trail of academic misconduct.”
- “Why do Journals Continue to Publish Single-Authored Systematic Reviews?”
- “A Question of Consent”: the Human Genome Project’s “landmark paper may have misrepresented donor procedures.”
- “Court exonerates Kansas professor in China research fraud case” after 5 years.
- “Japanese chemistry institute sues archival site for hosting discontinued journal.”
- “Are reviewer scores consistent with citations?”
- A study finds the “punishment intensity for research misconduct” correlates with “authorship order,” “professional title,” and other authorship factors.
- “What To Do Once the Paper is Retracted.” Earlier, a guest post on NISO’s recommended practice.
- “35 papers of a scientist from CSIR’s Indian Institute of Toxicology Research retracted.”
- What’s the origin of “publish or perish”?
- Scientists “determined to change” the scam of journal publishing by creating a not-for-profit journal.
- “Elemental analysis under scrutiny again as competition raises accuracy questions.”
- “‘People don’t want to talk’: The taboo ‘zombie’ problem in medical science.”
- “DNA barcoding reveals a taxonomic fraud”: Doubts about the identity of scarab species Propomacrus muramotoae.
- “This Reference Does Not Exist”: Are large language models accurate when generating citations?
- “Publishing is Stressful: What Can We Do About It?”
- “Six academics sacked for research paper fraud,” 8 more being investigated.
- New term: “Bothorship.” And “botshit.”
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
This is such a bizarre world. I clicked on the “worthwhile reads elsewhere” link to “Researchers say their ‘Research Transparency Index…’” and the very first reference in the article doesn’t appear to exist. Plausible-sounding, non-existent references are a hallmark of AI bot writing.
The article “The research transparency index” in the Elsevier journal Leadership Quarterly leads with a citation to D. Gelles (2023), given as “When professors cheat: A Harvard star is accused and a culture is questioned. The New York Times (2023).” I thought the citation was odd and, having followed that case closely, searched for it. The NYT does have a correspondent with that name, who writes on climate, but I could find no NYT article by that title, nor anything on the topic by Gelles.
Right, Chris. This article is supposed to be read alongside another recommended read further down the list:
“This Reference Does Not Exist”: Are large language models accurate when generating citations?
For further readings, do check out the last item on the list:
“New term: ‘Bothorship.’ And ‘botshit.’”
Our favorite weekend reads are gradually being enriched by AI-written articles. The following is from the previous weekend:
“Research integrity in the era of artificial intelligence: Challenges and responses.” (AI-74%, mix-26%, human-0%).
Yes, the fact that LLMs come up with truthy-sounding, made-up references is a well-known problem. However, for Herman Aguinis and co-authors apparently to use LLMs to help write their “Research Transparency Index” article is both rich and a case of misdemeanor-level scientific misconduct. In the days before the widespread availability of LLMs, the analogous ethical violation would have been to lift references from someone else’s writing and insert them into one’s own article without bothering to read the original sources or even to check their accuracy.
But “Our favorite weekend reads are gradually being enriched by AI-written articles.” Enriched? Really? More like adulterated.