Would you consider a donation to support Weekend Reads, and our daily work?
The week at Retraction Watch featured:
Our list of retracted or withdrawn COVID-19 papers is up past 400. There are more than 48,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 250 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our new list of papers with evidence they were written by ChatGPT?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “‘If Only She Could Have Been Stronger’: Miami Trial Fraud Leads to Prison, Personal Loss.”
- “Journal editors are resigning en masse: what do these group exits achieve?”
- A preprint estimates “at least 60,000 papers…were LLM-assisted.”
- “Legislator pushing anti-vax bill admits cited source was retracted from scientific journal.” Here’s that source.
- “The bottom line is that journals are not equipped with their volunteer editors and reviewers, and non-subject matter expert staff to police the world’s scientific enterprise.” A response from Anna Abalkina.
- “Publishers must own their past mistakes, accept that there will be mistakes in the future, and see retractions as best practice when it comes to dealing with published fraudulent articles.”
- “The Feds Want More Oversight of Scientific Research. Universities Are Fighting Back.”
- What should the annual budget of the U.S. Office of Research Integrity be?
- “Scientists oppose retractions for racism, sexism and fraud.”
- “It is time to stop the weaponization of the footnote.”
- “Fourth Black Female Harvard Scholar Accused of Plagiarism Amid Assault on DEI Initiatives.”
- “Academic publishing requires linguistically inclusive policies.”
- “When it is and isn’t OK to recycle text in scientific papers.”
- “[T]here is low-quality work everywhere you look, the peer-review system has long outlived its utility, and academic publishing is a dumpster fire.”
- “Have AI-Generated Texts from LLM Infiltrated the Realm of Scientific Writing?”
- “Tweeting your research paper boosts engagement but not citations.”
- “The Case for a Peer Review Market.”
- Where do mass retractions leave Frontiers’ image?
- “Nobel Prize winner’s paper to be corrected, according to co-author.”
- A decade-plus of a perplexing “withdrawal” policy at Elsevier, in a thread.
- “How rightwing groups used junk science to get an abortion case before the US supreme court.” Including now-retracted papers.
- Open retraction data: A conversation about Crossref’s acquisition of the Retraction Watch Database.
- “[A]ll the stupid things that researchers say in order to deflect legitimate criticism.”
- Fun with authorship.
- Retractions in pharmacology.
- “Sweden Says Windpipe Surgeon Can Serve Prison Term In Spain.”
- “UW-Madison’s leading DEI scholar accused of decades of research misconduct.”
- “The Top Italian Scientists and their Journal.” Featuring some familiar names from Retraction Watch.
- “In published papers, authorship attribution and disclosure of funding support have emerged, oddly and unexpectedly, as elements of government enforcement in the ‘foreign influence’ area.”
- “Analysis of Indian Retracted Publications: A Study Based on Scopus Data.”
- “Can ChatGPT predict article retraction based on Twitter mentions?”
- “Virologist who was fired from research laboratory in Canada over security threat resurfaces in China.”
- “Two studies fail to replicate ‘holy grail’ DIANA fMRI method for detecting neural activity.”
Like Retraction Watch? You can make a tax-deductible contribution to support our work, subscribe to our free daily digest or paid weekly update, follow us on Twitter, like us on Facebook, or add us to your RSS reader. If you find a retraction that’s not in The Retraction Watch Database, you can let us know here. For comments or feedback, email us at [email protected].
What is the worst nightmare of all editors-in-chief? Retractions and Letters to the Editor criticizing their published papers. They would do anything to avoid these. Journals are the only ones really positioned to police misconduct, but they usually prefer to sweep it under the rug as much as possible.
Re: using ChatGPT to *predict* when papers will be retracted. I’m not sure using unique hashtags like “pruittdata” or “creatorgate” to make a prediction is a technique that can be generalized. Once a hashtag has been created and used in one or more tweets, the paper(s) in question has/have almost certainly already been identified as problematic. The authors do not discuss the use of hashtags at all, yet they include hashtags like those above in their list of keywords. I’d point out that these are not “words.”
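To make the leakage concern concrete, here is a minimal sketch in Python. Everything in it — the tweets, dates, and hashtag names — is invented for illustration and is not drawn from the paper; the point is simply that a case-specific hashtag coined only after concerns surfaced cannot serve as a genuine predictive feature.

```python
from datetime import date

# Hypothetical tweet records about one paper: (date posted, hashtags used).
# None of this is real data; it only illustrates the timing problem.
tweets = [
    (date(2020, 1, 10), {"ecology", "behavior"}),
    (date(2020, 5, 2),  {"pruittdata"}),                 # case-specific tag, coined after concerns surfaced
    (date(2020, 5, 3),  {"pruittdata", "retraction"}),
]

# Hypothetical date the paper was first publicly flagged as problematic.
first_flagged = date(2020, 5, 1)

# Features usable for genuine *prediction* must already exist before that date.
pre_flag_features, post_flag_features = set(), set()
for posted, tags in tweets:
    if posted < first_flagged:
        pre_flag_features |= tags
    else:
        post_flag_features |= tags

leaky = post_flag_features - pre_flag_features
print("Usable predictive features:", pre_flag_features)
print("Leaky (post-hoc) features:", leaky)  # e.g. 'pruittdata', 'retraction'
```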