
Happy 2026! We’re excited to bring you the first Weekend Reads of the new year.
The week at Retraction Watch featured:
- The BMJ retracts clinical trial for ‘severe’ discrepancies in randomization
- Cheers to 2025: In which Retraction Watch turned 15, and The Center for Scientific Integrity really became a center
- Data lost in a flood? The excuse checks out.
In case you missed the news, the Hijacked Journal Checker now has more than 400 entries. The Retraction Watch Database has over 63,000 retractions. Our list of COVID-19 retractions now tops 460, and our mass resignations list has 47 entries. We keep tabs on all this and more. If you value this work, please consider showing your support with a tax-deductible donation. Every dollar counts.
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “Is ‘open science’ delivering benefits? Major study finds proof is sparse.”
- “Korea University Investigates Research Misconduct” in papers by politician’s daughter amid allegations of preferential treatment.
- “As researchers outsource literature reviews that once took months to AI tools or leave the task of coding entirely to autonomous systems, they face the risk of being deprived of the tacit knowledge inherent in the scientific process.”
- “Pressure to publish may encourage article retractions, experts say.”
- “All these indexing debates share a common thread: the habit of reducing a complex ecosystem of science to a single indicator.”
- “External oversight could push journals and publishers to work harder to reduce integrity issues that are harming the scientific literature.”
- “Prominent environmental health journal disappears, but it’s in transition.”
- “A bibliography of genAI-fueled research fraud from 2025,” from a research librarian.
- “Science-wide mapping and ranking of institutions based on affiliated authors.”
- Professor alleges college fired him “for complaining about entitlement, plagiarism.”
- “Only a few archaeologists” have contributed to the debate on how to fix broken peer review systems, “but more should, since different types of evaluation invariably suit different research communities.”
- “One might imagine that a string of scandals like this would trigger some institutional soul-searching. Yet each episode has come and gone with barely a flicker of concern.”
- A Lund University professor blames a copy/paste error for fake references in a tenure assessment.
- “Journal ratings changes: Implications for author diversity and research characteristics.”
- “Sex bias in peer review and citation practices: Implications for research evaluation.”
- Communicable podcast: “Peer review is broken,” featuring Melinda Baldwin of the University of Maryland and Serge Horbach of Radboud University.
- “Fired Stanford researcher gets probation for altering data with insults like ‘doctor too stupid.’”
- “Just as economics is about how incentives work in the marketplace for goods and services, metascience is about how incentives work in the marketplace for ideas and truth-seeking. And these incentives play out at the same micro- and macro-levels.”
- “Identifying National, Institutional and Disciplinary Sites of Probable Predatory Publishing.”
- “UK publishing deals with ‘big five’ hailed as ‘key milestone.’”
- “Peer-reviewed by human experts: AI failed in key steps to generate a scoping review on the neural mechanisms of cross-education.”
- “The effect of using ORCID iD on improving the visibility and retrieval of Arab University publications.”
- “Opening Pandora’s box: Developing reviewer rhetorical sensitivity through retracted articles.”
- “Evaluating the Use of Large Language Models as Synthetic Social Agents in Social Science Research.”
- And this week in “oopsie”: “The published paper incorrectly listed ‘Nelson Mandela’ as a co-author, while ‘Nelson Mandela African Institution of Science and Technology’ is an affiliate institution.”
Upcoming Talks
- “Maintaining Integrity in Peer-Reviewed Publications,” Jefferson Anesthesia Conference 2026, featuring our Adam Marcus (February 2, Big Sky, Montana)
- “Scientific Integrity Challenged by New Editorial Practices,” featuring our Ivan Oransky (February 12, virtual)
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
Many of the linked items are closely related. The depressing catalog of genAI-fueled fraud feeds directly into the peer review crisis. Almost daily, I now receive requests to review blatant slop. At the same time, unjustified desk rejections of legitimate papers have increased, along with publication delays now measured in years rather than months. Review quality, too, is a distant memory, and given what I am asked to review, I don’t really blame my peers. Finally, I have a hard time seeing how the big five’s APC journals differ from “traditional” predatory publishers.