Would you consider a donation to support Weekend Reads, and our daily work?
The week at Retraction Watch featured:
- PLOS and scientist appear close to settling lawsuit over expression of concern
- Cancer researcher with nine retractions says he’ll take publisher to court
- Publisher looking into COVID vaccine paper with ‘serious flaws’
- History repeats itself: Diabetes researcher gets four expressions of concern in journal he once sued
- Lancet retracts two more papers by convicted surgeon Paolo Macchiarini
Our list of retracted or withdrawn COVID-19 papers is up to well over 350. There are more than 43,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains well over 200 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? Or The Retraction Watch Mass Resignations List?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “Rachel Reeves admits mistakes after being accused of plagiarism in new book.”
- “Prominent [eLife] journal editor fired for endorsing satirical article about Israel-Hamas conflict.”
- An editor resigns because of Elsevier’s failure to act on likely paper mill activity.
- “Sci-Hub presents a paradox for open access publishing.”
- “Peer colleagues slam history professor’s book for ‘systemically’ misrepresenting sources.”
- “Create PhD databases to flush out fraudsters, universities told.”
- “Lyrical liar: inside Sweden’s Macchiarini-inspired opera.”
- “Are research contributions assigned differently under the two contributorship classification systems in PLoS ONE?”
- “On hydroxychloroquine, everything has been said but much remains to be done.” A call to retract.
- “Causes for Retraction in the Biomedical Literature: A Systematic Review of Studies of Retraction Notices.”
- “Can moral case deliberation in research groups help to navigate research integrity dilemmas? A pilot study.”
- “Retractions in primary care journals (2000–2022).”
- “Microsoft fixes the Excel feature that was wrecking scientific data.” An earlier Retraction Watch guest post: Genomics has a spreadsheet problem.
- “Health journal coverage of climate change and health: a bibliometric study.”
- “PubPeer and Self-Correction of Science: Male-Led Publications More Prone to Retraction.” Have you seen the Retraction Watch Leaderboard recently?
- “[H]ow I came to discover the anomalies in Francesca Gino’s work, and what I think we can learn from this unfortunate story.”
- “Streetlight Effect in Post-Publication Peer Review: Are Open Access Publications More Scrutinized?”
- “Retraction: Laudable Self-correction or a Stigma? Negotiating the Minefield.”
- “Article retractions rates in selected MeSH term categories in PubMed published between 2010 – 2020.”
- “‘Highly Questionable:’ Investigation Casts Doubt on Cassava Alzheimer’s Drug Data.”
- “Amid bans and restrictions on their use, artificial intelligence tools are creating interest among those who see a solution to systemic peer-review woes.”
- “Scientists honoured for facing down lawsuits to reveal findings.” Nancy Olivieri and Chelsea Polis win Maddox Prizes for standing up for science.
- “Are We Having a Moral Panic Over Misinformation?”
- “Through the Secret Gate: A Study of Member-Contributed Submissions in PNAS.”
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in The Retraction Watch Database, you can let us know here. For comments or feedback, email us at [email protected].
Of some potential interest:
https://www.scientificamerican.com/article/what-happens-to-a-werewolf-on-the-moon/
Not a lot of data.
As noted, perhaps Artemis will give more information, and may lower our estimates of the prevalence of lycanthropy (I’d say something less than 1/8,000,000,000, at a guess).
“Amid bans and restrictions on their use, artificial intelligence tools are creating interest among those who see a solution to systemic peer-review woes.”
I took a look at this paper and I’m terrified at how stupid it is. There is not even the most remote attempt to state the domain of potential corrections the AI could make, and because ChatGPT operates stochastically, that omission undermines every aspect of their supposed proof. What this paper says, in effect, is “if we ask ChatGPT to roll 1d6, it comes up with the same rolls as a human does at nearly the same rate as any two human dice rolls are similar.”
But what are the faces of the die? How many options for corrective feedback are there? On a paper using Methodology X, where Methodology X can go wrong in only three different ways, and was in fact used wrongly in way X’, an AI with no reason to weight any one of those three critiques over the others will pick “you did X’ ” 33% of the time. No prizes should be awarded for this.
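To make that baseline concrete, here is a minimal sketch (not from the paper; the function name, the choice of k values, and the uniform-choice assumption are all illustrative) of how often two reviewers who each pick one of k possible critiques at random will “agree” purely by chance:

```python
import random


def chance_agreement(k: int, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate how often two reviewers who each pick one of k possible
    critiques uniformly at random happen to pick the same one."""
    rng = random.Random(seed)
    matches = sum(
        rng.randrange(k) == rng.randrange(k)  # AI's pick vs. human's pick
        for _ in range(trials)
    )
    return matches / trials


if __name__ == "__main__":
    for k in (3, 6, 10):
        print(f"k={k:>2} possible critiques -> "
              f"simulated agreement {chance_agreement(k):.3f} "
              f"(expected {1 / k:.3f})")
```

With three possible critiques, roughly one match in three is exactly what you would expect from two reviewers picking blindly; agreement at that rate tells you nothing about the AI’s judgment.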
And then it gets worse: what they are calling similarity isn’t similarity. Consider these two statements, offered in the paper as an example of the AI successfully identifying the same point as a human reviewer under the category “ethical aspects.”
Human reviewer: “My main concern: I did not find IRB approval information on the human experiment. If there is, it should be mentioned, if not the authors should explain why it is not necessary in this case (and should be validated with the conference chairs). Also, the details of the experiment and instructions to the demonstrators should accompany the paper.”
AI reviewer: “The paper does not discuss the ethical implications of their research. While the authors’ intention is to improve the security of federated learning systems, their research could potentially be misused by malicious actors…”
This is hogwash and a danger to research integrity in the most fundamental way. Real “the map IS the territory” hours.