Weekend reads: UK shadow chancellor accused of plagiarism; eLife editor fired; Elsevier editor resigns because publisher ignored likely paper mill activity

Would you consider a donation to support Weekend Reads, and our daily work?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up to well over 350. There are more than 43,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains well over 200 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? Or The Retraction Watch Mass Resignations List?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in The Retraction Watch Database, you can let us know here. For comments or feedback, email us at [email protected].

3 thoughts on “Weekend reads: UK shadow chancellor accused of plagiarism; eLife editor fired; Elsevier editor resigns because publisher ignored likely paper mill activity”

    1. As noted, perhaps Artemis will give more information, and may lower our estimates of the prevalence of lycanthropy (I’d say something less than 1/8,000,000,000, at a guess).

  1. “Amid bans and restrictions on their use, artificial intelligence tools are creating interest among those who see a solution to systemic peer-review woes.”

    I took a look at this paper and I’m terrified at how stupid it is. There is not even the most remote attempt to state the domain of potential corrections the AI could make, and because ChatGPT operates stochastically, this is relevant to every aspect of their supposed proof. What this paper says is “if we ask ChatGPT to roll 1d6, it comes up with the same rolls as a human does at nearly the same rate as any two human dice rolls are similar.”

    But what are the faces of the die? How many options for corrective feedback are there? On a paper using Methodology X, where Methodology X can go wrong only in one of 3 different ways, and did get used wrongly in way X’, an AI with no reason to weight the probability of any one of those three ways for critique will pick “you did X’ ” 33% of the time. No prizes should be awarded for this.
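    The commenter’s base-rate point can be illustrated with a short simulation. This is a hedged sketch with made-up numbers, not anything from the paper itself: if two reviewers each pick one critique uniformly at random from a small pool, they will "agree" at a rate of roughly one over the pool size, with no insight required.

```python
import random

def chance_agreement(num_critiques: int, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate how often two reviewers who each pick a critique
    uniformly at random from the same pool pick the same one."""
    rng = random.Random(seed)
    matches = sum(
        rng.randrange(num_critiques) == rng.randrange(num_critiques)
        for _ in range(trials)
    )
    return matches / trials

# With only 3 plausible critiques (the commenter's hypothetical),
# two blind guessers coincide about a third of the time.
print(chance_agreement(3))
```

    The point is that an observed agreement rate is meaningless without this baseline: with a small "die," high agreement is exactly what chance predicts.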

    And then it gets worse! What they are calling similarity isn’t. Consider these two statements, provided in the paper as an example of successful AI identification of a similar point as a human reviewer under the category “ethical aspects.”

    Human reviewer: “My main concern: I did not find IRB approval information on the human experiment. If there is, it should be mentioned, if not the authors should explain why it is not necessary in this case (and should be validated with the conference chairs). Also, the details of the experiment and instructions to the demonstrators should accompany the paper.”

    AI reviewer: “The paper does not discuss the ethical implications of their research. While the authors’ intention is to improve the security of federated learning systems, their research could potentially be misused by malicious actors…”

    This is hogwash and a danger to research integrity in the most fundamental way. Real “the map IS the terrain” hours.
