Weekend reads: Lawyer sues to retract Paxil paper; most retracted papers have ‘negligible scholarly impact’; the high costs of freely available research

Dear RW readers, can you spare $25?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up past 500. There are more than 60,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 300 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Upcoming Talks 

  • “Future Proof Your Research With Rigor,” featuring our Ivan Oransky (Sept. 8, Philadelphia)
  • “Doctors’ Lounge”: An evening “examining the quality control challenges that we all face in our quest to stay current as medical practitioners,” featuring our Ivan Oransky (Sept. 29, virtual)

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].



4 thoughts on “Weekend reads: Lawyer sues to retract Paxil paper; most retracted papers have ‘negligible scholarly impact’; the high costs of freely available research”

  1. I really dislike the assumption, in the JAMA AI-in-peer-review article, that AI is capable of “removing the rote tasks” from peer review. There are tasks it could possibly do that human peer reviewers generally don’t (it could theoretically check that every reference is a real paper, not retracted, in a legitimate journal), but… summarizing the paper? If the reviewer doesn’t *read* the paper, they cannot review it. Perhaps it’s internally inconsistent, but the AI “fixed” that in summarization. Perhaps the devil is in the details. I recently reviewed a paper where it was necessary to reason about the movements of the study animals in order to see what was wrong with the statistics, and that information might easily have dropped right out of a summary. I hope their safety studies lead them to the conclusion that this just should not be done.

    1. “AI as a judge” is most amenable to “AI”-generated content. What to make of it is up to everyone.

    2. Yeah, and the quality of “AI review” is really at slop level. With slop increasingly submitted as input and slop returned as output, why even bother with the journal business?
      With that question, I wish to send cheers to my beloved colleague who wanted details on the “experimental setup” and “metrics” in a manuscript that had none.
      Yet there are still alternatives, such as stricter identity verification and accountability measures (cf. sloppers could slop in their own slopper venues, and everyone would be happy).

  2. ‘Most retracted papers have ‘negligible scholarly impact’’
    If retracting most papers has ‘negligible scholarly impact,’ why do the research in the first place, if removing it from the journals has little or no impact?
    And why spend the time and money retracting papers of ‘negligible scholarly impact,’ if that research had no impact to begin with?
