Weekend reads: Science publishing a ‘hot mess’; AI microscopy ‘indistinguishable’ from real; social media as a bellwether for retractions

Dear RW readers, can you spare $25?

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up past 500. There are more than 60,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 300 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):


Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].



4 thoughts on “Weekend reads: Science publishing a ‘hot mess’; AI microscopy ‘indistinguishable’ from real; social media as a bellwether for retractions”

    1. Thanks for the link!
      The article’s mention of “anonymous allegations of research misconduct against the institute’s CEO” … “which were made anonymously online” is most likely referring to PubPeer. A PubPeer search for the Matthew Kiernan named in the article returns 49 results: https://pubpeer.com/search?q=Matthew+Kiernan
      PS: The comments are not all anonymous (at least one is from the well-known Elisabeth M. Bik), and as far as I can see they all raise questions about the figures and the statistics used.

      1. “Who is making the allegation?” is a very reasonable question when the allegation refers to non-public information. But I just read ~20 of these, and none do. They are entirely quotations of statistics from the paper and queries about those statistics, or images from the paper and queries about those images. Anyone with access to the paper (which a journal or institutional investigation surely can get) can check for themselves. If the allegations are correct, it doesn’t matter if they were typed by an army of monkeys with typewriters!
        Also, there is an extremely clear pattern: p-values > 0.05 are called “significant” or “trending to significance” when convenient (up to 0.2 in some cases), while, when it is not convenient, even substantially lower p-values are called “no difference” (down to 0.06 in some cases). If the person judging the paper is not good with statistics, it should be possible to get someone to help: spotting this requires neither detailed analysis nor access to the underlying data.
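
        The kind of internal inconsistency the commenter describes can be checked mechanically: if any p-value a paper treats as significant is larger than a p-value it treats as showing no difference, the labeling contradicts itself regardless of which threshold was intended. A rough sketch (the function name and example numbers are invented for illustration, not taken from the paper in question):

```python
def inconsistent_labeling(claims):
    """claims: iterable of (p_value, treated_as_significant).

    The labeling is internally inconsistent if some p-value treated
    as significant exceeds some p-value treated as 'no difference' —
    no single cutoff could produce both labels.
    """
    sig = [p for p, is_sig in claims if is_sig]
    nonsig = [p for p, is_sig in claims if not is_sig]
    if not sig or not nonsig:
        return False  # nothing to compare
    return max(sig) > min(nonsig)

# Invented example mirroring the commenter's numbers: p = 0.2 called
# "trending to significance" while p = 0.06 is called "no difference".
claims = [(0.2, True), (0.03, True), (0.06, False), (0.5, False)]
print(inconsistent_labeling(claims))  # True: 0.2 labeled significant, 0.06 not
```

        This check needs only the p-values and wording as reported in the paper itself, which matches the commenter’s point that no underlying data is required.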
