Weekend reads: False data in Columbia rankings?; data service accused of intimidating researchers; preprint server removes ‘inflammatory’ papers

Would you consider a donation to support Weekend Reads, and our daily work? Thanks in advance.

The week at Retraction Watch featured:

Our list of retracted or withdrawn COVID-19 papers is up to 214. There are nearly 33,000 retractions in our database — which now powers retraction alerts in EndNote, LibKey, Papers, and Zotero. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?

Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):

Like Retraction Watch? You can make a one-time tax-deductible contribution by PayPal or by Square, or a monthly tax-deductible donation by PayPal to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].

3 thoughts on “Weekend reads: False data in Columbia rankings?; data service accused of intimidating researchers; preprint server removes ‘inflammatory’ papers”

  1. “The former vice chancellor for research at UNC-Chapel Hill’s School of Medicine explains why he stepped down following plagiarism.” Not sure I fully buy his explanation. Copying and pasting from a website usually leads to a font that’s different from the neighboring text, and if one is truly concerned about unintentional plagiarism, it would make sense to keep it that way until the copied text has been rephrased. To me, the misconduct committed in this case seems minor compared to many other cases involving data manipulation. The sloppiness leading to it is the bigger concern.

  2. The teaser text for “One in three PLOS ONE papers contained at least 1 sentence that was a direct copy from another paper” seems a little out of place.

    The paper is about boilerplate text in statistical methods sections. Even if the whole thing were copied, it wouldn’t necessarily be that bad according to the text recycling project. The authors also say that their paper is about possibly bad statistics rather than plagiarism per se:

    “Our approach for identifying boilerplate text was not intended as a form of plagarism detection, but rather as evidence of standardised descriptions being used. For simple study designs, a boilerplate description might be adequate to promote consistency in reporting and meet reporting requirements. For example, ANZCTR sections commonly reported sample size justifications and planned analyses using intention-to-treat principles.”
