Dear RW readers, can you spare $25?
The week at Retraction Watch featured:
Our list of retracted or withdrawn COVID-19 papers is up past 450. There are more than 50,000 retractions in The Retraction Watch Database — which is now part of Crossref. The Retraction Watch Hijacked Journal Checker now contains more than 300 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers? What about The Retraction Watch Mass Resignations List — or our list of nearly 100 papers with evidence they were written by ChatGPT?
- “Researchers have questions about how so many authors have racked up a large number of citations so quickly, although some of those authors are honest overachievers.”
- Journal “refuses to publish paper” that “might upset victims’ parents.”
- “Recent encounters with atom-thin salami slicing. . . the practice of taking a set of research findings and splitting them up into as many publishable papers as possible.”
- What happened when an economist asked for the data underlying a study.
- “Research found that 74 per cent of academics had experienced online harms as a result of sharing research publicly.”
- “An analysis of availability and implications of unlabeled retracted articles on Sci-Hub.”
- “IEEE Continues to Strengthen Its Research Integrity Process,” say executives at the publisher.
- “Five things you should do as a journal editor to support social justice.”
- “Using mixed methods research to study research integrity: Current status, issues, and guidelines.”
- “Scaffolding decision spaces in decision support systems: Using plagiarism screening software in editorial offices.”
- “A proposed framework to address metric inflation in research publications.”
- “Reducing the spread of retracted pain research.”
- “Meeting abstracts inflate the JCR impact factor,” researchers say.
- Researchers explore “the causes and consequences of data management errors.”
- Researchers introduce an “analytical framework adaptable across various fields and disciplines to evaluate potential risks from fraudulence.”
- “Apply the legal ‘true malice’ principle to protect” sleuths, says researcher.
- “Scientific ‘fraud hunters’ by profession.”
- Researchers found no evidence of “gender bias during the editorial decision-making process” for peer-reviewed papers.
- “AI won’t remove the need for human academic editing any time soon.”
- “NIH launches initiative to double check biomedical studies.”
- Science paper retracted by authors: All but one figure lacked “supporting data.”
- “Health experts can cause health scares, and not admit it.” A link to our black plastic coverage.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
“Research found that 74 per cent of academics had experienced online harms as a result of sharing research publicly.”
That 74% figure seemed improbably large, so I looked up the paper. It showed me there is an immense gulf between “academic” work in the humanities and in STEM. To reach this conclusion, Hannah Yellin and Laura Chaney surveyed 85 academics across UK institutions and disciplines and interviewed 13 in depth. There is no mention of what those disciplines were, of how survey targets were selected beyond cross-disciplinary mailing lists and self-selection via social media, or of what the survey response rate was. The authors’ specialty is words; they don’t do numbers. Not a single data table. To me their methods descriptions are incomprehensible word salad, whereas I suspect they would simply consider me uneducable. What a gulf.
https://radar.brookes.ac.uk/radar/items/a6cee93c-046c-4574-8e09-cd27c27edc1e/1/
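A small sample also makes a headline percentage fuzzy. As a hedged illustration (not an analysis from the paper itself), here is the rough 95% confidence interval around 74% if the 85 survey respondents were a simple random sample, using the normal approximation to the binomial:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% CI for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return (p_hat - z * se, p_hat + z * se)

# 74% of 85 respondents (hypothetical simple-random-sample assumption):
low, high = proportion_ci(0.74, 85)
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 64.7% to 83.3%
```

Even before worrying about self-selection, the plausible range spans nearly twenty percentage points.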
The argument in the Lucy Letby paper that convictions like that shouldn’t be based on statistical evidence is incorrect. Convictions on the basis of “beyond reasonable doubt” are in fact themselves a statistical argument: that the probability the defendant is not guilty, while nonzero, is very small, even if that probability is never quantified.
In cases where DNA matches and fingerprints are used, the probability IS quantified. For example, a DNA match is often expressed as an “x million to 1” chance of a false positive.
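That “x million to 1” figure is easy to misread, though: it is the probability of a match given innocence, not the probability of innocence given a match (the prosecutor’s fallacy). A hedged sketch of the Bayesian arithmetic, with purely hypothetical numbers chosen only to illustrate:

```python
def posterior_guilt(match_prob_if_innocent, pool_size):
    """P(this person is the true source | DNA match), assuming exactly one
    true source in a candidate pool and a uniform prior over the pool."""
    # Expected number of innocent people in the pool who would also match:
    expected_innocent_matches = (pool_size - 1) * match_prob_if_innocent
    # The true source always matches, so by Bayes' rule:
    return 1 / (1 + expected_innocent_matches)

# A "1 in 2 million" match probability against a pool of 10 million
# potential sources (both numbers hypothetical):
p = posterior_guilt(1 / 2_000_000, 10_000_000)
print(f"{p:.2%}")  # about 16.67% — far from certainty despite the tiny match probability
```

The point is not that DNA evidence is weak, but that the quantified match probability must be combined with the size of the candidate pool and any other evidence before it says anything about guilt.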