Would you consider a donation to support Weekend Reads, and our daily work? Thanks in advance.
The week at Retraction Watch featured:
- Misconduct, failure to supervise earn researchers years-long funding bans
- Is a “Wall of Shame” a good idea for journals?
- Researchers in China send a hospital “declaration” clearing them of fraud. A journal doesn’t buy it.
- Hundreds of dead rats, sloppy file names: The anatomy of a retraction
- Triple sunrise, triple sunset: Science paper retracted when it turns out a planet is a star
Our list of retracted or withdrawn COVID-19 papers is up to 219. There are more than 33,000 retractions in our database — which powers retraction alerts in EndNote, LibKey, Papers, and Zotero. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “Our critics are right: we should have seen numerous red flags, including but not limited to the inappropriateness of a White theologian writing about the experience of Black women…” A book is pulled. More here.
- Majority of Black Americans “concerned about the potential for misconduct.”
- “Retraction Stigma and its Communication via Retraction Notices.”
- “‘The more important findings are sustained’: A diachronic perspective on the genre of the retraction notice.”
- “The big idea: should we get rid of the scientific paper?”
- “Open access is closed to middle-income countries.” An expansion of an earlier Nature comment.
- “‘Dysfunctional.’ NSF graduate fellowship review process draws criticism.”
- “Treating depression with psychedelics: red flags and FAQ.”
- “Why a judge might overturn a guilty verdict against a U.S. scientist for hiding China ties.”
- “Reporting preprints in the media during the COVID-19 pandemic.”
- “Predatory publishing 2.0: Why it is still a thing and what we can do about it.”
- “What do participants think of our research practices? An examination of behavioural psychology participants’ preferences.”
- “The cello and the nightingale: 1924 duet was faked, BBC admits.”
- “We propose to recommend a publication venue to maximize the influence of a paper.”
- “Peer reviewers equally critique theory, method, and writing, with limited effect on the final content of accepted manuscripts.”
- JAMA names a new editor in chief.
- A Swedish board clears a study of misconduct.
- “Let’s pay referees to flag mistakes.”
- “Pakistani researchers need more help to spot cloned journals.”
- A call to retract a study of “correcting” gender behavior.
- Image plagiarism: “a detection method which copes with both text and structure change.”
- “Reliability and validation of an attitude scale regarding responsible conduct in research.”
- “Why autism therapies have an evidence problem.”
- “Dark Transparency: Hyper-Ethics at Trump’s EPA.”
- Nigeria’s “Senate Passes Bill To Jail Social Media Plagiarism Offenders.”
Like Retraction Watch? You can make a one-time tax-deductible contribution by PayPal or by Square, or a monthly tax-deductible donation by PayPal to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
Fried’s article on experiments treating depression with psychedelics raises a really good point: how _could_ you design such a study to keep participants blind (as far as possible) or have a meaningful control group? One possibility that occurs to me is a broad-spectrum study in which different groups receive LSD, psilocybin, ketamine, THC, an SSRI, placebo, etc., respectively — that could at least yield comparative results, and most participants would have a harder time telling which specific group they’re in. But as difficult as it probably is to set things up for just _one_ scheduled drug…
I agree, but I can see how it would probably get harder and harder to fund, process, and find volunteers for, the more chemicals you are trying. That’s not to say it’s a bad idea; perhaps we need fewer, larger studies.
And speaking from (limited) experience: I was once part of a study to measure the effect of methamphetamine on short-term memory, and oh boy, I certainly knew which group I was in.
The Black theology retraction is worse than the headline makes it out to be — when you look into the details, it’s clear that, in humanities terms, something like a plagiarism of ideas is going on, along with systemic issues in the funding and peer review. Very interesting and enlightening story.
“‘Dysfunctional.’ NSF graduate fellowship review process draws criticism.”
13,000 NSF GRFP applications reviewed by approximately 1,300 reviewers…
Almost 2200 awards…
And a Science op-ed suggesting the system is dysfunctional, based on comments from 4-5 individuals and a “chorus of voices” on Twitter. NSF programs, including the GRFP, are evaluated every 4 years by an external Committee of Visitors, and the committee’s composition and findings are public information. You would think a journalist would put in a bit of additional effort to learn – and report – something more substantial than just lobbing an article like this out there.