Would you consider a donation to support Weekend Reads, and our daily work?
The week at Retraction Watch featured:
- Author of paper on COVID-19 and jade amulets sues employer for ‘mental anguish,’ discrimination
- Colombia drug regulator halts clinical research at US-funded facility
- Wiley journal editors resign en masse, fired chief editor speaks
- Outcry over ‘terminal anorexia’ response letter prompts retraction
Our list of retracted or withdrawn COVID-19 papers is up to more than 350. There are now 42,000 retractions in our database — which powers retraction alerts in Edifix, EndNote, LibKey, Papers, and Zotero. The Retraction Watch Hijacked Journal Checker now contains 200 titles. And have you seen our leaderboard of authors with the most retractions lately — or our list of top 10 most highly cited retracted papers?
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
- “There’s far more scientific fraud than anyone wants to admit.”
- “How I tried to get a paper that I own retracted.”
- “So, is this fraud or what? Shoveling scientific bullshit into different buckets.”
- “Ethical decision-making and role conflict in managing a scientific laboratory.”
- “Concerns raised over autism prediction paper.”
- “Tessier-Lavigne debacle underlines case for more transparent authorship.”
- “Beware of misinterpreted chemical shifts and misquoted references.”
- “Plagiarism by academics is serious. Any excuses had better be good.”
- “Artificial-intelligence search engines wrangle academic literature.”
- “Publishers seek protection from AI mining of academic research.”
- “Medical advances typically begin with a study. Now, universities are struggling to afford them.”
- “UK needs ‘Institute for Scientific Replicability’, says thinktank.”
- “Authorship Disputes in Scholarly Biomedical Publications and Trust…”
- “Study proposing microbiome-based cancer diagnostic comes under fire.” And “‘Major errors’ alleged in landmark study that used microbes to identify cancers.”
- “Aga Khan University don urges researchers to stick to ethics.”
- “Raising concerns on questionable ethics approvals – a case study of 456 trials” from IHU-Marseille.
- “MIT raised millions for startup it shut down after accusations of research misconduct.”
- “ENRIO publishes the Handbook on Whistleblower Protection in Research.”
- “The importance of a good review(er) for educational technology research.”
- How to find and manage “attempted fraud during an online randomised trial.”
- “Winning the war against research misconduct.”
- “How people decide who is correct when groups of scientists disagree.”
- “From Bogus Journals to Predatory Universities: The Evolution of the Russian Academic Sphere Within the Predatory Settings of the State.”
- “Fraud holds back research.”
- “Is it defamation to point out scientific research fraud?”
- “The result is a system that produces sluggish responses to research integrity issues that neither engender trust among the public nor among those skeptical of the ability of these entities to promote research integrity.”
- “AI can crack double blind peer review – should we still use it?”
- “Quality metrics in academia: time to revisit the rules?”
- “AI poses risks to research integrity, universities say.”
- “Push for science watchdog as inquiry finds ‘disincentive’ for self-regulation.”
- A brief history of a journal’s Wall of Shame.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
I am convinced that academia is designed to protect people who commit misconduct. It’s not about the competence or incompetence of any particular people or office, but a high-level failure to design systems that prevent and respond to misconduct.
Take, for example, the two-step institutional processes designed to 1) establish the reliability of the data and then 2) attribute fault and establish misconduct. These steps are really hard to separate, and it is far more efficient to do both at once. But say an institution has completed a decent internal investigation: the journal will then repeat the process, because journals are loath to rely on institutional findings.
Add in problems like under-resourced investigation teams and litigious researchers, and of course you can never establish misconduct. To prove misconduct you actually need to establish it at least three times, not including appeals. It’s easier to be convicted of murder than of research misconduct.
This is generally the case in any well-established field. At the end of the day, protecting those who commit misconduct shields the group from public distrust, especially in academia, where almost all funding comes from people with frankly very little scientific knowledge or understanding. Even outside the scope of funding and public opinion, tenure and connections are just as important in academia as the quality and legitimacy of one’s work, if not more so. Knowing people gets your foot in the door, and once you’re there, why rock the boat and risk losing your place? That, combined with the excess bureaucracy in misconduct investigations you mention, makes the field prime territory for those who commit, or are willing to commit, misconduct of any kind.