Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance.
The week at Retraction Watch featured a former postdoc who faked nearly 60 experiments; an apology and retraction from a cancer researcher; and three retractions from UCLA. Here’s what was happening elsewhere:
- “A doctoral graduate of Tufts University’s veterinary school has filed a $1 million lawsuit against the college, claiming that she faced retaliation after she reported her department for animal abuse and fabricating research.”
- “Most academics have lots of rejection stories. Far fewer have rejection stories like Alison Gerber’s.”
- One scientist says no to peer review requests when she’s on summer holidays. Jennifer Rohn explains why that’s better in the long run.
- “Dreher’s paper currently sits in the 99th percentile of research outputs based on the volume of attention it has received online, as ranked by the Altmetric Attention Score.” Why a study of marijuana in pregnancy persists.
- The Lancet has made a “diversity pledge” “in particular to increasing the representation of women.” The journal is the only major medical journal where a woman has never been editor in chief.
- “Why do professors who are women publish less research than men?” And do “biased evaluation committees promote fewer women?”
- “The University of Missouri-Kansas City will pay $360,000 to a professor to settle lawsuits that prompted the investigations into and eventual ouster of two top professors in its pharmacy school.”
- “Facebook Said It Would Give Detailed Data To Academics. They’re Still Waiting.”
- Sophia University has revoked a master’s degree for plagiarism and forgery.
- “Anxious and pushed to a corner, professor Saderla wrote to the institute director on Tuesday saying he is being subjected to mental torture as the [plagiarism] allegations have not been cleared yet.”
- “A researcher at the University of Kansas was indicted Thursday on federal charges of hiding the fact he was working full-time for a Chinese university while doing research at KU funded by the U.S. government.”
- “Former colleagues of a professor who was found guilty of sex offences against male students believe that he could have been stopped much sooner.”
- “How a UCSB chemist stood up to L’Oréal in an IP-theft case:” A jury has awarded the start-up firm Olaplex $91 million for losses because the beauty giant allegedly stole their hair protection product.
- How often are retracted papers still cited?
- “Nothing in our job descriptions requires us to be the best ever with the most publications in the best journals with the most grant money, writes Michael Rocque, so we should stop comparing and ranking ourselves.”
- “The allure of the journal impact factor holds firm, despite its flaws.”
- Why journal indices matter, according to Danielle Padula.
- “I found photographs of my face, my mobile phone number, and home address on Facebook posts,” he says, “with messages like: ‘We will find you and kill you.’”
- “We show that by use of inappropriate and incorrect data collected through a faulty experimental design, poor parameterization of their theoretical model, and selectively picked estimates from literature on detection probability, the inferences of this paper are highly questionable.”
- “Do researchers trust each other’s work?” A new survey says “sometimes.”
- “However, obesity research sometimes is not conducted or reported to appropriate scientific standards.” A new paper “present[s] 10 errors that are commonly committed…and follow[s] with suggestions on how to avoid these errors.”
- A head of the US NAS, “and members of a panel it convened to advise on prescribing opioids, had recent [undisclosed] links to the drug industry.”
- Osaka University has revoked a PhD it awarded in public policy in 2009 because it was obtained dishonestly, according to the university.
- “Their accounts paint a picture of a lab that was exciting scientifically — but that also had a toxic work environment.”
- “5 features of a highly cited article: The difference between highly cited and lowly cited papers.”
- “We conclude that value pluralism is inherent to codes of conduct in research integrity.”
- “The committee’s answer was, in short, ‘No crisis, but no complacency.’ We saw no evidence of a crisis, largely because the evidence of nonreproducibility and nonreplicability across all science and engineering is incomplete and difficult to assess.”
- “The issue of how to report the statistics is one that we thought about deeply, and I am quite sure we reported them correctly.”
- “Attempts to reach the authors by telephone and fax were also unsuccessful.” Telegraphs were apparently unavailable.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
Many universities don’t care, or even know, when someone in their university has published such problematic papers.
If you don’t review papers in the summer, then you shouldn’t submit them then either. It’s a give-and-take honor system.
Given the speed with which reviews and editorial processing are often performed, perhaps a more reasonable suggestion would be not to submit them in the spring.
Re the Tufts prof, Elizabeth Byrnes, the Statcheck bot flagged a couple of her papers for possible statistics issues on PubPeer when that was run a few years ago:
https://pubpeer.com/search?q=Elizabeth+Byrnes
Those are some strange errors. They are all similar to this:
“F(1, 23) = 5.29, p = .032 (recalculated p-value: 0.03086)”. What’s the advantage of having a 0.032 p-value instead of 0.031?
All of the errors are in this vein: off by ~0.01, and never crossing an important threshold (e.g. a 0.06 p-value is never reported as a 0.04). On the one hand, there doesn’t seem to be a good reason for these differences (it can’t be a measurement error or something, and they’re too numerous to be typos). But on the other hand, there’s no apparent advantage to intentionally misreporting multiple p-values in this way. My guess is the statistical software used to prepare the papers had a small bug, or perhaps used slightly different settings than statcheck.
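For readers curious how a check like this works: here is a minimal sketch of recomputing a p-value from a reported F statistic and its degrees of freedom, the same kind of recalculation statcheck performs. This is not statcheck’s own code (statcheck is an R package); it assumes SciPy is available, and the helper name `recompute_f_pvalue` is just illustrative.

```python
# Minimal sketch (not statcheck itself): recompute the p-value implied by a
# reported F statistic and its degrees of freedom, then compare it to the
# p-value reported in the paper.
from scipy import stats

def recompute_f_pvalue(f_value, df1, df2):
    """Upper-tail probability of an F(df1, df2) distribution at f_value."""
    return stats.f.sf(f_value, df1, df2)

# The example flagged above: "F(1, 23) = 5.29, p = .032"
p = recompute_f_pvalue(5.29, 1, 23)
print(round(p, 5))  # ~0.0309, matching the recalculated value rather than the reported .032
```

A discrepancy this small doesn’t cross any conventional significance threshold, which is exactly why the pattern looks more like a software or settings quirk than deliberate misreporting.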
You may want to take a look at this.
Thanks. I was aware of the effort, but not the origins. Unfortunately I don’t have the expertise to critique the methods or results.