A March paper by researchers at Imperial College London that, in the words of the Washington Post, “helped upend U.S. and U.K. coronavirus strategies,” cited a preprint that had been withdrawn.
Retraction Watch became aware of the issue after being contacted by a PubPeer commenter who had noted the withdrawal earlier this month. Following questions from Retraction Watch this weekend, the authors said they plan to submit a correction.
Elsevier has weighed in on the handling of a controversial paper about the utility of hydroxychloroquine to treat Covid-19 infection, defending the rigor of the peer review process for the article in the face of concerns that the authors included the top editor of the journal that published the work.
On April 3, as we reported, the International Society of Antimicrobial Chemotherapy issued an expression of concern (without quite calling it that) about the paper, which had appeared in March in the International Journal of Antimicrobial Agents, a journal the ISAC publishes along with Elsevier. According to the society, the article, by the controversial French scientist Didier Raoult, of the University of Marseille, and colleagues:
The paper that appears to have triggered the Trump administration’s obsession with hydroxychloroquine as a treatment for infection with the novel coronavirus has received a statement of concern from the society that publishes the journal in which the work appeared.
Jasti Rao, who once earned $700,000 a year at the University of Illinois College of Medicine at Peoria and was named the first “Peorian of the Year” before a misconduct investigation put an end to his time there, has now lost eight papers.
Rao’s case is among the more colorful we’ve covered. A highly regarded cancer specialist, Rao was caught up in a morass of misdeeds, including not only plagiarism and manipulation of data but gambling and behavior tantamount to extortion of his employees. As we reported in 2018:
Two years ago, Julia Strand, an assistant professor of psychology at Carleton College, published a paper in Psychonomic Bulletin & Review about how people strain to listen in crowded spaces (think: when they’re doing the opposite of social distancing).
The article, titled “Talking points: A modulating circle reduces listening effort without improving speech recognition,” was a young scientist’s fantasy — splashy, fascinating findings in a well-known journal — and, according to Strand, it gave her fledgling career a jolt.
The data were “gorgeous,” she said, initially replicable and well-received:
On the surface, it would seem like a good thing when science undergirds policy decisions. But what if that science is deeply flawed? Craig Pittman, an award-winning journalist at the Tampa Bay Times and the author of four books, writes that his new book, Cat Tale: The Wild, Weird Battle to Save the Florida Panther, is “a tale of raw courage, of scientific skulduggery and political shenanigans, of big-money interests versus what’s right for everyone.” In this excerpt, Pittman explains what happened — and what didn’t — after a group of scientists known as the Science Review Team (SRT) found serious problems in research used to support regulatory policies involving panthers.
In 2003, the SRT released a report containing its verdict. As you might guess, it ripped apart Maehr’s work, piece by piece, and yes, they called him out by name. They didn’t label him a fraud, but they made it clear that Dr. Panther had done some pretty shady things.
Because they were scientists, they didn’t scream out their findings in impassioned prose. They were cool and calm—but there was no mistaking what they were saying.
Why is it so difficult to correct the scientific record in sports science? In the first installment in this series of guest posts, Matthew Tenan, a data scientist with a PhD in neuroscience, began the story of how he and some colleagues came to scrutinize a paper. In the second, he explained what happened next. In today’s final installment, he reflects on the editors’ response and what he thinks it means for his field.
In refusing to retract the Dankel and Loenneke manuscript we showed to be mathematically flawed, the editors referred to “feedback from someone with greater expertise” and included the following:
Why is it so difficult to correct the scientific record in sports science? In the first installment in this series of guest posts, Matthew Tenan, a data scientist with a PhD in neuroscience, began the story of how he and some colleagues came to scrutinize a paper. In this post, he explains what happened next.
Two years ago, following heated debate, a sports science journal banned a statistical method from its pages, and a different journal — which had earlier published a defense of that method — decided to boost its statistical chops. But as Matthew Tenan, a data scientist with a PhD in neuroscience, relates in this three-part series, that doesn’t seem to have made it any easier to correct the scientific record. Here’s part one.
As it happened, I knew that paper, and I had also expressed concerns about it when I reviewed it before publication as a member of the journal’s editorial board. Indeed, I had been brought onto the editorial board of Sports Medicine because the journal had recently received a lot of bad press for publishing a paper about another “novel statistical method” with significant issues, and because I had been a vocal critic of the sports medicine and sport science field developing its own statistical methods, which are neither used outside the field nor validated by the wider statistics community.
Bucking the advice of university investigators, a journal founded by Hans Eysenck has issued expressions of concern — not retractions — for three articles by the deceased psychologist whose work has been dogged by controversy since the 1980s.
The move comes barely a week after other journals opted to retract 13 papers by Eysenck, who died in 1997. Those retractions were prompted by the findings of a 2019 investigation by King’s College London, where Eysenck worked until 1983. That inquiry concluded that: