When Venkata Sudheer Kumar Ramadugu, then a postdoc at the University of Michigan, admitted to the university on June 28 of last year that he had committed research misconduct in a paper that appeared in Chemical Communications in 2017, he also “attested that he did not manipulate any data in his other four co-authored publications published while at the University of Michigan.”
And so, a few days later, Michael J. Imperiale, the university's research integrity officer, wrote a letter to the U.S. Office of Research Integrity (ORI) informing it of the findings. On August 2, Ramadugu was terminated from Michigan. And on August 3, Ayyalusamy Ramamoorthy, the head of the lab where Ramadugu had worked, wrote a letter to Chemical Communications requesting retraction of the paper.
It’s become a sort of Retraction Watch Mad Libs: Author writes a paper that is far, far out of the mainstream. Maybe it argues that HIV doesn’t cause AIDS. Or that vaccines cause autism. Truth squads swarm over the paper, taking to blogs and Twitter to wonder, in the exasperated tone of those who have been here before, how on earth it was published in a peer-reviewed journal.
Then, in something that approaches — but does not quite qualify as — contrition, the journal in question retracts the paper, mumbling in a retraction notice about a compromised peer review process, or about ghosts in the machine that allowed the paper to be published instead of rejected.
This week’s parade float entry is a paper in the International Journal of Anthropology and Ethnology, a Springer Nature title that is apparently sponsored by The Institute of Ethnology and Anthropology at the Chinese Academy of Social Sciences, where many of its editorial board members work.
Maybe you’re a researcher who likes keeping up with developments in scientific integrity. Maybe you’re a reporter who has found a story idea on the blog. Maybe you’re an ethics instructor who uses the site to find case studies. Or a publisher who uses our blog to screen authors who submit manuscripts — we know at least two who do.
An endocrinology journal has pulled a 2017 paper by a group from Russia and Romania because, well, maybe it’s just better if you read for yourself.
The article, “Testosterone promotes anxiolytic-like behavior in gonadectomized male rats via blockade of the 5-HT1A receptors,” appeared in General and Comparative Endocrinology, an Elsevier publication.
How common are calculation errors in the scientific literature? And can they be caught by an algorithm? James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.
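For readers unfamiliar with GRIM, its core idea is simple: a mean of integer data (say, Likert-scale responses) reported to a fixed number of decimal places must be reachable as some integer total divided by the sample size. A minimal sketch of that check, with a function name of our own choosing, might look like this:

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Sketch of the GRIM idea: return True if some integer total of n
    integer-valued items rounds to the reported mean at the given precision."""
    total = round(mean * n)  # nearest candidate integer sum
    # check the nearest sums in case of rounding at the boundary
    for candidate in (total - 1, total, total + 1):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

# A reported mean of 5.19 from n = 28 integer responses is impossible:
# no integer total divided by 28 rounds to 5.19.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True (145 / 28 = 5.1786 -> 5.18)
```

This is only an illustration of the arithmetic; the published GRIM test handles further edge cases (e.g., multi-item scales) that a real checker would need.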
Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), as well as their associated confidence intervals and p-values. They then analyzed whether these calculations were compatible with each other. (Wren’s PhD advisor, Skip Garner, is also known for creating similar algorithms to spot duplications.)
After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed significance) had the calculations been performed correctly.
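The consistency check behind this kind of mining rests on a standard relationship: a ratio and its 95% confidence interval imply a standard error on the log scale, which in turn implies a p-value that can be compared against the one reported. A minimal sketch, assuming the CI was built on the log scale with z = 1.96 (the function name is ours, not from the paper):

```python
import math

def p_from_ratio_ci(lower: float, upper: float) -> float:
    """Recompute a two-sided p-value implied by a ratio's 95% CI,
    assuming the interval was constructed on the log scale."""
    log_l, log_u = math.log(lower), math.log(upper)
    se = (log_u - log_l) / (2 * 1.96)   # standard error of log(ratio)
    log_ratio = (log_u + log_l) / 2     # midpoint = log of the point estimate
    z = abs(log_ratio) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A CI of (1.2, 3.0) excludes 1, so the implied p should be well below 0.05;
# a CI of (0.8, 1.5) crosses 1, so the implied p should exceed 0.05.
print(p_from_ratio_ci(1.2, 3.0))
print(p_from_ratio_ci(0.8, 1.5))
```

If the recomputed p-value disagrees with the reported one (for instance, reported as significant when the CI crosses 1), the abstract is flagged as internally discrepant; the paper's actual pipeline surely involves more parsing and tolerance logic than this toy version.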
We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.
After years of back and forth, a highly cited paper that appeared to show that gay people living in areas with high levels of anti-gay prejudice had significantly shorter life expectancies has been retracted.
Retraction Watch readers may recall the name Yoshihiro Sato. The late researcher’s retraction total — now at 51 — gives him the number four spot on our leaderboard. He’s there because of the work of four researchers, Andrew Grey, Mark Bolland, and Greg Gamble, all of the University of Auckland, and Alison Avenell, of the University of Aberdeen, who have spent years analyzing Sato’s papers and found a staggering number of issues. Those issues included fabricated data, falsified data, plagiarism, and implausible productivity, among others.

In 2017, Grey and colleagues contacted four institutions where Sato or his co-authors had worked, and all four started investigations. In a new paper in Research Integrity and Peer Review, they describe some of what happened next: