Molecular Vision appears to have been flying blind when it retracted a 2013 paper by Rajendra Kadam and colleagues.
In December 2018, Kadam, a former “golden boy” in pharmaceutical research at the University of Colorado, Denver, was the subject of a finding by the U.S. Office of Research Integrity, which concluded that he had fabricated his data. As part of his settlement agreement with the agency, Kadam agreed to retract a paper in Molecular Vision.
Wouldn’t it be terrific if manuscripts and published papers could be checked automatically for errors? That was the premise behind an algorithmic approach we wrote about last week, and today we bring you a Q&A with Jennifer Byrne, the last author of a new paper in PLOS ONE that describes another approach, this one designed to find incorrect nucleotide sequence reagents. Byrne, a scientist at the University of Sydney, has worked with the first author of the paper, Cyril Labbé, and has become a literature watchdog. Their efforts have already led to retractions. She answered several questions about the new paper.
When Venkata Sudheer Kumar Ramadugu, then a postdoc at the University of Michigan, admitted to the university on June 28 of last year that he had committed research misconduct in a paper that appeared in Chemical Communications in 2017, he also “attested that he did not manipulate any data in his other four co-authored publications published while at the University of Michigan.”
And so, a few days later, Michael J. Imperiale, the university’s research integrity officer, wrote a letter to the U.S. Office of Research Integrity (ORI) informing them of the findings. On August 2, Ramadugu was terminated from Michigan. And on August 3, Ayyalusamy Ramamoorthy, the head of the lab where Ramadugu had worked, wrote a letter to Chemical Communications requesting retraction of the paper.
It’s become a sort of Retraction Watch Mad Libs: Author writes a paper that is far, far out of the mainstream. Maybe it argues that HIV doesn’t cause AIDS. Or that vaccines cause autism. Truth squads swarm over the paper, taking to blogs and Twitter to wonder, in the exasperated tone of those who have been here before, how on earth it was published in a peer-reviewed journal.
Then, in something that approaches — but does not quite qualify as — contrition, the journal in question retracts the paper, mumbling something in a retraction notice about a compromised peer review process, or that ghosts in the machine allowed the paper to be published instead of being rejected.
This week’s parade float entry is a paper in the International Journal of Anthropology and Ethnology, a Springer Nature title that is apparently sponsored by The Institute of Ethnology and Anthropology at the Chinese Academy of Social Sciences, where many of its editorial board members work.
Retraction Watch readers may have heard about Fr. Thomas Rosica, a priest who recently apologized for plagiarism and resigned from the board of a college. The case, which involved Rosica’s speeches and popular columns, prompted at least two observers to take a look at his scholarly work.
Maybe you’re a researcher who likes keeping up with developments in scientific integrity. Maybe you’re a reporter who has found a story idea on the blog. Maybe you’re an ethics instructor who uses the site to find case studies. Or a publisher who uses our blog to screen authors who submit manuscripts — we know at least two who do.
An endocrinology journal has pulled a 2017 paper by a group from Russia and Romania because, well, maybe it’s just better if you read for yourself.
The article, “Testosterone promotes anxiolytic-like behavior in gonadectomized male rats via blockade of the 5-HT1A receptors,” appeared in General and Comparative Endocrinology, an Elsevier publication.
How common are calculation errors in the scientific literature? And can they be caught by an algorithm? James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.
Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), along with their associated confidence intervals and p-values. They then analyzed whether these figures were compatible with one another. (Wren’s PhD advisor, Skip Garner, is also known for creating such algorithms to spot duplications.)
After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed statistical significance) had the calculations been performed correctly.
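The core consistency check can be sketched in a few lines of Python. This is our own illustrative reconstruction, not Wren and Georgescu’s actual code: it assumes the common log-normal approximation for ratio statistics (hazard or odds ratios), under which the standard error of the log-ratio can be recovered from a reported 95% confidence interval and used to recompute the p-value. The function names and the discrepancy tolerance are hypothetical choices for the sketch.

```python
import math

def recomputed_p(ratio, ci_low, ci_high):
    """Recompute a two-sided p-value from a ratio and its 95% CI.

    Assumes the usual log-normal approximation: the standard error of
    log(ratio) is (ln(upper) - ln(lower)) / (2 * 1.96) for a 95% CI.
    """
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.959964)
    z = math.log(ratio) / se
    # Two-sided p-value from the standard normal CDF (via erf, no SciPy)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def is_discrepant(ratio, ci_low, ci_high, reported_p, tol=0.01):
    """Flag a reported p-value that disagrees with the ratio and CI."""
    return abs(recomputed_p(ratio, ci_low, ci_high) - reported_p) > tol
```

For example, an odds ratio of 1.0 with CI (0.5, 2.0) implies p = 1.0, so a paper reporting “p < 0.05” for that result would be flagged. Applied across MEDLINE abstracts, a check of roughly this shape can surface internally inconsistent statistics at scale.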
We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.