Semi-automated fact-checking for scientific papers? Here’s one method.

Jennifer Byrne

Wouldn’t it be terrific if manuscripts and published papers could be checked automatically for errors? That was the premise behind an algorithmic approach we wrote about last week, and today we bring you a Q&A with Jennifer Byrne, the last author of a new paper in PLOS ONE that describes another approach, this one designed to find incorrect nucleotide sequence reagents. Byrne, a scientist at the University of Sydney, has worked with the first author of the paper, Cyril Labbé, and has become a literature watchdog. Their efforts have already led to retractions. She answered several questions about the new paper.

Retraction Watch (RW): Seek & Blastn allows for “semi-automated fact-checking of nucleotide sequence reagents.” Can you explain what these reagents are used for, and what Seek & Blastn does?

Continue reading Semi-automated fact-checking for scientific papers? Here’s one method.
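Byrne and Labbé describe their tool in the PLOS ONE paper itself; as a rough illustration only, the general idea behind verifying a sequence reagent is to BLAST the published sequence against a public database and ask whether the strong hits match the gene the paper claims it targets. Here is a minimal, hypothetical sketch along those lines using Biopython (the gene name and primer are invented, and this is not Seek & Blastn itself):

```python
# Hypothetical sketch of a single reagent check, not the Seek & Blastn tool:
# BLAST a published primer sequence and see whether strong hits mention the
# gene the paper claims the primer targets.
from Bio.Blast import NCBIWWW, NCBIXML

claimed_gene = "TP53"                      # gene the paper says is targeted (invented example)
primer = "GTTCCGAGAGCTGAATGAGG"            # published reagent sequence (invented example)

# Submit the sequence to NCBI's online blastn service against the nt database.
result_handle = NCBIWWW.qblast("blastn", "nt", primer)
record = NCBIXML.read(result_handle)

# Collect the titles of reasonably strong hits.
hit_titles = [aln.title for aln in record.alignments
              if aln.hsps and aln.hsps[0].expect < 1e-3]

if any(claimed_gene in title for title in hit_titles):
    print(f"At least one strong hit mentions {claimed_gene}")
else:
    print(f"Possible mismatch: no strong hit mentions {claimed_gene}")
```

A flagged mismatch would still need a human to review it, which is where the “semi” in semi-automated comes in.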

A university thought its misconduct investigation was complete. Then a PubPeer comment appeared.

When Venkata Sudheer Kumar Ramadugu, then a postdoc at the University of Michigan, admitted to the university on June 28 of last year that he had committed research misconduct in a paper that appeared in Chemical Communications in 2017, he also “attested that he did not manipulate any data in his other four co-authored publications published while at the University of Michigan.”

And so, a few days later, Michael J. Imperiale, the university’s research integrity officer, wrote a letter to the U.S. Office of Research Integrity (ORI) informing them of the findings. On August 2, Ramadugu was terminated from Michigan. And on August 3, Ayyalusamy Ramamoorthy, the head of the lab where Ramadugu had worked, wrote a letter to Chemical Communications requesting retraction of the paper.

While the retraction would not appear until the end of November, and ORI sanctions would not be announced until the end of December, Michigan’s responsibilities seemed to have been discharged as of early August. But documents obtained by Retraction Watch through a public records request detail how that was not the end of the story.

Continue reading A university thought its misconduct investigation was complete. Then a PubPeer comment appeared.

Plagiarism prompts retraction of 25-year-old article by prominent priest

Fr. Thomas Rosica

Retraction Watch readers may have heard about Fr. Thomas Rosica, a priest who recently apologized for plagiarism and resigned from the board of a college. The case, which involved Rosica’s speeches and popular columns, prompted at least two observers to take a look at his scholarly work.

One of those observers was Michael Dougherty, who has a well-earned reputation as the plagiarism police in certain fields and recently published Correcting the Scholarly Record for Research Integrity: In the Aftermath of Plagiarism. Dougherty wrote a letter to the journal in question, Worship, on February 20th.

In the letter, which we have posted here, Dougherty writes:

Continue reading Plagiarism prompts retraction of 25-year-old article by prominent priest

Will scientific error checkers become as ubiquitous as spell-checkers?

Jonathan Wren

How common are calculation errors in the scientific literature? And can they be caught by an algorithm?  James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.

Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), as well as their associated confidence intervals and p-values. They then analyzed whether these calculations were compatible with each other. (Wren’s PhD advisor, Skip Garner, is also known for creating similar algorithms to spot duplications.)
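Setting aside the paper’s specific pipeline, the arithmetic behind such a consistency check is standard: on the log scale, a ratio and its 95% confidence interval imply a standard error, and therefore a p-value, which can be compared with the p-value reported in the abstract. A minimal sketch under a normal approximation (the function name and the example numbers are illustrative, not taken from the study):

```python
# Illustrative consistency check (not the authors' code): recompute the
# p-value implied by a reported ratio and its 95% CI, then compare it with
# the p-value stated alongside them.
import math
from scipy.stats import norm

def implied_p_value(ratio, ci_lower, ci_upper, conf_level=0.95):
    """Two-sided p-value implied by a ratio and its confidence interval,
    assuming the CI was built on the log scale with a normal approximation."""
    z_crit = norm.ppf(1 - (1 - conf_level) / 2)                # ~1.96 for a 95% CI
    se_log = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z_crit)
    z = math.log(ratio) / se_log
    return 2 * norm.sf(abs(z))

# Invented example: an abstract reports OR = 1.50 (95% CI 1.10-2.05), p = 0.2.
reported_p = 0.2
p = implied_p_value(1.50, 1.10, 2.05)
print(f"implied p = {p:.3f}")                                  # about 0.011

# A discrepancy that flips significance is the kind Wren and Georgescu
# counted as potentially altering a study's conclusion.
if (p < 0.05) != (reported_p < 0.05):
    print("Reported and implied p-values disagree about significance")
```

The same log-scale reasoning applies to hazard ratios and relative risks, whose confidence intervals are conventionally symmetric on the log scale.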

After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed its significance) had the calculations been performed correctly.

We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.

Retraction Watch (RW): What prompted you to perform your study?

Continue reading Will scientific error checkers become as ubiquitous as spell-checkers?

Study claiming hate cuts 12 years off gay lives retracted

Low Library, Columbia University

After years of back and forth, a highly cited paper that appeared to show that gay people living in areas with high levels of anti-gay prejudice had a significantly shorter life expectancy has been retracted.

The paper, “Structural stigma and all-cause mortality in sexual minority populations,” was published in 2014 by Mark Hatzenbuehler of Columbia University and colleagues. As we reported last year, Mark Regnerus, of the University of Texas at Austin, published a paper describing his failed attempts to replicate the study in 2016:

Continue reading Study claiming hate cuts 12 years off gay lives retracted

New study finds “important deficiencies” in university reports of misconduct

via NASA

Retraction Watch readers may recall the name Yoshihiro Sato. The late researcher’s retraction total — now at 51 — gives him the number four spot on our leaderboard. He’s there because of the work of four researchers, Andrew Grey, Mark Bolland, and Greg Gamble, all of the University of Auckland, and Alison Avenell, of the University of Aberdeen, who have spent years analyzing Sato’s papers and found a staggering number of issues.

Those issues included fabricated data, falsified data, plagiarism, and implausible productivity, among others. In 2017, Grey and colleagues contacted four institutions where Sato or his co-authors had worked, and all four started investigations. In a new paper in Research Integrity and Peer Review, they describe some of what happened next:

Continue reading New study finds “important deficiencies” in university reports of misconduct

Journal to retract article from 2000 that plagiarized one from 1984

Image by Nick Youngson via Image Creator CC BY-SA 3.0

When it comes to plagiarism, there is apparently no statute of limitations.

That’s one lesson one might take from this tale of two papers, one published in 1984 in the American Journal of Obstetrics and Gynecology (AJOG), and the other published in 2000 in the Medical Journal of The Islamic Republic of Iran (MJIRI). Both are titled “The use of breast stimulation to prevent postdate pregnancy.”

Here’s the abstract of the AJOG article, written by two researchers at the Letterman Army Medical Center in San Francisco, California:

Continue reading Journal to retract article from 2000 that plagiarized one from 1984

Journal retracts more than 400 papers at once

Ladies and gentlemen, we appear to have a new record.

The Journal of Fundamental and Applied Sciences (JFAS) recently retracted 434 articles from three of its issues. Yes, 434, giving it more retractions than any other journal ever, according to our records.

All of the articles, on topics ranging from “Effect of olive leaf extract on calcaeous deposit from sea” to “Optimization of mobile user data sharing on secure cloud,” have now been replaced with this notice:

Continue reading Journal retracts more than 400 papers at once

Found in translation: Authors blame language barriers after forging co-authors

When the merde hits the fan, blame the translator. That’s Rule 1 of botched international diplomacy — and, evidently, botched international science.

Otolaryngology researchers in China have lost their 2018 paper in the American Journal of Translational Research for what they’re calling (with some degree of chutzpah) language barriers.

The article, “Therapeutic ultrasound potentiates the anti-nociceptive and anti-inflammatory effects of curcumin to postoperative pain via Sirt1/NF-κB signaling pathway,” came from a group whose primary affiliation was the Second Military Medical University in Shanghai. (It hasn’t been cited, according to Clarivate’s Web of Science.) However, the list of authors also included several scientists in Germany.

Evidently, the Germans were most unzufrieden (that is, displeased).

According to the retraction notice:

Continue reading Found in translation: Authors blame language barriers after forging co-authors

Legal threats once again force corrections over a scale measuring medication usage

Donald Morisky

A journal is warning contributors that they should avoid using a controversial scale for assessing adherence to medication regimens or they might wind up wearing an omelette on their faces.

The chicken here, of course, is the Morisky Medication Adherence Scale. The instrument was developed by a UCLA professor named Donald Morisky, who, with a colleague named Steve Trubow, threatens to sue anyone they believe has misused the tool without obtaining a license.

As we have detailed on this blog and in Science, many researchers report that Morisky and Trubow seem to set traps for them, ignoring their requests for a license and then hammering them with demands for citations, money — often tens of thousands of dollars or more — or both once their work has been published. Failure to comply, the pair assert, could lead to a lawsuit. (Morisky sometimes fails to note his own financial conflict here, as he did in this 2017 paper in PLOS ONE touting the accuracy of his tool.)

Continue reading Legal threats once again force corrections over a scale measuring medication usage