Edward J. Fox, a former faculty member at the University of Washington in Seattle, faked data in a manuscript submitted to Nature and in an NIH grant application, according to new findings from the U.S. Office of Research Integrity (ORI).
Three years ago, the American Statistical Association (ASA) expressed hope that the world would move to a “post-p-value era.” The statement in which they made that recommendation has been cited more than 1,700 times, and apparently, the organization has decided that era’s time has come. (At least one journal had already banned p values by 2016.) In an editorial in a special issue of The American Statistician out today, “Statistical Inference in the 21st Century: A World Beyond P<0.05,” the executive director of the ASA, Ron Wasserstein, along with two co-authors, recommends that when it comes to the term “statistically significant,” “don’t say it and don’t use it.” (More than 800 researchers signed onto a piece published in Nature yesterday calling for the same thing.) We asked Wasserstein’s co-author, Nicole Lazar of the University of Georgia, to answer a few questions about the move. Here are her responses, prepared in collaboration with Wasserstein and the editorial’s third co-author, Allen Schirm.
Get it in writing. That’s the moral in a pair of retractions in different journals after authors claimed to have received oral — but not written — ethics approval for their research.
One paper, in the International Journal of Pediatrics, a Hindawi title, came from a group in Kuwait and Greece. Titled “Prevalence and associated factors of peer victimization (bullying) among grades 7 and 8 middle school students in Kuwait,” the article appeared in February 2017.
Wouldn’t it be terrific if manuscripts and published papers could be checked automatically for errors? That was the premise behind an algorithmic approach we wrote about last week, and today we bring you a Q&A with Jennifer Byrne, the last author of a new paper in PLOS ONE that describes another approach, this one designed to find incorrect nucleotide sequence reagents. Byrne, a scientist at the University of Sydney, has worked with the first author of the paper, Cyril Labbé, and has become a literature watchdog. Their efforts have already led to retractions. She answered several questions about the new paper.
When Venkata Sudheer Kumar Ramadugu, then a postdoc at the University of Michigan, admitted to the university on June 28 of last year that he had committed research misconduct in a paper that appeared in Chemical Communications in 2017, he also “attested that he did not manipulate any data in his other four co-authored publications published while at the University of Michigan.”
And so, a few days later, Michael J. Imperiale, the university’s research integrity officer, wrote a letter to the U.S. Office of Research Integrity (ORI) informing it of the findings. On August 2, Ramadugu was terminated from Michigan. And on August 3, Ayyalusamy Ramamoorthy, the head of the lab where Ramadugu had worked, wrote a letter to Chemical Communications requesting retraction of the paper.
Retraction Watch readers may have heard about Fr. Thomas Rosica, a priest who recently apologized for plagiarism and resigned from the board of a college. The case, which involved Rosica’s speeches and popular columns, prompted at least two observers to take a look at his scholarly work.
How common are calculation errors in the scientific literature? And can they be caught by an algorithm? James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.
Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), along with their associated confidence intervals and p-values, and analyzed whether these reported figures were compatible with one another. (Wren’s PhD advisor, Skip Garner, is also known for creating similar algorithms to spot duplications.)
After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed its significance) had the calculations been performed correctly.
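The underlying consistency check is simple in principle: for a ratio reported with a 95% confidence interval, the standard error can be recovered on the log scale and used to compute the p-value the interval implies, which can then be compared with the p-value the abstract reports. Below is a minimal sketch of that kind of check in Python; the function name, example numbers, and normal-approximation assumption are ours for illustration, not Wren and Georgescu’s actual code.

```python
# Sketch of a ratio / CI / p-value consistency check (illustrative only,
# not the authors' implementation).
import math
from scipy.stats import norm

def implied_p_value(ratio, ci_lower, ci_upper):
    """Two-sided p-value implied by a ratio and its 95% CI,
    assuming normality on the log scale (standard for odds/hazard ratios)."""
    log_ratio = math.log(ratio)
    # A 95% CI spans roughly 2 * 1.96 standard errors on the log scale.
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * 1.959964)
    z = log_ratio / se
    return 2 * norm.sf(abs(z))

# Hypothetical abstract: OR = 1.5 (95% CI 1.1-2.0), reported p = 0.2.
reported_p = 0.2
implied_p = implied_p_value(1.5, 1.1, 2.0)
print(f"implied p = {implied_p:.3f}")  # about 0.008, far from the reported 0.2
if (implied_p < 0.05) != (reported_p < 0.05):
    print("Discrepancy would change the significance call.")
```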
We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.
After years of back and forth, a highly cited paper that appeared to show that gay people living in areas with high levels of anti-gay prejudice had significantly shorter life expectancies has been retracted.
Retraction Watch readers may recall the name Yoshihiro Sato. The late researcher’s retraction total — now at 51 — gives him the number four spot on our leaderboard. He’s there because of the work of four researchers, Andrew Grey, Mark Bolland, and Greg Gamble, all of the University of Auckland, and Alison Avenell, of the University of Aberdeen, who have spent years analyzing Sato’s papers and found a staggering number of issues.
Those issues included fabricated data, falsified data, plagiarism, and implausible productivity, among others. In 2017, Grey and colleagues contacted four institutions where Sato or his co-authors had worked, and all four started investigations. In a new paper in Research Integrity and Peer Review, they describe some of what happened next:
When it comes to plagiarism, there is apparently no statute of limitations.
That’s one lesson you might take from this tale of two papers, one published in 1984 in the American Journal of Obstetrics and Gynecology (AJOG), and the other published in 2000 in the Medical Journal of The Islamic Republic of Iran (MJIRI). Both are titled “The use of breast stimulation to prevent postdate pregnancy.”