In what seems like another entry in our occasional “Retraction Watch Mad Libs” series, Elsevier has withdrawn a paper that claimed to link the aluminum in vaccines to behavioral changes in sheep.
Edward J. Fox, a former faculty member at the University of Washington in Seattle, faked data in a manuscript submitted to Nature and in an NIH grant application, according to new findings from the U.S. Office of Research Integrity (ORI).
A researcher at Kyoto University in Japan faked some of the data in a 2017 paper in Science about the deadly Kumamoto earthquake, the university said.
According to media reports about a press conference held today, Kyoto found that the paper’s first author, Aiming Lin, had committed misconduct, including falsification of data and plagiarism. The university recommended that Lin retract the paper, and said he would face sanctions, while his co-authors were cleared of wrongdoing.
Tomorrow is Joe Thomas’s 35th birthday. And earlier this week, he received quite a birthday present, even if it wasn’t intended that way: Thomas earned a $33.75 million payout from a lawsuit he filed against Duke University six years ago.
Retraction Watch readers may recall the name Erin Potts-Kant. We’ve been reporting on retractions by Potts-Kant, a former lab tech at Duke, since 2013. (The count is now 17.) Along the way, we learned that she had been convicted of embezzlement, but that there was a bigger story: There was a False Claims Act case against Duke, Potts-Kant, and Michael Foster, in whose lab she worked, alleging that the university had known that faked data had been included in grant applications.
Three years ago, the American Statistical Association (ASA) expressed hope that the world would move to a “post-p-value era.” The statement in which it made that recommendation has been cited more than 1,700 times, and apparently the organization has decided that era’s time has come. (At least one journal had already banned p-values by 2016.) In an editorial in a special issue of The American Statistician out today, “Statistical Inference in the 21st Century: A World Beyond P<0.05,” the executive director of the ASA, Ron Wasserstein, along with two co-authors, recommends that when it comes to the term “statistically significant,” “don’t say it and don’t use it.” (More than 800 researchers signed on to a piece published in Nature yesterday calling for the same thing.) We asked Wasserstein’s co-author, Nicole Lazar of the University of Georgia, to answer a few questions about the move. Here are her responses, prepared in collaboration with Wasserstein and the editorial’s third co-author, Allen Schirm.