A former postdoc at the University of Texas Health Science Center has been found guilty of misconduct stemming from efforts to rig preprint servers to boost the postdoc’s publication metrics.
The findings about Yibin Lin include the fabrication and falsification of data, as well as plagiarism, in six papers posted to the preprint server bioRxiv that have since been retracted. On none of those articles does the name “Yibin Lin” appear as an author.
Ask Kevin Pile. Pile edits the International Journal of Rheumatic Diseases (let’s call it the IJRD), a Wiley publication. Last year, he published a guest editorial by Vaidehi Chowdhary, a rheumatologist at Yale University in New Haven, Conn., on a form of kidney disease.
But it turns out that Chowdhary, a member of Pile’s editorial team, had intended to submit her article, “When doing the right thing is wrong: Drug efflux pumps in steroid‐resistant nephrotic syndrome,” to a different journal, the Indian Journal of Rheumatology, or IJR. We think you can see how this all went down.
According to Pile, the episode was “a tale of consecutive mistakes.”
One of the many fun things about reporting on retractions is that we get to expand our statistical knowledge. To wit, follow along as we explore the concept of immortal time bias.
A JAMA journal has retracted and replaced a paper by authors at the University of Massachusetts after another researcher identified a critical statistical error in their study.
The paper, “Association of Antibiotic Treatment With Outcomes in Patients Hospitalized for an Asthma Exacerbation Treated With Systemic Corticosteroids,” was written by a group led by Mihaela Stefan, the associate director of the Institute for Healthcare Delivery and Population Science at UMass, and appeared in JAMA Internal Medicine in 2019.
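For those unfamiliar with the term, immortal time bias arises when patients are grouped by a treatment they eventually received during follow-up: anyone who died before the treatment could be given is automatically counted as untreated, so the treated group gets credit for the survival that receiving the treatment required. Here is a minimal, hypothetical Python simulation of our own devising — it uses none of the study’s data or methods, and every parameter is made up — showing how an entirely inert drug can appear to slash mortality under that naive grouping:

```python
import random

# Illustrative sketch of immortal time bias. The drug below does NOTHING,
# yet a naive "ever treated vs. never treated" comparison makes it look
# protective. All numbers are arbitrary.
random.seed(42)

N = 100_000
treated, untreated = [], []

for _ in range(N):
    # Each patient has a 20% chance of dying, on a uniformly random day
    # of a 10-day hospital stay; otherwise they survive.
    death_day = random.randint(1, 10) if random.random() < 0.20 else None

    # Half of the patients are scheduled to receive the inert drug on a
    # uniformly random day, independent of their outcome.
    scheduled_day = random.randint(1, 10) if random.random() < 0.50 else None

    # Naive grouping: a patient counts as "treated" only if still alive on
    # the scheduled day -- that pre-treatment survival is the "immortal
    # time" wrongly credited to the drug.
    got_drug = scheduled_day is not None and (
        death_day is None or scheduled_day < death_day
    )

    (treated if got_drug else untreated).append(death_day is not None)

print(f"naive mortality, treated:   {sum(treated) / len(treated):.1%}")     # roughly 10%
print(f"naive mortality, untreated: {sum(untreated) / len(untreated):.1%}") # roughly 28%
# True mortality is 20% in both arms by construction.
```

The standard remedies are to model exposure as time-varying, or to use a landmark analysis, so that the days a patient survived before being treated are not credited to the treatment itself.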
When David Cox noticed on Dec. 10, 2020, that two papers in the journal Cluster Computing listed him as an author, he didn’t think much of it at first.
“I have a common name, so it is not unheard of to have an article written by another David Cox assigned to my profile,” Cox said. “I thought that was what these papers must have been at first, but then I opened the articles and saw my affiliation, email, and picture in them.”
Shocked, Cox tweeted that “the whole thing is yucky.” The corresponding author on the two studies now says that he plans to withdraw the papers, and that a co-author made the decision to include Cox’s name and has been fired from his research position over the incident. Yesterday, on January 25, the publisher flagged one of the papers.
Cox, who is the IBM Director of the MIT-IBM Watson AI Lab, in Cambridge, Mass., learned about the articles after logging on to DBLP, a bibliography website that tracks articles published by computer scientists. “I check these sites from time to time to make sure everything is correct,” he said.
Not long ago, Amy Barnhorst opened an email from the editor of a journal to which she and a colleague had submitted a paper on gun violence before ultimately pulling it.
The cheery note — “thought you two might be interested to see what we came up with” — announced the publication of a recent article in the Journal of Health Service Psychology by a pair of authors. The title, “Collaborating with Patients on Firearms Safety in High-Risk Situations,” had an unpleasant whiff of irony to it — because the article was, in fact, Barnhorst’s own work. (Barnhorst told us she wanted to wait to name the paper until it was retracted, but the JHSP paper, identified by sleuth Elisabeth Bik, matches passages and descriptions tweeted by Barnhorst.)
Barnhorst, the vice chair of psychiatry at UC Davis and the director of the Bullet Points Project, a program to help clinicians prevent firearm injuries among their patients, described the episode on Twitter.
A journal has retracted a 2018 paper that linked negative news coverage to physical and mental health problems.
The article, “When Words Hurt: Affective Word Use in Daily News Coverage Impacts Mental Health,” was published in Frontiers in Psychology in August 2018. The study has been cited six times, according to Clarivate Analytics’ Web of Science. In March 2020, an article in The Conversation used the study’s findings to argue that kids should reduce their television intake during the coronavirus pandemic to ward off anxiety.
First author Jolie Wormwood, an assistant professor of psychology at the University of New Hampshire, said she decided to pull the study after revisiting the dataset. She found that some of the study participants—95 people in the Boston area who completed a questionnaire three times over a nine-month period—gave inconsistent answers about their memory of an event. That normally might not be too worrying, since memories “shift over time,” according to Wormwood, but a bit more sleuthing revealed that the researchers had inadvertently mixed up the IDs assigned to study participants.
We’re rounding out the week with a third post about paper mills: A Taylor & Francis journal is up to 39 retractions, 18 of which appear to have been the work of at least one such operation.
Last March, the publication, “Artificial Cells, Nanomedicine, and Biotechnology,” issued an expression of concern for 13 of the articles, after a group of data sleuths pointed out problems with the papers.
As Science magazine pointed out at the time, the sleuths, including Elisabeth Bik, found evidence that more than 400 articles generated by the suspected mill contained fabricated images. All of the papers came from research teams based in China, they noted.
A year and a half after its publication, the paper, a high-profile study of ultra-processed foods, is the subject of two critical blog posts, one by Nick Brown and one by Ethan and Sarah Ludwin-Peery. In the days since we first shared embargoed drafts of those posts with Kevin Hall, the study’s first author, he and the sleuths engaged in a back and forth, and Brown and the Ludwin-Peerys now say they are satisfied that many of the major issues appear to have been resolved. They have also made changes to their posts, including adding responses from Hall.
In short, it seems like a great example of public post-publication peer review in action. For example, the Ludwin-Peerys write:
When we took a close look at these data, we originally found a number of patterns that we were unable to explain. Having communicated with the authors, we now think that while there are some strange choices in their analysis, most of these patterns can be explained…
In a draft of their post shared with us early last week — see “a note to readers” below — the Ludwin-Peerys said that some of the data in the study “really bothered” them. In particular, they write, the two groups of people studied — 20 received ultra-processed foods, while 20 were given an unprocessed diet — “report the same amount of change in body weight, the only difference being that one group gained weight and the other group lost it.” They were also surprised by the “pretty huge” correlation between weight changes and energy intake.
Brown’s draft post, meanwhile, digs further into the data before drawing its conclusions.
Of course, nothing of the kind occurred. The carefully curated moment was less informative for its scientific value — in effect, nil — than for what it says about how years of attention-seeking and speculation in biology can drive an agenda. Equally concerning, the journal involved has yet to retract the article in question, despite an intervening decade in which other researchers debunked the overhyped result, allowing it to live on in a zombie state.
The announcement at the press conference was, to the disappointment of many, the supposed “discovery,” published in Science, of a microbe that could grow on arsenate in the absence of phosphate and incorporate arsenic in place of phosphorus in macromolecules such as nucleic acids and proteins. Steven Benner, referring to himself as a curmudgeon, was the only individual at the press conference who talked real sense by undermining the claims. Mary Voytek, NASA Senior Scientist for Astrobiology, a position she still occupies, employed a Star Trek analogy.