Twelve years after sleuths flagged problematic images in a 2009 paper, the authors — including the head of a UK research institute — have retracted the article.
The paper, published in Genes & Development, has been cited 126 times, according to Clarivate’s Web of Science.
According to the June 1 retraction notice, the authors retracted the paper because of “anomalies in the data presented” in multiple figures. “The issues relate to potential instances of image manipulation, including undisclosed splicing, lane flipping, and lane and panel duplications in the preparation of these figures.”
A home economics journal delisted from Scopus last year has called the decision “biased against journals from developing countries.”
Elsevier delisted the journal Nurture, published by “Nurture Publishing Group,” from the publisher’s citation database in June 2024, after indexing it for a dozen years. In an editorial published this April, Sadie Ahmad, the editorial manager for Nurture, wrote that Scopus delisted the journal for three reasons: an increase in the number of scientific articles published, papers on topics beyond the scope of the journal, and an uptick in authors from different countries.
A representative from Elsevier told us Scopus’ decision was also a result of “weak quality” of papers and “low citation metrics compared to what one would expect of a journal with such history and scope.” The journal has been publishing since 2007.
The link to the information page for the book, Mastering Machine Learning: From Basics to Advanced, now returns “Page not found,” and the text is no longer listed under the book series on computer systems and networks.
A dean at an Australian university sought to correct some of his papers. He received a retraction instead.
We wrote last year about Marcel Dinger, dean of science at the University of Sydney, who was a coauthor on five papers citing multiple references that had been retracted. In May 2024, Alexander Magazinov, a scientific sleuth and software engineer based in Kazakhstan, had flagged the papers on PubPeer for “references of questionable reliability.” Magazinov credited the Problematic Paper Screener with helping him find them.
Dinger told us at the time he intended to work with editors to determine whether the five papers should be corrected or retracted.
In its second batch of misconduct findings this year, the organization responsible for allocating basic research funding in China has called out 25 researchers for paper mill activity and plagiarism.
The National Natural Science Foundation of China, or NSFC, gives more than 20,000 grants annually in disciplines ranging from agriculture to cancer research. The NSFC publishes the reports periodically “in accordance with relevant regulations,” according to the first report, released in April. The organization awarded 31.9 billion yuan, or about US$4.5 billion, in project funds in 2023.
The concerning figure from the paper, Fig. 2A, with increased contrast, courtesy of “Mycosphaerella arachidis” on PubPeer.
A journal has retracted a 22-year-old paper, whose first author is the integrity officer for the Committee on Publication Ethics, over concerns about image editing that “would not be acceptable by modern standards of figure presentation.”
Sleuth Sholto David, who goes by the name “Mycosphaerella arachidis” on PubPeer, raised concerns about the image in December 2023, pointing out a “[d]ark rectangle” that appeared to be “superimposed onto the image.”
A journal will not retract a paper linking use of talc-based baby powder to cancer, despite legal pressure from the pharmaceutical giant that made the product.
A lawyer representing a unit of Johnson & Johnson in May asked editors of the Journal of Occupational and Environmental Medicine to retract a paper on cases of mesothelioma associated with cosmetic talc, following the court-ordered release of the identities of the people described in the article.
The lawyer alleged many of the patients had exposures to asbestos other than cosmetic talc, rendering the article’s fundamental claims “false.”
Twenty journals lost their impact factors in this year’s Journal Citation Reports, released today, for excessive self-citation and citation stacking. Nearly half of the journals on the list are from well-known publishers MDPI, Sage, Springer, Taylor & Francis, and Wiley.
Clarivate releases the annual Journal Citation Reports each June. For the first time, the company excluded citations to retracted papers when calculating this year’s impact factors. Amy Bourke-Waite, a communications director for Clarivate, told Retraction Watch this change affected 1% of journals, none of which lost impact factors in 2025.
Many institutions use the controversial metric as an indicator of journal quality. And suppressing a journal’s impact factor can have negative effects on the publication and the authors who publish papers in it.
One journal’s trash is another’s treasure – until a former peer reviewer stumbles across it and sounds an alarm.
In April, communications professor Jacqueline Ewart got a Google Scholar notification about a paper published in the journal World of Media, which she had reviewed, and recommended rejecting, for another journal several months earlier.
At the time, she recommended against publishing the article, “Monitoring the development of community radio: A comprehensive bibliometric analysis,” in the Journal of Radio and Audio Media, or JRAM, because she had concerns the article was written by AI. She also noticed several references, including one she supposedly wrote, were fake.
Driving those headlines was a December 2014 study in Science, by Michael J. LaCour, then a Ph.D. student at the University of California, Los Angeles, and Donald Green, a professor at Columbia University.
Researchers praised the “buzzy new study,” as Slate called it at the time, for its robust effects and impressive results. The key finding: A brief conversation with a gay door-to-door canvasser could change the mind of someone opposed to same-sex marriage.
By the time the study was published, David Broockman, then a graduate student at the University of California, Berkeley, had already seen LaCour’s results and was keen to pursue his own version of the study. He and fellow graduate student Joshua Kalla had collaborated before and wanted to look more closely at the impact canvassing could have on elections. But as the pair deconstructed LaCour’s study to figure out how to replicate it, they hit several curious stumbling blocks. And when they got hold of LaCour’s dataset, or replication package, they quickly realized the results weren’t adding up.