
Starting around 2023, a curious trend took hold in papers on drug safety monitoring. The number of articles linking an individual drug to specific adverse events went from a steady climb to a huge spike.
The data source in most of those articles was the same: the FDA Adverse Event Reporting System, or FAERS. In 2021, around 100 studies mining FAERS for drug safety signals were published. In 2024, that number was 600, with more than that already published this year.
Two journals in particular published the bulk of these papers, Frontiers in Pharmacology and Expert Opinion on Drug Safety. In response to the flood, Frontiers started to require independent validation of studies drawing on public datasets. And Expert Opinion on Drug Safety decided in late July to stop accepting submissions that use the FAERS database for this particular type of study altogether.
“By presenting mere statistical associations as ‘safety signals’, these publications can generate unjustified alarms with considerable impact on healthcare provider practices and patient behaviors,” Charles Khouri, a pharmacologist at Grenoble Alpes University Hospital in France, and colleagues wrote in a preprint posted Sept. 14 that looks at the spike in pharmacovigilance studies.

Source: C. Khouri et al. 2025
The work comes on the heels of sleuths identifying FAERS as one of several publicly available datasets being exploited by paper mills, pumping hundreds of often meaningless and sometimes misleading papers into the scientific literature.
An open database, FAERS contains 31.8 million records voluntarily submitted to the FDA by healthcare professionals, patients and consumers on adverse events and medication errors related to drugs and biologics. While the database is useful for flagging adverse events from newly approved drugs, it can be misused as a research tool.
“You can imagine in a large database with maybe millions of different drugs, millions of different adverse events, you can perform an infinite number of statistical analyses,” Khouri told Retraction Watch.
FAERS is a voluntary reporting system, so “only an unknown proportion of all adverse events are reported,” Khouri said. And factors like a drug’s novelty or media attention can affect whether people report adverse events.
That same selectivity and voluntary reporting structure affects the data in the Vaccine Adverse Event Reporting System, or VAERS, jointly managed by the FDA and CDC. Health secretary Robert F. Kennedy Jr. has vowed to overhaul VAERS, even as those campaigning against the use of vaccines publish papers — often retracted — using it. News outlets have reported that Trump administration officials plan to present VAERS data later this week linking COVID-19 vaccines to at least 25 deaths in children who received the shots. As with FAERS data, raw reports in VAERS are unconfirmed, and any associations between an injury and a vaccine may be coincidental.
The FAERS papers flooding the journals are a type of study called a disproportionality analysis, which can identify previously unknown side effects of drugs after their approval. “Maybe 60 to 70 percent of modifications in label information after the commercialization of drugs are coming from a pharmacovigilance database like FAERS,” Khouri said.
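At its core, a disproportionality analysis compares how often a suspected drug-event pair shows up in the database with how often that event shows up alongside all other drugs, summarized in a 2×2 contingency table. Below is a minimal sketch, using made-up counts rather than real FAERS data, of one commonly used metric, the reporting odds ratio (ROR); a signal is often flagged when the lower bound of its 95 percent confidence interval exceeds 1.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio for one drug-event pair from a 2x2 table.

    a: reports with the drug and the event of interest
    b: reports with the drug and any other event
    c: reports with other drugs and the event of interest
    d: reports with other drugs and any other event
    """
    ror = (a * d) / (b * c)
    # Approximate 95% confidence interval on the log scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts for illustration only -- not real FAERS numbers
ror, lo, hi = reporting_odds_ratio(a=40, b=9_960, c=2_000, d=1_988_000)
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The calculation itself is trivial, which is part of the problem: with the whole database downloadable, the same few lines can be rerun across thousands of drug-event pairs until something crosses the threshold.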
Khouri pointed to a disproportionality analysis of a diabetes drug called pioglitazone that found a possible increased risk of bladder cancer. Further studies corroborated the risk and ultimately led to labeling changes for the drug.
From 2019 to 2022, Expert Opinion on Drug Safety published single- to low double-digit numbers of these studies. That began to change in 2023 and exploded in 2024, when the journal published 174 disproportionality studies using FAERS, nearly 60 percent of the total papers it published that year – and as many papers as it had published in total in 2021.
The number of disproportionality studies submitted to the journal “rose significantly,” a Taylor & Francis spokesperson told us. “Even after we had put additional resource[s] in place to handle the journal’s pre-review assessments, this had become a challenging situation to manage,” the spokesperson said. “While disproportionality studies can make a useful contribution to the scholarly literature, such papers can include methodological problems, which led to the journal having a rejection rate of over 80%.”
In late July, Taylor & Francis and the journal’s editor-in-chief, Roger McIntyre, professor of psychiatry and pharmacology at the University of Toronto, decided “the journal will no longer consider unsolicited disproportionality studies using [FAERS] or similar spontaneous reporting databases,” the spokesperson said.
The journal’s website now states, “Such studies will only be considered when specifically invited by the journal’s editorial team.”
Two requests for interviews sent to McIntyre’s university email address were answered by Taylor & Francis spokespeople. McIntyre, who is also the head of the Mood Disorders Psychopharmacology Unit at Toronto’s University Health Network, is a coauthor on five articles in Expert Opinion on Drug Safety that draw on data from FAERS. They include analyses of associations between insomnia drugs, ketamine or GLP-1 agonists and suicide.
In August the journal retracted a paper based on FAERS for including a coauthor without his consent. “While no other disproportionality studies are currently under investigation by our Publishing Ethics & Integrity team, we are conducting further checks on several articles in the journal,” the spokesperson said.
Frontiers in Pharmacology also contributed to the flood of published studies drawing on FAERS. The journal published about 30 such studies in 2023 — and more than 120 in 2024.
In May 2025, Frontiers introduced a policy across all of its journals requiring “independent external validation of all health-dataset studies,” Elena Vicario, head of research integrity at Frontiers, told us. The move followed a July 2024 policy requiring such validation for submissions using Mendelian randomization methods. “The concern is not the use of FAERS itself but the risk of redundant analyses that add little new scientific understanding,” Vicario said.
Frontiers in Pharmacology has since updated its About page to reinforce this policy, Vicario said. “Since July 2024, 739 FAERS submissions to Frontiers in Pharmacology have been rejected, with 9 published since the updating of the author guidelines in 2025.”
The bolus of newly published FAERS papers has some notable characteristics, Khouri has found, in collaboration with computer scientist and sleuth Cyril Labbé, also at Grenoble Alpes University, Emanuel Raschi from the University of Bologna, Alex Hlavaty from the University of Grenoble, and others.
For example, nearly 80 percent of these studies published in Expert Opinion on Drug Safety between 2019 and 2025 are from authors affiliated with institutions in China. “Chinese authors were totally absent from the field before 2021,” Khouri said.
An analysis of the papers in Expert Opinion on Drug Safety revealed a few authors on multiple articles. At the top of the list is Bin Wu of Sichuan University in Chengdu, who has published 27 disproportionality studies based on FAERS, seven of which appear in Expert Opinion on Drug Safety. Li Chen, also of Sichuan University, has six papers in the journal, and Wei Liu of Zhengzhou University is a coauthor on at least four studies. None of Wu, Chen or Liu responded to emailed requests for comment on their field of study or particular interest in the FAERS database.
Many of the recent papers use multiple statistical methods for the disproportionality analysis, whereas researchers typically use only one method, because multiple tests are redundant. Other similar features across recent papers include a flow chart showing how data were selected, and a mirror plot showing “time to onset.” “We never saw that before,” Khouri said. “The information is quite useless to be twice plotted in the same figure.”
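To see why piling on statistical methods adds little, consider computing both the reporting odds ratio and the proportional reporting ratio from the same made-up 2×2 table used in the sketch above: because both are built from the same four counts, they almost always flag, or miss, the same drug-event pairs.

```python
# Hypothetical 2x2 table for one drug-event pair (illustration only)
a, b, c, d = 40, 9_960, 2_000, 1_988_000

ror = (a * d) / (b * c)              # reporting odds ratio
prr = (a / (a + b)) / (c / (c + d))  # proportional reporting ratio

# The two metrics land within a hair of each other (~3.99 vs ~3.98),
# so reporting both conveys essentially the same information.
print(f"ROR = {ror:.2f}, PRR = {prr:.2f}")
```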


And another shared feature of the recently published disproportionality analyses: “There is no research question,” Khouri said.
Many of the studies also reflect a lack of understanding of the drugs and conditions being described. As an example, Khouri pointed to a paper finding a link between sildenafil and pulmonary hypertension as an adverse event — when in fact the drug is prescribed for that very condition.
Open databases like FAERS paired with the rise of generative AI have opened up a new era of automated paper generation by paper mills, posits Matt Spick, a lecturer in data analytics at the University of Surrey in England. In a July 9 preprint posted to medRxiv.org, he and colleagues identified five databases with anomalies in publishing rates that might indicate paper mill exploitation. Among them: FAERS.
That analysis built on an earlier study Spick did that showed a rapid rise in single-association studies published between 2021 and 2024 that drew on another open data source, the National Health and Nutrition Examination Survey, or NHANES. While the analysis cannot attribute the increase to paper mills specifically, it offers a case study of a strategy paper mills may use, the authors wrote in a paper published in May in PLOS Biology.
“Once NHANES went online, as a paper mill, you were no longer being slowed down by your ability to acquire data, or copy images. You could download as much data as you wanted,” Spick told Retraction Watch.
The flood of papers coming out of these open databases, and FAERS in particular, can waste research dollars if investigators launch studies to validate the signals these papers generate, Khouri said. And, importantly, these studies can also influence doctors and patients. “We know that when there are safety warnings disseminated in the literature, patients can stop the drugs,” he said. “Prescribers can be influenced by this kind of result, presenting a lot of adverse events for the drugs.”
Aside from the August retraction, only one other paper based on FAERS data has been retracted. The disproportionality analysis had appeared in BioMed Research International and was pulled as part of Wiley’s cleanup of suspected paper mill activity and manipulated peer review in Hindawi journals.
That statistic wasn’t surprising to Khouri. “It’s very difficult to retract these articles for fraud, because there is no fraud,” he said. “The results are nonsense, there is p-hacking and high risk of false results. It’s useless papers, but they are not fake,” he said, quickly adding: “Probably.”
Khouri’s next steps are to dig further into patterns across these papers to see whether he and his colleagues can identify common features. And Spick has been working on picking apart precisely how paper mills might use modern technology, including LLMs, to scrape open databases like FAERS and churn out papers at scale.
“It will be hard to force retractions for a lot of these, and then it becomes a whole philosophical thing for meta-scientists,” Spick said: “Should we allow all science to be published, even if it’s meaningless?”