Mega-journal Heliyon retracts hundreds of papers after internal audit 

Heliyon has published fewer papers and ramped up its retractions since a major indexing service put the journal on hold and the publisher launched an audit of all papers published in the journal since its launch in 2016.

Clarivate put Heliyon on hold in September 2024, citing concerns about the quality of the content. The “on-hold” status indicates a journal is being re-evaluated, and new content isn’t indexed, according to documentation on the Clarivate website. A spokesperson for Clarivate told us they couldn’t comment on specific journals, but said a journal must be both taken off hold and have its missing content backfilled by August 1 in order to receive an impact factor for that year. If a journal is still on hold and content hasn’t been backfilled, the journal will not receive an impact factor, the spokesperson said.

Heliyon published over 11,000 papers in 2023 and more than 17,000 in 2024, issuing around two dozen retractions in each year. Last year, the journal published 3,168 articles and retracted 392 others. 

So far this year, 37 papers have been retracted, and 144 have been published. The retraction notices cite reference issues, citation stacking, suspicious affiliations and areas of research, authorship changes, and ethical approval, among other issues.

The journal’s editorial director, Christopher Schulz, declined to comment about whether the uptick in retractions was a result of the journal being put on hold, as did the journal’s publishers, Cell Press and Elsevier. 

Queen Muse, the head of media and communications at Cell Press, pointed us to a statement the journal published last April. The statement says Elsevier performed an audit of the journal that “uncovered concerns regarding practices that do not align with our policies, such as citation manipulation, compromised peer review, authorship irregularities, and tortured phrases.”

As a result, Elsevier started an investigation involving “all published articles between the launch of the journal and the present day for integrity and ethics concerns,” the statement says. It also states the investigation is ongoing and includes “making improvements to the journal’s workflow to help prevent ethical issues in new submitted papers.”

Update, Feb. 4, 2026: The second paragraph was edited to clarify a journal’s eligibility for receiving an impact factor.


Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].



22 thoughts on “Mega-journal Heliyon retracts hundreds of papers after internal audit”

  1. Heliyon, like an increasingly large number of Elsevier journals, has no real academic editorial board. The board consists, for the most part, of internal scientific editors who are publisher employees. The incentive, of course, is toward accepting papers.
    A deeper dive into the spread of these in the Elsevier system would be interesting.

  2. I am the corresponding author of three Heliyon papers that were retracted despite our submission of clear documentary evidence, methodological explanations, and scientific proofs confirming the validity of the work.
    In all three cases, no data fabrication, plagiarism, or methodological flaws were identified. The analyses, results, and conclusions remain scientifically sound.
    One of the stated justifications for retraction was the use of “non-related references.” This rationale is deeply problematic. In industrial engineering, human factors, behavioral modeling, SEM, and applied analytics, methodological and theoretical references are inherently interdisciplinary. Assessing “relatedness” requires domain expertise in these fields.
    It is therefore particularly concerning that such a judgment was made by editorial leadership without formal training or specialization in industrial engineering or its methodological subfields. Declaring references “non-related” in this context reflects a fundamental misunderstanding of interdisciplinary engineering research, not a breach of publication ethics.
    Even more troubling, during correspondence the Editor-in-Chief questioned whether my papers were “related to my field.” In the broader scientific community, this is not a meaningful or acceptable criterion for retraction—especially when an author’s publication record clearly establishes expertise across these domains.
    Retraction should be reserved for proven unreliability or misconduct, not subjective editorial opinions about reference selection or disciplinary boundaries. Using retraction in this manner undermines due process and risks turning integrity audits into instruments of arbitrary gatekeeping.
    I fully support research integrity efforts, but integrity also demands evidence-based decisions, proportionality, and transparent acknowledgment of author rebuttals. I am willing to share full timelines, case IDs, correspondence, and evidence with Retraction Watch for independent review.

    1. Dear Yogi Tri Prasetyo,
      It seems you forgot to mention a few crucial details regarding your, most definitely warranted, retractions.
      First of all, paper nr 1 (https://doi.org/10.1016/j.heliyon.2022.e12538) got retracted because 6 (!) references were added during the proofreading stage (after review/editing). This is not done, and it is a correct reason to retract, especially given those papers were self-citations (including for you).
      Secondly, paper nr 2 (https://doi.org/10.1016/j.heliyon.2022.e11205), about COVID and tuition fees, got retracted because of completely irrelevant references (self-citations) such as ‘Consumer Preference Analysis on Attributes of Milk Tea: A Conjoint Analysis Approach’.
      Lastly, and this is the most shocking one and a rather important detail, paper 3 got retracted because one of the authors acted as a reviewer. The rest of the reasons are rather irrelevant if an author reviews his/her own paper: https://doi.org/10.1016/j.heliyon.2023.e13798

      1. Your reason for paper 1 is complete nonsense! I wonder if you ever did proofing or not! If adding/removing references were problematic, no such option would be available.

        It can be problematic only when someone intentionally adds irrelevant self-citations. Even in this case the journal should ask for a correction, not a retraction.

      2. Dear Jan,

        Paper 1:

        1. Justification for the Addition of References
        The references added during the production phase were included to enhance the methodological and theoretical rigor of the study. Each reference directly supports our research’s usability assessment models, technology acceptance theories, and analytical approach. Below is a detailed justification for each reference:
        Yuduang et al., 2022: This study utilized a hybrid SEM-artificial neural network (ANN) approach to assess mobile application usability. As our research integrates SEM with machine learning (Random Forest Classifier, RFC), this reference was essential to support our methodology. The integration of SEM with machine learning remains a novel approach, particularly in health-related technology adoption, making its inclusion highly relevant.
        Prasetyo and Soliman, 2021: This study evaluated the usability of ERP systems and emphasized the role of trust in technology adoption. Given that trust is a critical factor in user adoption of environmental monitoring applications like AirVisual, this reference supports our discussion on trust as a key determinant of usability perception.
        Ong et al., 2022: This study applied machine learning techniques (RFC and ANN) in usability assessment for COVID-19 contact-tracing applications. The methodological alignment with our study’s use of RFC justified its inclusion. Furthermore, the use of UTAUT2 as a framework in this research reinforces its theoretical applicability to our study.
        Gumasing et al., 2023: This study explored user segmentation using K-Means clustering and employed SEM to assess usability factors, aligning with our methodological approach. Since AirVisual is used by diverse user groups, this reference supports insights into user segmentation and behavioral analysis.
        Chuenyindee et al., 2022 (Learning Management System Usability): This study integrated the System Usability Scale (SUS) with UTAUT2, similar to our research, which evaluates usability through technology acceptance models. Since AirVisual is a digital platform, insights from this reference helped refine our research framework.
        Chuenyindee et al., 2022 (Thai Chana Contact-Tracing Application Usability): This study examined usability perceptions of a mobile health-related application using UTAUT2, similar to our study’s framework. Both AirVisual and Thai Chana support public awareness and decision-making (air quality vs. health safety), making this reference relevant.
        Duarte and Pinho, 2019: This study assessed mobile health adoption using UTAUT2, closely aligning with our research objectives. Additionally, the integration of machine learning algorithms to analyze human factors provides strong support for our methodological approach.
        Jun et al., 2019: This study combined Protection Motivation Theory (PMT) with UTAUT2 to assess risk perception in technology adoption. Given that AirVisual is used primarily by individuals concerned with air pollution risk, the application of PMT in risk-related technology adoption made this reference valuable.

        This is FUNNY if you do not think that those papers are irrelevant.

        Paper 2:

        Consumer Preference Analysis on Attributes of Milk Tea: A Conjoint Analysis Approach

        HEY. It’s a paper about conjoint analysis. ARE YOU STUPID or what?

        Paper 3:

        The papers were heavily checked by 3 other reviewers, and that one reviewer did not even send a comment regarding the paper because it was a mistake by the Editor-in-Chief (the previous one).

        Anyway, this journal is definitely DANGEROUS.

      3. Another explanation for Paper 2:

        This study is an applied conjoint analysis paper, a well-established method that our team has extensively utilized in past research. We have published more than 10 journal articles using this method, demonstrating our expertise and experience in this field. Some of our previous studies using conjoint analysis include:

        1. “Young adult preference analysis on the attributes of COVID-19 vaccine in the Philippines: A conjoint analysis approach” – Public Health in Practice (Elsevier), Volume 4, December 2022, 100300

        2. “Preference analysis on the online learning attributes among senior high school students during the COVID-19 pandemic: A conjoint analysis approach” – Evaluation and Program Planning (Elsevier), Volume 92, June 2022, 102100

        3. “The Evaluation of the Local Beer Industry during the COVID-19 Pandemic and Its Relationship with Open Innovation” – Journal of Open Innovation: Technology, Market, and Complexity, Volume 8, Issue 3, September 2022, 127

        1. Paper 1 you shared above (“Young adult preference analysis…”) includes this sentence:
          “Convenience sampling was utilized as the sampling criterion to ensure that the probability of being chosen is equal among the population.”
          Do you understand why this paper should probably be retracted too?

          1. That’s stupid and funny actually if that is a reason of retraction. How come you think you are smarter than reviewers who checked the paper? Silly.

  3. The included graph should be log-lin. I cannot accept the paper on account of the flawed data presentation.
    (PS: I couldn’t resist being the infamous anonymous reviewer number 2😂)

  4. I cannot accept your article for publication on account of the flawed representation of the included graph. The graph should be log-lin. Moreover, each graph should be labeled correctly and assigned a number.

  5. Fairly obviously, if a journal is published for profit, and the authors pay to be published, and there is no negative effect of publishing rubbish, then journals will end up publishing rubbish. The whole scientific community is to blame for allowing rubbish publications to count in the various metrics.

    1. Dropping their standards. For some academics it only matters that their paper is published in a peer reviewed journal.

    2. A few years ago, we submitted a manuscript to an Elsevier journal. Before any assessment or review, the editorial office (not the EiC) suggested we accept a transfer to Heliyon. We rejected the offer. Such mass transfers can triple a journal’s output in a year. Sort of cheating by Elsevier.

  6. Waste of time.
    Instead just remove their indexing from PubMed till they can properly vet papers they are accepting.
    Randomly withdrawing a few hundred papers when they are publishing tens of thousands barely makes a dent.
    Going back to retract papers for miscellaneous reasons is a genuine waste of resources for such paper mills. The best approach is to cut the losses, look ahead, and stop the cancer from propagating.
    Publish a thousand at most, but a quality thousand. What’s the need to publish 20k a year?

  7. Dr. Prasetyo, you write, “That’s stupid and funny actually if that is a reason of retraction. How come you think you are smarter than reviewers who checked the paper? Silly.”

    If the authors and peer reviewers are incapable of understanding that this sentence is utter nonsense, then yes, I claim to be smarter than them.

    I flagged this and other astonishingly bad content today on PubPeer, in several papers on which you are an author. I suggest you respond to those concerns more constructively, as I also alerted the journals to those concerns and others I did not post publicly.

    Best.

    1. According to established editorial and COPE guidelines, retraction is reserved for cases involving data fabrication, falsification, plagiarism, ethical violations, or manipulation of the publication process. None of these apply here. The data, analysis, results, and conclusions of the paper remain unchanged and valid. There is no evidence of intent to mislead, nor any distortion of empirical findings arising from this wording.

      The appropriate and proportionate response to such an issue is a corrigendum or erratum, which exists precisely to correct methodological descriptions, clarify terminology, or amend textual inaccuracies while preserving the scientific record. Retraction would be justified only if the methodological flaw invalidated the study’s results—which it does not.

      Incorrect phrasing ≠ scientific misconduct
      Methodological clarification ≠ data falsification
      Editorial correction ≠ retraction

      You should understand this “smarter”.

      Good luck!

      1. Ah, I understand your point.

        The problem here is that this is clear evidence that the authors made a fundamental error that they should not have made (they made the same error in at least one other paper), suggesting they do not understand statistics. This isn’t merely a typographical error… this undermines the later analysis in a way that was not discussed.

        Further, this fundamental error was not caught during editorial or peer review, suggesting that this process was terribly flawed.

        I view this not as the *reason* for the retraction, but as the “canary in the coal mine” that indicates that the paper should not have been published.

        I suspect after reviewing the paper, and the peer review process, the journal will find other even more material problems.

        1. I fully acknowledge that the sentence should have been phrased more accurately, and such imprecision would normally be addressed through revision or corrigendum, rather than being interpreted as undermining the validity of the results or warranting post-publication conclusions about author competence or peer-review integrity.

          For the rest, we remain confident that the substantive findings, data handling, and analytical procedures are methodologically sound and transparently reported.

          While the paper has now been formally retracted, I firmly disagree with the stated rationale for this decision. The retraction was not based on deficiencies in the study’s results, statistical analyses, or the comprehensiveness of the discussion, all of which were extensively evaluated, revised, and approved through multiple rounds of peer review and editorial assessment by the original reviewers and handling editors. No methodological, analytical, or ethical errors were identified that would compromise the validity of the findings. Instead, the decision relied narrowly on assertions regarding non-related references, a criterion that is neither sufficient nor proportionate to justify retraction under established scholarly publishing standards.

          Framing such a correctable and non-substantive issue as grounds for retraction disregards the documented rigor of the review process and misrepresents the scientific integrity of the work.

        2. It is also important to contextualize this research. Data collection was conducted during the COVID-19 pandemic, when mobility restrictions, lockdowns, health risks, and public anxiety were at their peak. Obtaining 865 valid respondents under those conditions was extremely challenging. Recruiting such a large sample size during a global health crisis required substantial effort, coordination, and ethical consideration. The dataset itself reflects serious empirical work, not negligence.
          However, pointing to one wording inconsistency and concluding that “no one affiliated with the journal or publisher read the paper” is a disproportionate leap. The manuscript underwent peer review prior to publication. A minor drafting error does not invalidate the conjoint analysis design, orthogonal experimental structure, or the robustness of the preference estimates derived from the data.
          Constructive academic critique is always welcome. What is concerning is the tendency to amplify small textual imperfections into claims of systemic failure or grounds for retraction, especially under anonymous commentary. Retractions are reserved for fundamental methodological flaws, data fabrication, or ethical violations, not for imprecise phrasing that can be corrected through standard editorial processes.
          If the concern is clarity, a correction is appropriate. But portraying this as evidence that the paper should not have passed peer review is neither balanced nor academically responsible.

  8. It’s a complex and frustrating issue, but it’s encouraging to see it being discussed openly in forums like Retraction Watch. The more light that is shed on these systemic pressures and failures, the greater the chance for meaningful change. The ultimate goal should be a research environment that values and rewards genuine contribution and integrity, allowing scientists to focus on doing good science, not just on producing more papers.
