Article that assessed MDPI journals as “predatory” retracted and replaced

A 2021 article that found journals from the open-access publisher MDPI had characteristics of predatory journals has been retracted and replaced with a version that softens its conclusions about the company. MDPI is still not satisfied, however. 

The article, “Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI),” was published in Research Evaluation. It has been cited 20 times, according to Clarivate’s Web of Science. 

María de los Ángeles Oviedo García, a professor of business administration and marketing at the University of Seville in Spain, and the paper’s sole author, analyzed 53 MDPI journals that were included in Clarivate’s 2018 Journal Citation Reports. 

Oviedo García assessed each journal by eight criteria associated with predatory publications, including self-citation. She also compared the MDPI journals to the journals with the highest impact factor in their subject category. The original abstract described her findings like this: 

The formal criteria together with the analysis of the citation patterns of the 53 journals under analysis all singled them out as predatory journals. 

Soon after the paper was published in July 2021, MDPI issued a “comment” about the article that responded to Oviedo García’s analysis point by point. The comment called out “the misrepresentation of MDPI, as well as concerns around the accuracy of the data and validity of the research methodology.”

In September 2021, Research Evaluation published an expression of concern about the article that stated: 

The journal and publisher have been alerted to concerns about this article, and an investigation is in progress. In the interim, we alert readers that these concerns have been raised.

The article was retracted and replaced with a revised version earlier this month. The notice, which is labeled as a “correction,” stated that the replacement addressed:

concerns about conclusions drawn in the article. The conclusions in the updated article are reached based on cited sources.

We asked Oviedo García what led to the retraction and replacement, and she told us: 

In a nutshell, after the publication of the article both the journal’s editors and the publisher received communications raising concerns about it. Then, that original version was revised and have been now published replacing the old version of the article. 

Oviedo García told us that she did not “have full details” about who raised concerns about the article. “The revision of the article was a joint work between the publisher and myself,” she said. 

Thed van Leeuwen, a senior researcher at the Centre for Science and Technology Studies of Leiden University in the Netherlands, and an editor of Research Evaluation, has not responded to our request for comment. 

Language throughout the article was changed to describe the findings less definitively. (See a comparison we created here.) The sentence in the abstract we quoted above now states that the analysis of the 53 journals “suggest[s] they may be predatory journals.” 

Critical language remains in the new version, such as this discussion of the MDPI journals’ huge and increasing number of special issues:  

The fact that the number of special issues in JCR-indexed MDPI-journals is so much higher than the number of ordinary issues per year coupled with their constant increase since 2018 inevitably awakens suspicions of a lucrative business aim. 

The revision removes some references to MDPI’s temporary inclusion on librarian Jeffrey Beall’s list of “potentially predatory” publishers, along with links to an archived version of the list, which was taken down in 2017.  

The revised version also includes additional caveats about the limitations of the work, such as which analyses Oviedo García did not conduct on the control group of top-ranked journals. 

For instance, rather than the original statement that the uniformly short review times at MDPI journals were “highly questionable,” the paper now states:

As such the question arises whether or not this speed is achieved with a thorough peer review in line with editorial and publishing best practices or if the rigor and quality of the peer review process is compromised in order to achieve these speeds. It is beyond the scope of this research to answer that question based on the analysis conducted, further research is needed to address this key question. 

A new paragraph in the section discussing the article’s limitations calls for further research to compare MDPI journals with other journals with similar impact factors, rather than the top-ranked journal in the subject area, as Oviedo García had done. MDPI’s comment on the article specifically called the comparison of its journals to those with the highest impact factors “flawed,” among many other critiques.

After Research Evaluation published the expression of concern about the paper, MDPI contacted Oxford University Press, the journal’s publisher, to follow up on the status of the article multiple times, said Giulia Stefenelli, chair of MDPI’s Board of Scientific Officers. But, Stefenelli told us, “We did not receive any response from OUP that showed progress on the handling of this paper, nor did we receive an update when this paper was retracted and replaced by a revised version.”

Stefenelli expressed dissatisfaction with the journal's process and with the republished version of the paper:

We were expecting more transparency and communication, and to be informed at key stages of the progression of the investigation. As we have demonstrated, the original article contained serious flaws in its methodology, which have yet to be addressed or corrected. This was highlighted in our initial comment on the article (https://www.mdpi.com/about/announcements/2979).

We would expect more details to be made publicly available for the readers so they can understand why this article was retracted and corrected. We found no track record of the original version retracted nor record about the corrections that have been made. In this case, we question OUP’s processing of this retraction/correction, as it would appear to be against COPE guidelines and standard practices.

The article sends a strong message affecting MDPI directly and ultimately begs the question, is this an article the public can trust? It is important to note that a retraction is issued when there is clear evidence that the findings are unreliable, as a result of major error. The fact that this article was retracted raises questions about the details of the significant changes made in order for it to be republished.


56 thoughts on “Article that assessed MDPI journals as ‘predatory’ retracted and replaced”

  1. “We did not receive any response from [the publisher] that showed progress on the handling of this paper, nor did we receive an update when this paper was retracted and replaced by a revised version.”

    Funny. That mirrors my experience with MDPI when I reported plagiarism in one of their journals.

    1. This is not right, because rejection should be based on quality, not on the publisher.

  2. I believe (or at least hope) that journals of high standards have referees who recommend submitted manuscripts for rejection whenever they contain references to articles in MDPI journals.
    That way, at least the damage remains somewhat contained.

    1. How is it even scientific to reject a paper based not on its quality but on its publisher?! Not even on the authors, or the editors, or at least the journal. But the publisher!

      Yes, MDPI has many, many problems, but that doesn't mean all its articles are uniformly garbage. Otherwise, JCR or PubMed would not list many of its journals.

      Each paper must be evaluated separately, regardless of its authors, editors, reviewers, journal, or publisher.

      1. That’s all well and good, but the fact remains that I, as a researcher, don’t have time to investigate in depth the data sources and integrity of every article I read and cite. I must as a matter of necessity rely on heuristics, and one of those heuristics is editorial reputation. MDPI journals have been found – by this very site, I might add – to cut corners when it comes to verifying data integrity and safeguarding the quality of the peer review process. That makes it difficult to give MDPI journals the benefit of the doubt when the merits of my own work will be judged, in part at least, according to whether I am citing trustworthy research.

        1. I really don't think your characterization of all MDPI journals is fair. I'm a basic scientist in the biomedical field, and I have both published in MDPI and reviewed for MDPI. I review and publish elsewhere as well, but some of my most compelling scientific works have been in MDPI. I do think it is fair to say that the solicitations for articles and the exorbitant number of special issues are extremely problematic. However, as a scientist well versed in my field and very familiar with numerous biomedical techniques, I can tell you that I am an extremely diligent reviewer and reject more articles than I accept with major revisions, not on purpose, but based on the soundness of the conclusions given the methodology and results. The editors of the journals are very receptive to my concerns, and the articles I recommend rejecting are almost always rejected. As the previous commenter stated, merit rests with the article and sometimes the journal, but boycotting a whole publisher does not have merit when there are careful scientists doing good science and reviewing with care. I also personally don't find the publishing speed problematic: I don't need six weeks to review an article and revisit it constantly. That merely breaks up my train of thought; if I have one week to do it, I will find a couple of hours to scrutinize the paper and make comments.

          1. I completely agree with you. When I receive a paper for review, and if I agree to review it, it does not take me more than two days to review the article in detail and check for plagiarism and self-citation, as well as the relevance of the cited references. After all, those who don't have time can decline the offer to review. As a scientist, I find really disgusting the practice of some journals, even ones on the Web of Science Master Journal List, of keeping manuscripts for several months without any response to the authors; I have personally experienced such a situation.

    2. That is not the answer, I am afraid. When I use a source to support my work, I cite it, reputable or otherwise. Not doing so, just because the source is MDPI, would be dishonest.

      1. This is of course true. The question is rather: should a researcher rely on MDPI journals as sources? If you do, you should cite these sources, but it makes your own work look less credible.

    3. I seriously hope that people who would referee an article based on listed references are not selected for this task, especially by journals of high standards. Academic snobbery.

    4. high standards journal …everywhere the first thing is checking the names of the authors, if they are friends or famous.
      Pubblishing is about being part of a community and doing things accepted by that community. I did reviews on top mathematical physics/probability journals, the first and most important criteria was “the author level”. There some fields so small that you don’t want to reject paper of some guys.
      For how it works the system if you don’t start to pubblish papers with important guys you will not do your own way. In your good journals, I pubblished things that were total rubbish just because my advisor was friends of the editors. While after a rejection from PRX, I pubblished my best paper (both for math and physics idea) on Entropy (MDPI). The rejection of PRX was “the paper looks mathematically correct but it does not have enough impact.”. Two weeks after they pubblished a paper proposing to explain the emergence of consciusness from quantum gravity ! I would like to ask a neuroscientist about the impact of this… the real point is that to pubblish in this new top APS journal (claming to pubblish proposal for new paradigmas…) you have to do or Quantum Gravity or String Nets or Quantum information.. different idea are killed.
      Personally I felt that the review with Entropy (MDPI) was too fast and one referee did not go trough the technical part. But this happens really everywhere. Literally nowhere all computations are checked in details, because reviewers don’t have time and this one of the reason you have to be known to pubblish in your high standards journals!
      Do you really thinks that all Nature and Science papers are good, there are silly things also there and obscure data. Like the one about observation heat transfer in the vacumm because of Casimir effect, none can observe Casimir effect ! Or factorazing 15 ! (WOW!) but on a quantum computer it is interesting …)
      Anyway in Entropy (MDPI) there are papers which are regularly well cited in Nature and Science and are a reference in the field, see for example ” Quantum Thermodynamics: A Dynamical Viewpoint” from Ronnie Kosslof.
      I think my review in Entropy (MDPI) was too fast , but it is good having a place where to pubblish something that is not standard.
      Everthing one day will be replaced by other theory (apart Relativity Theory 🙂 ). The fact of something being accepted depends on what people think in that moment. Think about all these prizes and top papers on String Theory and what probably they will be one in the future ….. ? ( Me too I worked on some models of Stat. Mech. accepted in top journal, but they are totally stupid but accepted nowdays.. in future they will be nothing.)
      Often pubblications in some field go on just because the business became too big too fail..

      1. When you stick double b’s in every “pubblish,” [sic] you make me dizzy. One wonders how someone with those spelling abilities is even able to publish at any semi-serious journal, Entropy included. I am not a native speaker either, but this diatribe was hard to read…

        Besides, the whole “big name” argument is silly. It happens, yes, but if your work is (really) good, a (really) good journal will take it. Maybe not pseudo-journals and magazines like Science or Nature, but a good scientific journal (one with double-blind review, owned by a nonprofit professional association, and indexed in the WoS Core Collection) will take your properly done scientific work. No need to go the MDPI route if you are capable of producing good science, and are actually capable of communicating that science in the language of the journal.

  3. I think, though I may be wrong, that the lack of transparency in the investigation and the unclear reasons for retracting and republishing without informing MDPI raise the concern that either the journal in which the paper was published pressured the author to satisfy the publisher, or that some kind of mutual business between publishers, at the expense of science, led to this scenario. Evidence-based investigation and quality-control measures between the publishers on the issues raised might resolve this ambiguity.

    1. The third possibility is that perhaps they were threatened with a lawsuit and didn't want the headache.

      The fourth possibility is that perhaps, after they were notified (or threatened), they checked the original paper again and agreed that it was *really* biased and much more offensive than it should have been. So they asked the author to tone it down.

      1. If the third possibility is correct, it shows real spinelessness on the part of OUP.
        OUP is a major publisher, so it should have the financial capacity and insurance to withstand a lawsuit. It is also based in the UK, where peer-reviewed publications have had a specific defence in law since about 2013.
        So if having money and the law on their side is not enough to stop them from caving immediately to MDPI, I question whether they are an appropriate organisation to steward any scientific literature.

        1. Maybe the fourth possibility makes more sense: the paper might have been really biased and deserved to be retracted and toned down.

          1. I am still inclined towards the third possibility. I suspect the threat of a lawsuit might have overwhelmed both the author and the publisher and forced them to opt for retraction of the article.
            Otherwise, MDPI's policy of making authors' names known to reviewers (with the option for reviewers to reveal their identity as well, if they choose) is, in my opinion, totally inappropriate. Higher-quality publishers operate solely with double-blind peer review.

  4. I have personally both published in MDPI journals and reviewed for them. I find no issues with their review process. Yes, by default they ask all reviewers to submit a review within a week, but if one writes to the editor to ask for more time, it is always allowed. I could easily do a thorough review within a week.

    The reviews I received for my own papers were also very thorough. And instead of needlessly lengthy processing times, you can turn around a good-quality paper in a month, which is much more helpful than having papers go around for months and then be rejected without any peer review at all, or rejected despite positive reviews (in journals of a similar level).

    They do have a lot of special issues, but these work slightly differently from normal special issues. They are part of normal issues and more like special topics, which actually helps organise papers and find the right journal for your paper.

    1. My experience with MDPI is entirely negative. I'm from Russia and have worked for almost 20 years in ultrasonic NDT. In December, I was invited to submit a manuscript, free of charge, to a special issue of Applied Sciences on nondestructive testing of composites. Almost two months after submitting the manuscript, I received two positive reviews and one review demanding that our manuscript be rejected. MDPI allowed me to reply to the two positive reviews, and after that they sent me one more review (round 2), in which I was asked to write a detailed lecture on the basics of NDT of composites and the laser-ultrasonic method used, to explain practically every sentence in the article, to change every picture, and so on, with comments like “I do not understand why you use this technique or that equipment; can you clarify this, and can you clarify that…”. Not one question or remark concerned the scientific essence of the article. This means that the editorial and reviewer board of this journal is entirely incompetent, and the offer of free publication means that I must teach their reviewers and editors for free. In other words, they tried to steal my knowledge and experience. In addition, they stated that my English is very poor, although I worked as a professional translator for over five years, and practically all the editors of this journal, including the guest editor of the issue, are not native English speakers. Even two of their reviews were written with grammar mistakes and typos. So I think it was an attempt to force me to pay for their English editing service.

  5. One way for MDPI to rebut the claims made in the original article would be to publish the metadata relevant to the review process at the 53 journals. One can see more clearly once the smoke is gone. It would be useful to know how many reviews were received for each article, how long the process took, how many revisions were submitted, and what the times to first decision were.

  6. I have published in and reviewed for MDPI journals many times, and I must say the experience is a mixed bag. I have received very thorough and critical reviews for some of my papers, sometimes needlessly so, and I even had to painstakingly revise one paper through three rounds before it was accepted. On the other hand, it was quite apparent that for a few of my papers the reviewers had no expertise in the subject and would sometimes make rather irrelevant comments. I have also received unreasonable reviews demanding I change my narrative review into a systematic review, and my paper ended up getting rejected because I tried to explain to the reviewers that it was a narrative review.
    Having also edited, reviewed, and published in PLOS ONE and Frontiers journals, I must say my overall experience with MDPI is probably more positive in comparison. My latest submission to Frontiers was reviewed by three reviewers who provided rather superficial comments (one just suggested three references for me to cite, and another just said “good”). As a reviewer for Frontiers, I get removed from the reviewing process if I recommend rejection, and this is bewildering to me.
    For PLOS ONE, I often have to invite more than 50 reviewers to find even a single willing one. The quality of the reviews turned in can also be exceedingly poor, and this ends up delaying the whole process. I have seen papers stuck in review for a year or longer, which is absolutely unfair to the authors, as the paper may end up getting rejected in the end for lack of relevance or an outdated search strategy. One of my own submissions to PLOS ONE was stuck in review for more than six months, and we eventually requested to withdraw it and publish it elsewhere (which we did, in a subscription-based Elsevier title).
    In my experience, well-ranked Wiley and Elsevier journals have made decisions on my manuscripts based on a single reviewer report (be it positive or negative). The reviewing process can be pretty “painless” compared to MDPI, and I also have papers that were accepted within a month of submission to these subscription-based journals that are supposedly more reputable.
    To me, the crux of the issue is really whether editors and reviewers are doing their jobs and performing their gatekeeping function. It is difficult to find good editors and reviewers, since this is pro bono work and energy and resources are finite. Many invited reviewers and colleagues have told me that they will no longer review because they are simply fatigued by work demands and the surge in review requests (for PLOS ONE, for example, many reviewers have told me that they find most submissions poor and that more should have been rejected at first cut rather than sent out for review, or that they receive more than one review request a day and so have to decline them all).

    1. I fully agree with your sentiments. I am also a reviewer for MDPI journals (IJMS, Nutrients, Molecules, Medicina, Antioxidants, Biomolecules) and have published in MDPI. I think the quality of the review and the quality of the articles published come down to the editors and reviewers, and nothing more.

  7. Please apply the same scrutiny to Elsevier journals, on the grounds of equal treatment in publishing for every country. I have seen similarly themed papers in a journal, and still its editor said the manuscript was outside its aims and scope; this is happening to many people. I can accept being told that a paper is not up to the mark for a journal, but instead they give reasons like “too many papers in the pipeline,” “we are selective,” and “out of aims and scope.”

    1. In the last five years I have published seven articles in Elsevier journals. I have had negative reviews, but never such inane reviews as in MDPI's Applied Sciences. For example, a reviewer of my article on nondestructive testing of polymer composites did not know what the abbreviation “CFRP” means. It was not even sad, but funny. An absolutely useless journal and a waste of my time.

      1. It took me a fraction of a second to tap the acronym on my phone, swipe, and see the exact definition. It would have taken zero effort to (at the very least) “pretend” to be knowledgeable enough in the subject matter to have the minimal competency necessary to review it, and the reviewer wouldn't even go that far? Not a promising anecdote, if accurate.

    2. Predatory? What does it mean? The predators are all these big companies, such as Elsevier, Wiley, Springer, that receive fortunes from authors and institutions for publishing in their journals while not paying a single dollar to the poor reviewers. This business must end. Scientists are silly.

  8. Since many people criticize MDPI for publishing articles after a low-quality review process for the sake of speedy decisions, the solution could be to:
    1. Announce the reviewers' names on each publication, as Frontiers journals do.
    2. Publish the review reports, as BMC journals do.

    1. Both are already options at MDPI; each depends on authorization from the reviewers and the authors.

    2. Along with the published paper, MDPI reveals the reviewers' names by default, unless they choose to remain anonymous.

    3. Yes, and perhaps MDPI could pay good reviewers for their work and charge others to do it in exchange for a certificate.

      1. Nice suggestion. MDPI gives all reviewers a 100-franc discount per review. Of course, it is not real money, but it can help if you want to publish an article with them. I have many discount vouchers that could make one of my articles free, or at least very affordable (although I haven't used them yet). And the certificate is obtainable for all reviewers through the Publons branch of Web of Science.

  9. For sure, MDPI is not perfect, just like other publishers such as Elsevier and Springer. I personally read the article in question when it was published, and I had a feeling it would bring problems one day.
    To me, publishing such an article was not ethical, because it read more like a direct attack.

  10. I believe that the fast-responding MDPI journals give researchers one way to publish their high-quality work. However, the fees are too high, which is unethical and affects the global economy.

    1. I agree with this. It's important to remind everyone that, at the same time, The Lancet's eClinicalMedicine asks reviewers to return their reports in a week and charges authors a whopping 5,000 USD 😶

  11. It is somewhat contradictory for an individual to assert that MDPI fees are excessively high. Such individuals should consider the actions of esteemed publishers who have recently introduced a rapid publication process, with a turnaround time of three weeks, while requiring authors to pay approximately $7,000. I hold a dissenting view from the prevailing belief that a thorough review process must necessarily be time-consuming. In my most recent publication with MDPI, the paper underwent a meticulous examination by three consecutive reviewers. Every section, from introduction to conclusion, underwent rigorous scrutiny, leaving no paragraph untouched. Remarkably, this comprehensive review was completed within a few weeks. On the other hand, I have also submitted manuscripts to other so-called “reputable journals,” where they languished for approximately seven months due to the failure of the second reviewer to submit their report. Based on my experience with MDPI, it is evident that a thorough peer review can be efficiently conducted within a short timeframe, while a peer review lasting several months may still fall short of expected standards. Therefore, I propose conducting a comparative analysis of review reports between MDPI and other publishers of similar standing. Criticizing a publisher without substantiating data is an unscientific approach that should be avoided.

  12. Hmmm… and what about the reviewers of the retracted paper? Did they not see the potential bias or the methodological issues that led to the retraction?
    I have mixed feelings about this. I had a bad experience with a “reputable” journal in which a publication directly attacking the work of others appeared. When we confronted the editors with the article's mistakes (it literally stated the opposite of some of the papers it analyzed), they almost didn't allow us to reply. But in the back of my mind I was thinking: who reviewed this in the first place?
    So… even reputable journals have bad articles and reviews!!!
    Like the speed of MDPI… hate the cost.

  13. MDPI is effectively predatory; the quibble is merely over the definition of predatory. Maybe MDPI is not *as* predatory as acknowledged predatory publishers and journals, but in addition to the increasing instances of complete rubbish being published, and my own experiences with their dodgy reviewing procedures (as a reviewer), they now appear to be artificially elevating the h-indices of people who act as “guest editors.” I looked up one “guest editor” who solicited yet another article for a special issue, saw an h-index of 28, and figured, OK, maybe this is a proper scientist. Then I saw that one of their very recent publications had been cited 50+ times, and it was merely a one-page commentary, with more than 90% of the citations coming from MDPI journals. Another two of their papers, just from the last two years, that were cited over 100 times also drew more than 90% of their citations from MDPI journals. From first-hand knowledge, other “reviewers” suggest that authors include MDPI papers in their revisions, showing how easy it is for MDPI to give some of its guest editors a veneer of respectability.

    1. It is technically impossible, and also irrelevant, to falsely inflate guest editors' reputations. Here is why it is not possible: (1) Falsely boosting 3 articles of a guest editor does not give him any veneer of respectability. Three papers, each cited one million times, will increase the editor's h-index by no more than 3 points (see the sketch at the end of this comment)! Without those 3 articles, his h-index would be 25, which is not bad compared to 28!
      (2) Not to mention that, technically speaking, it is not even possible to boost someone's h-index on such short notice. When someone is appointed as a guest editor, the special issue begins shortly afterwards (a couple of days later). There is no time for MDPI to do that veneering.
      (3) The 3 example articles you mentioned were published long before that person became a guest editor at MDPI. So the papers, along with their citations, have nothing to do with any conspiracy on MDPI's part.
      And here is why it is irrelevant: (4) MDPI does NOT need to veneer its guest editors. It doesn't even care. There are many rookies on the list of MDPI's guest editors, and everybody already knows this.
      (5) I can't find any h-index mentioned for any of the MDPI guest editors I am checking. Are you sure MDPI even boasts about its guest editors' h-indices? Perhaps this is the case for some journals, but not all.
      (6) I have reviewed a lot for MDPI. Never ever have I told any author to cite papers by MDPI, by myself, or by any editor, nor has any other reviewer I am aware of done so. Of course, some reviewers may go down the “dark” path of “coercive citation.” But they mostly coerce authors into citing their *own* papers, not MDPI's or its editors' papers. So it is very strange to think that reviewers would care about veneering the guest editor and ask authors to cite the guest editor's papers.
      (7) In my experience as an active MDPI reviewer, I have never seen MDPI encourage authors to cite its papers. Maybe some editors do so secretly, but MDPI is not to blame unless it knowingly covers up for them. And that will not happen, because if a publishing house knowingly engages in “coercive citation,” the big bosses (Clarivate, Medline, and Scopus) will expel it in an instant. So would MDPI risk exclusion just to veneer a guest editor? (Not to mention, again, that this is not even technically possible.)
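
      A quick sketch of the arithmetic behind point (1), with hypothetical citation counts: the h-index is the largest h such that h of one's papers have at least h citations each, so adding 3 papers can raise it by at most 3, no matter how heavily they are cited.

      ```python
      def h_index(citations):
          """Return the largest h such that h papers have >= h citations each."""
          counts = sorted(citations, reverse=True)
          h = 0
          for rank, cites in enumerate(counts, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      # Hypothetical record: 25 papers cited 30 times each, 10 cited twice.
      base = [30] * 25 + [2] * 10
      print(h_index(base))      # 25

      # Adding 3 papers cited a million times each lifts the index by only 3.
      boosted = base + [1_000_000] * 3
      print(h_index(boosted))   # 28
      ```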

      1. Since you need it spelt out, and didn't realise I was merely pointing out a few examples:
        The author I referred to has TWELVE papers published in MDPI journals in the last THREE years that have been cited over 20 times, with the VAST majority (>90%) of those citations coming from MDPI journals. Without MDPI, this person's h-index would be 16.
        Just because you have “never ever” seen them request that MDPI citations be added after review, I personally have. In fact, one “reviewer” suggested 4 MDPI papers that were NOT EVEN RELEVANT to the subject matter of the paper under review.
        Let me guess: you publish a lot in MDPI?

        1. Well, that particular reviewer has definitely practiced “coercive citation,” which makes his or her review moot and even misleading (because of the serious conflict of interest that comes with coercive citation). But that has nothing to do with MDPI being bad or MDPI trying to veneer its editors; I already addressed this argument of yours in my point (7) above. So please read carefully and try to be logical.
          And as for your “guess,” accusing me of having a stake in MDPI: I have not published even one article with MDPI. I have reviewed for them a lot, though.
          NOTE: I am not saying MDPI is bad or good. They have many, many REAL drawbacks. All I am saying is that *your* critique (that they try to inflate their guest editors' h-indices) is entirely irrelevant and impossible on so many levels.
          It shows that you are not very familiar with how the h-index works, or with how journals work. If you want to criticize MDPI, at least familiarize yourself with journals' inner workings and with concepts like the h-index, and then logically and correctly criticize their REAL drawbacks (which are many).

        2. Hmmm, I've reviewed for MDPI many times and have also served as a guest editor once. I can attest that I have not experienced any “artificial” inflation of my reputation or h-index from my participation as a guest editor.

          To your other point, coercive citation happens everywhere; I have personally dealt with a Q1 psychiatry journal (an Elsevier title) whose editor-in-chief explicitly told authors (it happened with both of my submissions to the journal) to cite his editorial and papers in the journal before acceptance could be advised.

          1. Well said! It is irrelevant and ridiculous by all means to even attempt to inflate the h-index of a guest editor.

            Regarding coercive citation: sure, it is practiced a lot, but it is still misconduct and abuse. The fact that even Elsevier editors practice it shows that we have a lot of work to do!

    2. I would not say MDPI is completely predatory. I would say they pay some authors to write good papers and rank their journals, while charging many others for publishing crap. Overall, the journal looks good because of the legitimate papers, while also including a bulk of predatory articles (or crap articles published with predatory practices) to the point that they do not affect the calculation of the indicators. Their strategy is almost perfect, considering that the prestigious authors they pay are going to vouch for the journal (in the end, their papers are good). Smart, really smart…

      1. I am not aware of anyone who has been paid by MDPI to write articles for MDPI. Do you know of any such examples?
        MDPI may waive the fee for some very good articles (100% free), but that's it. Paying the authors? I haven't heard of that being practiced by any publisher, including MDPI. (MDPI does give a 100-franc discount for each review.)

  14. All publishers are predatory, leeching honest taxpayers' money and exploiting scientists to do the donkey work. Their only motive is to make billions in profits.

    Let's boycott publishing in scientific journals and instead promote more innovative modes of scientific dissemination, like Wellcome Open Research.

  15. All private publishers are parasites, as they exploit the work of scientists (authors and reviewers alike) for profit. The switch to the open-access paradigm (also promoted by our bureaucrats) seems just another way to counter the profit lost to peer-to-peer sharing (duh! if I need a paper behind a paywall, I simply send a friendly message to my colleagues; that is, if the paper is not unlocked by a certain service).

    Predatory journals took one extra step by removing quality control (you pay, you are published, whatever crap you produced).

    MDPI sits simply in between the two models. They simulate the review system, but in practice they send the manuscript to 4-5 reviewers simultaneously, then pressure the reviewers to submit a report fast (through guilt: “the other reviewers have already given their reports!”). As soon as two reviews are in, they throw all the efforts of the “late” reviewers into the trash bin (“we don't need you anymore”).
    I experienced this twice; I won't experience it again, since the editor is on my personal blacklist now.

    I suggest authors consider this angle: submitting a paper to MDPI is a mark of disrespect for your colleagues. It also suggests that you would provide botched reviews yourself to keep up with MDPI's requirements (or would simply not accept what you are asking from others…).

  16. The work is inconclusive, because it has no “control group.” They should have compared the MDPI publications with Elsevier or Wiley publications, or with Royal Society of London publications.

  17. A quote from my email (3 November 2023), sent to the editorial office in my capacity as a reviewer: “Obviously, the authors of this manuscript have not addressed the concerns and I recommend rejection of the paper.” The manuscript was then published on 13 November 2023:

    Kot I, Lisecka M, Kmieć K, Golan K, Górska-Drabik E, Kiljanek T, Zimowska B, Skwaryło-Bednarz B. Visitation of Apis mellifera L. in Runner Bean (Phaseolus coccineus L.) and Its Exposure to Seasonal Agrochemicals in Agroecosystems. Agriculture. 2023; 13(11):2138. https://doi.org/10.3390/agriculture13112138

  18. When I see that an article is published in MDPI now, I just ignore it, unless there is something potentially useful enough in it to make a closer look worth the time. Same with Frontiers. They're basically journals that publish preprints.
