A rating system for retractions? How various journals stack up

Here at Retraction Watch, we judge retraction notices every day. We even have a category called “unhelpful retraction notices.”

But we haven’t systematically analyzed those notices, so lucky for us, a group of academics at Vanderbilt decided to. In a new paper published in a special issue of Publications — an issue whose editor, Grant Steen, put out a call for papers here on Retraction Watch — Emma Bilbrey, Natalie O’Dell, and Jonathan Creamer explain:

This study developed a novel rubric for rating and standardizing the quality of retraction notices, and used it to assess the retraction notices of 171 retracted articles from 15 journals. Results suggest the rubric to be a robust, if preliminary, tool. Analysis of the retraction notices suggest that their quality has not improved over the last 50 years, that it varies both between and within journals, and that it is dependent on the field of science, the author of the retraction notice, and the reason for retraction. These results indicate a lack of uniformity in the retraction policies of individual journals and throughout the scientific literature. The rubric presented in this study could be adopted by journals to help standardize the writing of retraction notices.

(Disclosure: One of us (IO) reviewed this paper for the journal.)

Here’s the rubric:

0—No reason for retraction can be discerned from the notice.

1—The reason for retraction can be inferred but is not stated clearly through the naming or definition of a category.

2—The reason for retraction is clearly stated, but explanation is not given as to how the rest of the article was affected by retraction.

3—The reason for retraction is clearly stated and explanation is given for if and how the entirety of the article was affected by the fault.

After the initial separate ratings, the three authors reviewed the retractions with disputed scores. This allowed discussion of exceptions to the rubric, produced a consensus rating of all retraction notices, and increased the clarity of the rubric.

The authors applied this rubric to 15 different journals in five different areas, matching a high-impact title with at least one low-impact title in “biological chemistry, cellular, medical, multidisciplinary, and physics.” They found:

For lower impact journals 10.5% rated 0, 2.6% rated 1, 71.1% rated 2 and 15.8% rated 3. In higher impact journals, 28.7% rated 0, 4.6% rated 1, 34.3% rated 2, 32.4% rated 3.


Science and Cell were the only journals whose mean retraction notice ratings were significantly above the grand mean (p < 0.05), while both Annals of the New York Academy of Sciences and Journal of Biological Chemistry were significantly below (p < 0.05). When comparing high impact journals, Science, Cell, New England Journal of Medicine and Biochemical Biophysical Research Communications had significantly higher ratings than Journal of Biological Chemistry (p < 0.05). Annals of the New York Academy of Sciences was the only journal to differ within the low impact journals (p < 0.05), though the general lack of variation between these journals may well be due to less of retractions to compare.

The Journal of Biological Chemistry — which makes frequent appearances here on Retraction Watch — got a special mention, both for good and bad reasons:

Most journals were not consistent in the quality of their retraction notices. Journal of Biological Chemistry was an exception, but their notices were consistent in their lack of information. Journal of Biological Chemistry frequently publishes notices, stating: “This article has been retracted by the Publisher.” Journal of Biological Chemistry also contributed to why there was not a difference in notice ratings for higher and lower impact journals. Having said this, the publisher of Journal of Biological Chemistry has since hired a manager of publication ethics to help oversee the writing of retraction notices [19], and initial results are promising [20]. Though beyond the scope of this study, it would be interesting to compare notice ratings before and after this appointment.

Other future directions, they write:

…could include looking at changes in notice quality over time for specific journals, comparing the notice quality for different types of error and fraud, and comparing the current rubric with a direct assessment using the COPE guidelines of the same retraction notices.

At the end of the paper, they cite a piece we published in The Scientist (and yes, the passage and citation were in the manuscript before we reviewed it):

A 2012 opinion article in The Scientist by the authors of Retraction Watch called for the creation of a “transparency index” to rate how transparent journals are about the accuracy of articles [21]. They suggest it be formed using numerous criteria, including the journal’s use of preventative measures like plagiarism-checking software and “whether corrections and retraction notices are as clear as possible, conforming to accepted publishing ethics guidelines such as those from COPE or the International Committee of Medical Journal Editors.” The current attempt to quantify the quality of retraction notices could help in creating such an index.


14 thoughts on “A rating system for retractions? How various journals stack up”

  1. Am I just misreading, or are they seriously ranking BBRC among the “high impact” journals? That must be a mistake, considering that the supposed “low impact” journal ANYAS has a mere 0.1 point lower impact factor than BBRC.

    1. This does appear to be a mistake. From the paper: “At least one low impact journal was matched to the high impact journal in a scientific field to assure multiple retraction notices from both high and low impact journals in that field (Table 1)”. JBC and BBRC are the only journals listed under “biological chemistry” in Table 1, and yet: “When comparing high impact journals, Science, Cell, New England Journal of Medicine and Biochemical Biophysical Research Communications had significantly higher ratings than Journal of Biological Chemistry (p < 0.05).”

  2. I would guess that most retractions are lawyer-proofed and thus somewhat bland in nature, and that it would take a reader with a perceptive eye to understand the exact nature and intent of the retraction.
    I would rather see a paper/research/analysis ranking retractions. As it stands now, a simple math error is listed as a retraction and lumped in with every paper by Diederik Stapel.
    As with everything, some errors leading to retractions are more harmful than others. A simple numbering system: 1 being a minor error that was caught by the author(s); then errors that render a paper faulty; then plagiarism (with self-plagiarism ranked below the plagiarism of others); then minor fraud; with the last being outright (go to jail, do not pass go, do not collect 200 dollars) fraud.
    This is common in everyday life: “Consumer Reports”, criminal and civil law, even apps on phones and computers are subject to ranking. Why not retractions?

    1. My concerns about this paper:

      a) A detailed list of the 171 retraction notices was not provided, and should have been, even if only as a supplementary file online. These notices, or links to the web pages where they exist (even if behind a paywall), are the evidence on which all of the study’s claims are based. At minimum, the 171 retracted articles should appear in the reference list. In essence, how are scientists and the public supposed to believe a study that does not provide the retraction notices on which it was based? This is the equivalent of providing the raw data for a scientific paper. How can we evaluate the accuracy of the study if the actual papers are not listed? We, too, want to examine those retraction notices.

      b) The authors do not explain EXACTLY how the choice of the 15 journals was made. Just a rough scan of the list in Table 1 already indicates bias (i.e., why was Science targeted, but not Nature?). They do describe a protocol, but as Marco points out, why not ANYAS? Why not other fringe journals, or journals of other publishers, or journals with slightly higher or lower impact factors? So the selection criteria appear to be flawed, and can be perceived by the journals that were selected as biased against them, while those that were not included were spared any scrutiny. How can you claim with confidence that there was no bias (or total randomness) in the selection?

      c) A scaling system of 0 to 3 is provided, but absolutely no mention of the word “quantification” in any of these levels, as if the level of ethical breach were not an important factor to consider.

      d) Why has the publisher of these 15 journals not been indicated? And why was the selection not made instead based on publisher, rather than on the criteria they did use?

      e) The authors state: “The purpose of this study was to create a rubric based on the COPE guidelines to quantify retraction notices”. But this is clearly not true. If one reads pages 1 and 2 of those COPE guidelines (http://publicationethics.org/files/retraction%20guidelines.pdf), COPE notes that retraction notices should: 1) “be published promptly”; 2) “be freely available to all readers (i.e. not behind access barriers or available only to subscribers)”. The authors have analyzed neither of these key aspects of COPE’s guidelines, so how can they claim their study is based on COPE’s criteria? Is this just a cheap tactic to validate the study through some “official” link to COPE? In fact, can we also get an explicit statement from the authors and from COPE that this is not a COPE-funded study, and that no funding was obtained to complete it? The COI statement makes no explicit mention of this.

      f) Most likely my biggest concern is the use of the term “fraud”. The authors stated: “The reason for retraction (fraud or error) was determined based solely on the information provided in the retraction notice.” This is an extremely serious claim because I have rarely (maybe never?) seen the word “fraud” in any retraction notice. I thus ask how the authors could claim fraud by authors if the retraction notices and the COPE guidelines quoted in reference 3 do not use the word fraud. The authors claim to base their study on the COPE guidelines, but then they classify quite a large percentage of the retractions as being based on fraud; COPE does not use this wording, so surely this annuls their claim that the study was based on the COPE guidelines? As far as I can tell, RW has also been pretty careful about the use of the “F” word on this blog, usually moderating entries that claim fraud using this term, to avoid libel. Therefore, I am confused: how did the authors conclude that fraud was committed, for example in Figures 3 and 4? This issue becomes extremely pertinent because what the authors have in essence stated in their paper is that the authors of 16% of the papers whose retraction notices they analyzed have committed fraud. Can the authors please also provide clear documentation about these papers (e.g., lawyers’ documents, publishers’ claims, authors’ admissions, etc.) that would prove undeniable fraud in these 16% of studies? These documents should also be added as a supplementary file.

      This study does provide some new insight about this topic that draws all of us to RW, but judging by the errors in experimental design, clear bias in journal selection and other flaws outlined above, including the clear naming of fraud in 16% of cases, I don’t see how this can be hailed by the academic community as being an accurate or even representative study. Are we supposed to say that something flawed is better than nothing at all? I am all for a critical appraisal of the retracted literature, but only when done carefully.

      1. All important questions, but I hope you didn’t do all this analysis just to post here! If you have not already done so, please submit your critique as a letter to the editor and share with us any replies…

        My preference would be for PubMed and other indexing services to color-code the font in retraction notice links whenever they add one to their records, so you could see at a glance what the retraction notice discloses:

        Black, as in black box, if there is no information in the published notice about the reason, equivalent to zero in this study.

        Red if the retraction notice mentions any form of misconduct.

        Green if the notice makes clear that retraction was for reason(s) other than misconduct, such as error.

        PubMed staff are quite good at the MeSH coding of papers for lots of other distinctions, and so hopefully could do this routinely whenever they add a retraction link.

          1. Below, please find a verbatim, unedited letter of complaint I sent to MDPI a few moments ago.

            Complaint: MDPI (Publications), a confirmed predatory OA publisher

            Dear Dr. Martyn Rittman, Chief Production Editor, MDPI

            NOTE: 2 documents will be sent to all in e-mail 2.

            My claim: the rejection of my Letter to the Editor was purely political revenge and not based on any academic grounds.

            My call: Boycott MDPI. Remove yourselves from the editor board to avoid professional injury from this association. Retract your papers to send a strong message that this kind of bias and behavior cannot take place in publishing.

            Other basic requirements: I think Retraction Watch needs to address the exact links it may have with MDPI and the cozy relationship with Prof. Grant Steen, giving him ample coverage on RW. MDPI also needs to explain exactly why they vetted Ivan Oransky as a peer reviewer for this paper, and why this was clearly not perceived as a conflict of interest, especially since this special issue got immediate coverage on RW while other stories in plant science have been relegated to the back seat. MDPI also needs to make the peer reviewers’ comments available so that we can judge if there were any rejections and why the issues that I picked up were not picked up by these so-called professional peer reviewers, or even elite editor board members. Prof. Grant Steen and Prof. John Regazzi should also make their positions clear.

            My proof: Please read the information below and my two documents very carefully.

            I am astonished by the decision that was reached on February 6, 2014 about my letter in response to a paper published in the Grant Steen special issue of Publications (http://www.mdpi.com/2304-6775/2/1/14), co-authored by Bilbrey, O’Dell and Creamer, published by MDPI, the Sino-Swiss open access publisher. In fact, I did not respond immediately because I had many other things to do. When I tried to access my MDPI account on February 7, I found that my account had been blocked. Two weeks later, despite my request for information (see e-mail below), the publisher has not had the dignity of responding and providing a reason for why my account was blocked. Moreover, I have complained once before about the peer reviewers’ decisions on a paper submitted to their journal “Plants”, which was followed by a really unprofessional and aggressive e-mail. These three incidents, coupled with the very recent news that MDPI is now an official “predatory” open access publisher, as defined by Jeffrey Beall (http://scholarlyoa.com/2014/02/18/chinese-publishner-mdpi-added-to-list-of-questionable-publishers/), have now forced me to respond to this latest unscholarly rejection.

            I wish to explain why I am not pleased with the behavior of this editor, Prof. Grant Steen, and his host publisher, MDPI. The paper by Bilbrey et al., although covering a very important issue, contained, as I saw it, some fundamental flaws, the most important of which was the introduction of the term “fraud” into their new classification system. I promptly posted some of my concerns on Retraction Watch (http://retractionwatch.com/2014/01/27/a-rating-system-for-retractions-how-various-journals-stack-up/). A blogger encouraged me to submit a Letter, which I did, on January 30, 2014. Since the journal did not have any guidelines with respect to Letters to the Editor (http://www.mdpi.com/journal/publications/instructions), I submitted my letter to Prof. Grant Steen, the Guest Editor, the Editor-in-Chief, Prof. John Regazzi, and one management e-mail. Much to my surprise, I received an almost immediate response from Dr. Martyn Rittman, Chief Production Editor, but not from any of the editors. At that time, I did not think much of this. A few days later, on February 3, 2014, Dr. Rittman indicated that the letter could be accepted, but only if reduced to one page. I was a little irritated with this because how could I reduce a paper of 6-7 pages into one page and still make the same detailed argument? Moreover, page limits seemed redundant for an open access journal. Finally, where in the Instructions to Authors does it claim that a letter to the editor should be only one page long? Despite these three valid reasons to protest, I decided to make the edits and resubmit, also on February 3, 2014. On February 6, 2014, MDPI, specifically, Dr. Rittman, rejected the letter.

            I have the following questions:
            a) Why did an editor board member not handle my paper but rather a management figure?
            b) What decision, if any, did Prof. Grant Steen play in this decision? This is important because I have been fiercely critical of Prof. Steen and his unfounded bias in editing before. So, I want to know, in black and white, if this was just an act of revenge?
            c) What academic basis was there to the rejection? The rejection sounds much more political to avoid damage that one of the very few papers published in this “ethics” special issue should actually be grossly flawed.
            d) I believe that the correct way is to address the journal, which should then confront the authors. What MDPI did was completely unheard of. They told me to contact the authors directly and seek a resolution through the authors. What, then, is the value of a letter to the editor, if I cannot openly and publicly critique a paper published in your journal?

            As you can see, I am extremely displeased by this situation. Where am I supposed to publish my concerns about the factual content of a paper in your journal, if not in your journal? Am I supposed to team up with the authors of the paper who I am critiquing?

            Consequently, I have e-mailed all editors to share of this purely non-academic scandalous behavior by MDPI. If any authors who have published there have any scruples, they would withdraw their papers immediately from this scam journal, with a scam “editorial” system, which is run by management. I call on Bilbrey et al. to please correct their paper appropriately, and to respond to my queries in the original version.

            From my own personal experience, I consider the publishing practices of MDPI to be highly questionable, very unprofessional, grossly non-academic and totally biased. This in itself should annul the very existence of this one-noun journal, Publishing.

            Judging by the number of nouns in the English language, I assume that business will be very good for MDPI in the future. You may take this “sayonara” e-mail to imply that you will no longer need to activate my account, either. Fortunately, there are many other publishing options, including self-publishing.

            Finally, I look forward to a response from all parties queried, authors, publisher, editor board and Retraction Watch. I will be publishing this Letter and critique elsewhere, and will also include this entire communication and e-mails. It is time to bring accountability to the table, and show the truth, loud and clear. And everyone should have a fair say.

            The worst and most ironic part of this is that MDPI is a paying member of COPE (http://www.mdpi.com/journal/publications/about). I have long stood firmly against this commercialization of ethics. It stains publishing black and purposefully muddies the waters to make the boundaries between finance and ethics unclear. If COPE receives money from MDPI, then surely COPE must also be held accountable for its members’ behavior? Either that, or revoke membership.


            Jaime A. Teixeira da Silva

          2. An addendum. I wanted to find out the exact identity of the individual who had rejected my valid query of the paper published in his journal. Turns out that Dr. Martyn Rittman (http://www2.warwick.ac.uk/fac/sci/moac/people/students/2004/martyn_rittman) obtained his PhD in 2009, has only a total of four published academic papers (see PubMed.org), all published from 2009-2012, at least among Elsevier’s Sciencedirect, SpringerLink, Taylor and Francis and Wiley-Blackwell data-bases. I thus ask, quite frankly, with what authority, and under what specific clause in the instructions for authors, was my carefully considered critique denied publication in Publications by MDPI, Dr. Rittman?

          3. Below, please find a verbatim, unedited letter I sent to MDPI a few moments ago in response to the letter of explanation offered by Prof. Grant Steen, the Guest Editor of this special issue.

            Dear Prof. Grant Steen, (in fact, should I call you Mr. Steen?)

            Thank you, finally, for responding, as you should, as the Guest Editor of this special issue. What a beautifully crafted response, which must surely have needed the input of MDPI lawyers and management to rein in the sharp tongue that so characterizes your previous 2013 communications with me.

            Considering that my complaint is actually quite “fresh”, and that all editor board members have already received a copy of both versions, I think it would be wise to wait a few more days, perhaps even a week or two, to assess the response by other editors, who I hope will also come forward and comment, not only on your and MDPI’s mishandling of my Letter to the Editor, but also about the claims and criticisms of this publisher, MDPI, that you have selected to represent this important data by several “ethics” specialists.

            To be honest, considering the level of misconduct that is now starting to emerge from MDPI-related journals (for example, http://retractionwatch.com/2014/02/24/nine-year-old-plagiarism-allegation-leads-to-retraction-of-math-paper/#comments and http://scholarlyoa.com/2014/02/24/under-pressure-mdpi-tries-to-clean-house-retracts-paper/), I am not sure if in fact I am interested in having my Letter published by Publications, as I would not like my name to be officially associated with MDPI. Moreover, I am quite sure that I would no longer receive a fair and unbiased peer review.

            Moreover, the authors, Bilbrey et al. have not responded to my critiques either, so I will first wait for their official rebuttal and perhaps ask them to take the initiative of correcting their own mistakes, and then acknowledging me in their Letter to the Editor. This would in fact be the most appropriate form of action, a self-correcting measure following self-reflection.

            I do agree that all ideas must have a starting point, but your special issue addresses key aspects of the core of science publishing, including accountability, professionalism, peer review, transparency, accuracy and corrections. To date, from this experience, I have seen absolutely none of these aspects being practiced. Because of the ethics-centered nature of your special issue, it would make sense to be extremely careful with what is published, wouldn’t you agree?

            You make a false comment. I submitted two evidence-based case studies in October (about) 2013, which you flatly rejected, without peer review, and with very personalized unprofessional commentary. They were not massive data sets, but they were extremely pertinent, nonetheless, but given my activist nature as displayed here (http://www.globalsciencebooks.info/JournalsSup/13AAJPSB_7_SI1.html), you were too afraid to even consider my opinions and papers. Would you like me to share those comments with the entire editor board, including the two submitted papers? Therefore, your thinly veiled attempt to repair the damage that you have caused to the image of your own special issue, even possibly criticized subtly by Miguel Roig on Jeff Beall’s blog, quite entertainingly (see full quotation at end of my e-mail), does take one step toward repairing the situation, but does leave a permanent scar, nonetheless. To be honest, it really irritates me to see double-faced professors like you who show one face now under pressure and another face when imposing your rule of law in paper selection.

            Finally, as you know, fraud implies a legal aspect of misconduct, as I showed through footnote definitions. That means that most retraction notices indicate misconduct, but they precisely avoid using the word “fraud”, because to use the word fraud one must show intent to cause harm through a lie. What Bilbrey et al. did was introduce a term that was not officially part of the original 171 notices (I hypothesize), and I call on them, still now, to prove that the word “fraud” was used in any of the retraction notices. Until the 171 retraction notices are posted as supplementary material, either as verbatim quotes or as supplementary PDF files, my Letter and this case cannot advance because there is NO EVIDENCE. The lack of supporting evidence is a serious flaw of the Bilbrey et al. paper and of your editorial oversight. You are making a very serious mistake by supporting Bilbrey et al. in claiming that all misconduct reported in retraction notices is equivalent to fraud. A VERY SERIOUS MISTAKE.

            It is this last mistake that has now confirmed what I was originally concerned about, your ability to serve as an unbiased, “ethics” professional. When the Guest Editor of a special issue on publishing misconduct and ethics assumes that all misconduct is by association fraud, without any proof, then we must be concerned not only about that individual (in this case you, Prof. Steen), but also the papers that he/she is publishing or editing.

            We don’t have to be “ethics” specialists or “ethics” faculty to understand some basic aspects of ethics, Prof. Steen.

            As I say, my final decision to re-submit a Letter will depend on the feed-back of other editors and the original authors. I will give a reasonable deadline of 2 weeks to respond, after which I will make my decision. But let me be clear, I will not submit some 500 word limit letter. I will submit the original version, with some tweaks, based on the feedback I receive, and on the 171 supplementary PDF files or retraction notices that the authors allude to as their base for this “rubric”. If necessary, I will publish this as an open access case study. Your web-site continues still not to list the guidelines for Letters to the Editor, so your numbers and rules are purely fictitious. You fail to understand my argument. This is not a negotiation like in a flea market, I am claiming that word count is a useless parameter for an open access journal and your nonsense imposition of a 500-word word count does not exist on your web-site and is another weak attempt to silence my voice of discontent.

            If no editors respond, or if the authors stay cocooned waiting for their guest editor to provide defense about their own study and data, then so be it. Silence will then act as the judge.

            Finally, you have failed to declare conflicts of interest, especially related to MediCC, and please explain why your editors, some in extremely powerful positions and companies, have no COI statements. For an ethics-related journal, or special issue, this would surely be the minimal requirement, wouldn’t it? At least, from the eyes of a non-ethics specialist, that would be my perception…


            Jaime A. Teixeira da Silva

            Miguel Roig quote on Beall blog: “Some months ago I received an invitation to contribute to a special issue of one of the MDPI journals that was to be edited by a well-known contributor in my field. At the time of the invitation I was a bit suspicious because I had never heard of the journal and although it had not been listed in Jeff Beall’s list yet I felt somewhat uneasy about contributing to what appeared to be just another relatively new OA outlet with unknown staying power. I was getting ready to email the editor of the special issue to confirm that that individual was, in fact, heading the project when, coincidentally, the special issue was announced in a very trusted blog. So I agreed to contribute. Weeks went by, but due to a family health crisis I neglected to continue to work on the project until I received a reminder from the journal’s editorial office 3 weeks before the submission deadline. Then, a couple of days later on Feb 18th my co-author alerted me to Jeff’s new entry. Based on Jeff’s diligent work and on the various contributions to the present discussion (both, for and against MDPI), I decided that there were sufficient grounds for me to avoid this publisher altogether and have, therefore, withdrawn my offer to contribute. Some of the MDPI journals, perhaps including the one that I was going to submit to may, in fact, work under a genuine system of editorial and peer review. However, for me, the question arises as to whether I would want to be in any way associated with an ethically questionable publisher like MDPI. Of course, I am not naïve and fully recognize that no journal or editorial process is perfect and that there are ‘issues’ with just about every single major publisher and journal, as well as with the current peer review system. But, I think in this case there are just way too many of those ‘issues’ and lots of unanswered questions. Anyway, THANK YOU very much, Jeff. I think that your work has done a great service to science and scholarship. And thank you all for contributing to this discussion.”

            On Wednesday, February 26, 2014 11:27 PM, Grant Steen wrote:
            Dear Mr. da Silva;

            Thank you for your several submissions to Publications, including your recent Letters to the Editor.

            As you note, we have no guidelines regarding Letters to the Editor, because we did not anticipate receiving such. This lack of policy may have been an oversight. If Letters to the Editor represent the dialogue that any paper aspires to create, then publishing such letters promotes that dialogue. I am therefore in favor of publishing thoughtful letters related to any of the papers that we have published.

            Nevertheless, it was my decision not to publish either of your Letters to the Editor, about the paper by Bilbrey et al. The first letter, which you attached to your e-mail, is far too long (at 1,834 words) to publish as a letter; this is a paper and we can review it as such, if you wish. The second letter is a more reasonable length (at 411 words), but it did not make a substantial point. Readers need not take my word for it; each can make a judgment, as you have circulated both unpublished letters to all parties.

            My decision not to publish your letter was not an act of revenge; I was and am unaware of any criticism by you of my bias as an Editor, other than this recent exchange. The Bilbrey paper is, as you note, both timely and important, because the quality of retraction notices varies greatly between journals. The selection of retraction notices reviewed by Bilbrey et al. does not seem an issue to me, because a range of journals with different impact factors was evaluated. Each new effort has to begin somewhere, and the selection of journals evaluated seemed fine to me. I have no objection to publishing a list of the 171 retraction notices reviewed as supplementary material, but that is a decision better left to the authors. I believe that COPE guidelines did inform the Bilbrey criteria, though how promptly a paper is retracted probably depends more upon how promptly the need for retraction is noted than it does upon how quickly a journal acts.
            Finally, it does not make sense to take umbrage at use of the word “fraud” in discussing retraction; most retractions actually are a result of fraud.

            If you wish to rework the shorter letter, sharpening it to make a few well-reasoned points, we will consider it again. A letter should be no more than about 500 words, with just a few references; otherwise it begins to assume the dimensions of a paper. If referees score the letter sufficiently high to publish, we will offer Dr. Bilbrey (or other authors of the paper) an opportunity to respond.

            R. Grant Steen, PhD
            Guest Editor, Publications

          4. Retraction Watch has no links with MDPI other than one of us (IO) having served as a peer reviewer on this paper, which was disclosed in the post. We have covered retractions in their journals.

            Our relationship with Grant Steen has consisted of a number of email exchanges since we launched Retraction Watch. At one point, he asked us to consider collaborating on a project, and he has also invited us to contribute to the special issue of Publications that is referenced in this post. We had to decline both invitations because of a lack of time. We have also of course covered his studies of retractions, and have quoted him.

          5. Thank you Ivan, for that extremely important information. I praise RW for that full transparency. That clears up one chunk of unknowns. As soon as I get a formal response from MDPI, Prof. Steen and the authors, I will inform RW bloggers. The inability to critique papers published in MDPI journals, like Publishing, through valid Letters to the Editor, could be saying volumes about how MDPI is trying to subdue critique of what it is publishing, but this remains purely hypothetical while we wait for more information.

  3. A better system might be:

    0—Insufficient information to determine why the paper was retracted.
    1—Sufficient information.
    2—Sufficient information and “doing the right thing.”

    Where “doing the right thing” follows the criterion used here on RW.
    There is no accusation of “doing the wrong thing”, which might stimulate litigation, but by simply noting this, there is a clear stigma to all other retractions and, therefore, some pressure to actually provide full and complete information.

    1. I don’t think we can expect an indexing service to decide if the “right thing” was done in any particular case.
      What does that even mean? That all the authors requested the retraction themselves, regardless of the reason? or that the journal published the retraction with an explanation?

      But any indexer could easily see if the retraction notice gave any reasons for the retraction, and if those reasons included any form of misconduct (code red) or not (code green).
