It’s happened again: Journal “cannot rule out” possibility author did his own peer review

Thomson Reuters’ online peer review system ScholarOne is having quite a year.

This summer, a scientist exploited basic security flaws in how the system handles author-suggested peer reviewers to review a whole pile of his own manuscripts, ultimately resulting in the retraction of 60 papers and the resignation of Taiwan’s minister of education.

Now, another journal that uses the system, Wiley’s International Journal of Chemical Kinetics, has retracted a paper because the authors provided their own peer reviewers and “the identity of the peer reviewers could subsequently not be verified.”

We asked editor Craig A. Taatjes if he was concerned that the authors had conducted their own peer review. His response echoes what we’ve seen in many of the breaches of these online systems so far:

We cannot rule out that possibility.

Here’s the notice for “Acid-Catalyzed Aquation of Ni(II)-Hydrazone Complexes: Kinetics and Solvent Effect” (paywalled), a paper originally published in August:

This article has been retracted by agreement between the journal editors and Wiley Periodicals, Inc. as the identity of the peer reviewers could subsequently not be verified.

We got more details from Taatjes:

The manuscript was accepted for publication based on two referee reports from peer reviewers. After the article was published we found that the email addresses used for the peer reviewers did not correspond to addresses normally used by these individuals as contact information in their publications.  When I contacted the reviewers, through their institutional, publicly available email addresses, it became clear that neither of them had reviewed, nor had knowledge of, the manuscript. The article was therefore retracted due to the fact that the identity of the peer reviewers could not be verified.

The corresponding author was not able to explain the source of the incorrect referee contact information he had provided to the editorial office. We believe that this is a single episode and have found no evidence, after scrutinizing our records, that other cases of a similar nature have occurred in the journal. In response to this incident we have reinforced our procedures for independent verification of reviewer contact information.
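
For readers curious what “independent verification of reviewer contact information” could look like in practice, here is a minimal sketch in Python. The data and helper names are hypothetical, and this is only an illustration of the idea, not ScholarOne’s or the journal’s actual procedure; anything it flags would still need a human editor to check, for example by writing to the reviewer’s publicly listed institutional address, as Taatjes did.

    # Hypothetical sketch: flag author-supplied reviewer e-mail addresses for
    # manual verification. Not any journal's actual procedure.
    FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "aol.com"}

    # Hypothetical lookup table: reviewer name -> known institutional domains,
    # e.g. drawn from the reviewer's published contact information.
    KNOWN_DOMAINS = {
        "Jane Doe": {"example-university.edu"},
    }

    def flag_for_verification(reviewer_name, supplied_email):
        """Return reasons (possibly none) to verify this address by hand."""
        reasons = []
        domain = supplied_email.rsplit("@", 1)[-1].lower()
        if domain in FREEMAIL_DOMAINS:
            reasons.append("free-mail domain: " + domain)
        known = KNOWN_DOMAINS.get(reviewer_name)
        if known and domain not in known:
            reasons.append("domain does not match known institutional domains")
        return reasons

    print(flag_for_verification("Jane Doe", "jane.doe.reviews@gmail.com"))
    # -> flags both a free-mail domain and an institutional-domain mismatch

A check like this only catches the crudest substitutions, of course; a determined author can supply a plausible-looking address, which is why the editor here ultimately contacted the reviewers through independently found addresses.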

Various publishers have had to retract more than 100 papers by a range of authors for similar reasons.

Hat tip: Stuart Cantrill

22 thoughts on “It’s happened again: Journal “cannot rule out” possibility author did his own peer review”

  1. the email addresses used for the peer reviewers did not correspond to addresses normally used by these individuals as contact information in their publications. When I contacted the reviewers, through their institutional, publicly available email addresses, it became clear that neither of them had reviewed, nor had knowledge of, the manuscript.

    Is there any reason why the editors rely on authors to supply the peer reviewers’ e-addresses, rather than look up those “institutional, publicly available” addresses themselves? Other than laziness?

    1. Stupidity? (One would hope not.) Cupidity? (The mechanism is not evident.) Carelessness? (Only the first time this happens.)

      1. In my experience of journals which allow a contributor to suggest possible reviewers, the website demands that the contributor also provide each reviewer’s e-address. As if the work of Googling it personally might cause the Editor to break out in a sweat.

    1. Yes, and when by some similar mechanism we learn of that bad wire consequently used in your daughter’s implanted defibrillator, we’ll revise that term to merely “amusing.”

  2. Is there any reason why the editors rely on authors to supply the peer reviewers’ e-addresses, rather than look up those “institutional, publicly available” addresses themselves? Other than laziness?

    One might surmise that this is precisely a result of monolithically automating manuscript tracking. This used to be a task that would fall to the “Managing Editor” (or the “Associate” version, depending on volume). It’s not foolproof, but depending on the tenure of the EIC, one might reasonably expect an editor to become familiar with the usual suspects.

  3. This could be one of the most important topics of 2014, for the simple reason that it injects serious and credible doubt about the peer review and manuscript management of ALL journals that use ScholarOne or the other popular online submission system, Editorial Manager (Aries Systems) [1], whether published by Elsevier, Springer, Taylor and Francis, Wiley, DeGruyter, Cambridge University Press, Oxford University Press, or other publishers. Not to mention the more-than-suspect online submission systems and lack of peer review of the “predatory” OA journals. In most of these cases, at least in the plant sciences, authors are required to input the names and details of anywhere from 1 to 5 “peers”. On some occasions, the screen indicates that not all “peers” might be vetted, and that the selection, or use, might depend on the discretion of the EIC.

    The problem is, in my belief (and not from experience), that it must be extremely simple to invent the names and identities of “peers”, especially for low- or mid-level IF journals, let’s say from 0.1 to 5. This is because, most likely, higher-level journals will exercise more rigorous peer review, requesting more peers and more details, such as their position, exact departmental address, or even a web page to prove that such individuals exist. Higher-level journals will also likely toss away individuals they consider to be under-qualified, or simply unknown to them. The selection process of peers is thus inherently flawed, and potentially biased, because in many cases the peers are suggested by the authors but then screened by, at most, one individual, the editor-in-chief or a similarly ranked person. And those who take advantage of the weakness of the system know that the editors are likely too busy to verify the facts (i.e., classical laziness).

    If the Bohannon sting taught us anything, it was that the traditional system of selection, verification and vetting of peers is flawed. And thus dangerous. And in some cases, highly corrupted. And even though we may be referring to “reputable” publishers, and even though the risks may be reduced as the levels of checks increase (what I have previously referred to as the increasing militarization of science [2]), the risk is not zero. However, imagine that the risk was 1%, or even as low as 0.01%. Can you imagine how many tens or hundreds of thousands of papers have most likely passed “peer” review with unqualified, under-qualified or, as in other cases, false or selfed peer reviewers?

    I think this is the can of worms that the publishers don’t want us to touch, and even though it may appear – superficially – that something is being done, for every case resolved I sense that a 10- or 100-fold expansion of misconduct or failure of the system is taking place. At some point, the system will fail, and collapse. They prefer to vilify the author/scientist base and keep them in check rather than become the focus of attention for fraud and corruption in science publishing. Even if the publisher or editor is not to blame, the fact that they sit in silence, or avoid discussing this issue, is as worrisome as, if not more worrisome than, the fraudulent scientists who conduct their own peer review under a disguised name or identity (I am not insinuating that this is the case for these authors; my comment is a general one). It is time to demand the public release of ALL peer reviewer reports of all papers ever published. It is time to also hold the publishers FULLY accountable for what they have published. I am sure that Thomson Reuters is also making a good profit off this product, so let this corporation also be held partially accountable for creating a product that is potentially flawed (or simplified to meet the required demands) because it has failed to secure the veracity of the peers. When will the scientific community wake up and see that, on one hand, we have a seriously bad set of corrupted apples in the basket, the scientists who commit misconduct, but on the other hand, the basket itself may also be rotten, in some respects, to the core?

    [1] http://www.editorialmanager.com/homepage/home.htm
    [2] http://retractionwatch.com/2014/03/11/so-what-happened-after-paul-brookes-was-forced-to-shut-down-science-fraud-org/#comments (see thread under Hans Muller’s comments)

    1. It is time to demand the public release of ALL peer reviewer reports of all papers ever published.

      Retroactively??? Acting as a peer reviewer, with the expectation of anonymity, I have sometimes phrased things (in parts of the report where I had the further expectation that they would not be sent unredacted to the author [rarely “authors”]) in more or less (and sometimes totally) self-revealing ways. I don’t think that it’s likely any of the authors I’ve reviewed would come after me if they knew I was the reviewer, but it’s possible (and there was one “peer” reviewer of a paper of mine that I have occasionally wanted to Do Bad Things To). I really think retroactively annulling guarantees of anonymity would cause more problems than it could possibly solve.

  4. I can imagine that, among hundreds of reviews that are requested by a journal in a year, one such falsified e-mail might slip through. I mean, we cannot expect an editor to literally know every researcher in the field and his/her affiliation, and then there is the occasional human error on a bad busy day.
    However, what I cannot fathom is that an editor would rely *exclusively* on the reviewers suggested by authors. If such suggestions are made, as an editor I would normally be inclined to pick one from the suggested list, but then add one or two of my own. I mean, even bona fide suggestions are likely to be “positively inclined” reviewers. So at least some independent reviewers should be included. Had that been done, there would not have been much of an issue here. Moreover, I would definitely read the suggested reviewer’s comments and see whether they seem overly positive (which would make alarm bells ring, and likely lead me to invite another opinion). In my experience, a suggested minor revision (or acceptance) on first submission would be reason to label the reviewer as “rather uncritical” and have a closer look at that superb paper myself.
    In summary, even if part of the blame goes to the software, then the primary blame still goes to the deficient and grossly negligent editorial practice that is displayed here, IMHO.

    (BTW: I had the opportunity to look behind the scenes of the Elsevier system as a guest editor; there, a fairly complete database is maintained of previous reviewers, so you could simply look up names and the system would know the e-mail address, etc. I never bothered to add additional reviewers myself, but I would have been very aware that the validity of the data needed to be checked.)

  5. Dave, unless you were specifically under a contract not to reveal any Elsevier secrets or information, please do share as much detail as possible about the Elsevier system as seen after signing in as an editor or reviewer. It is important for us to understand how robotized the system has become, and how many names and identities in that data-base are true and how many are not. Most of the developed world works with free-mail like gmail, yahoo, hotmail, aol, rediffmail, etc., so to demand institutional addresses is simply not realistic, even though I have encountered some online submission systems that reject peers based on their e-mail ending, even if they are genuine scientists! Can, for example, specific editors and reviewers (peers) on that specialist data-base be traced back to individual manuscripts? If so, then this could be extremely valuable information. And should such documents be leaked, because Elsevier is certainly not going to tell us about the weaknesses of its system? Something like Elsevier-Leaks, Wiley-Leaks, or Springer-Leaks? I regularly hear of scientists who have been victims of so many fake or unprofessional peer reports. In some cases, the entire report is not bad, but there are aspects of lack of professionalism displayed by some “peers”, who use bad or insulting language, or just plain incredibly incorrect or unscientific suggestions. These instances can make an author’s blood boil. So, frankly speaking, if peers are getting credit for reviewing for Publisher X, Y or Z’s journal A, B or C, and proudly listing this on their CVs, then we, the public and the scientific academic community, have the right to full open access to all peer reviewers’ comments. Whether we are talking about Wiley, or not.

    1. I am under no contract that I am aware of. I do not know of any “secrets”. Overall, I did not encounter anything peculiar, or “weaknesses” other than human ones. It was a helpful system in my experience.
      Regarding being able to track reviewers to specific manuscripts: yes, that was possible, but only within the specific journal. That allows one to see whether a reviewer isn’t already reviewing another paper currently, for instance. It also listed some generic data, like how many days on average a registered reviewer takes to return a review. However, it does not allow you to access other manuscripts besides the ones you are assigned to, so there is no “unlimited access” granted. Also, the fact that they cancelled my editorial access after my guest-editorship was over suggests that they take security seriously. I can’t help you by supplying ammunition against Elsevier because I do not have any complaints. (And I am not writing this because I generally like big publishers or have a financial relationship with Elsevier, which I don’t; quite the contrary, but their online peer review system is not to blame for their faults.)
      Regarding unprofessional peer reports. The submission system is only a tool. The editor uses the tool. In the end, the editors are responsible for the quality of the review process. That is why above I stated that the editor was most negligent, not the software. That would be similar to blaming MS-Word for allowing you to write an ugly letter.
      Finally, Elsevier keeps reviewers anonymous, and I value that. Personally I do not review unless anonymously, although I am fine with other people having different opinions on that. But I am certainly not going to promote reviewers’ identities or comments being published.
      I think that answers your query.

      BTW, talking about Elsevier, and being Dutch myself, you might be pleased to read this: http://daniellakens.blogspot.nl/2014/11/negotiations-between-elsevier-and-dutch.html

  6. I am sure that Thomson Reuters is also making good profit off this product

    Perhaps; I can remember at least one outfit that rolled their own before this became a separate marketing niche. I am decidedly amused by one bit in the ad blurb for the manuscript-tracking product:

    “ScholarOne Manuscripts is the premier journal and peer review tool for scholarly publishers and societies. It deftly balances a journal’s requirements to holistically aggregate content with an author’s inherent interest in getting published work out quickly to a professional audience.”

    This is not language aimed at the people who are actually going to be using the product. SK has recently produced what I daresay is another naively revealing entry on the subject.

    It’s possible to be a cog in the production wheel and self-consistently maintain that one is at least contributing in a small but meaningful way to the permanent literature. This takes hard work for, essentially, clerical laborers, but a few cases of “Holy smoke, I can’t believe I didn’t catch that” are valuable – if genuinely intangible – rewards.

    Projective fantasies of “knowing what [the] ‘scientists’ want,” on the other hand, are always a bad sign from what I’ve seen, and weirdly common in (small N) middle management. The best publications managers that I’ve known were in the business of poking and prodding to figure out what a society’s authors wanted, not what they ought to want.

  7. I wish to elaborate a bit more on why I believe the retroactive release of all peer comments is necessary. And I don’t see what the fear or the problem is (as expressed by some RW commentators). The identities of the reviewers can easily be kept anonymous, and the publisher can refer to them as peer 1, 2 and 3, if it wishes. But the scientific and academic public has the right to know what was written in the peer reports that led to the acceptance of those manuscripts. And if there is a serious problem, then the identity should be revealed, because the peer reviewer must then be held responsible for what he/she has written, advised, suggested, and ultimately recommended. Just as the author is held responsible for his/her paper, just as a PI must be held responsible for the sloppy student, just as an editor must be held responsible for the content of a journal under his/her watch, and just as a publisher must be held responsible for ALL the processes between submission and acceptance. A bad, sloppy or irresponsible (aka unprofessional) peer reviewer does not deserve to stay hidden and disguised in the comfort of the shadows of silence. These individuals must be held accountable, too, because they form part of the central chain between submission and acceptance. A paper does not exist without the approval and “blessing” of the peers and editors. Those who snort at my idea seem to ignore these really basic and fundamental aspects of peers’ and editors’ functions and responsibilities.

    I refer primarily to the journals that have directly benefitted from “good” papers to boost their impact factors and game the system, because there is a strong possibility that papers (currently an unquantifiable number) got through that shouldn’t have, either because the EIC or editors may have been biased (positively favorable), or because the peers may have been too lenient, or because, as in an increasing number of cases, the peers were ghosts, were vetted by the authors for being chums, or were fictitious characters created to impersonate real-life individuals. Or, as we are learning, because the publishers’ supposedly fail-safe system was not that safe. What the retroactive release of such documents allows is the unbiased analysis of the peer review process by the wider global peer community. This allows cracks in the methodology to be identified that perhaps had not been identified during “peer” review, unfair or incorrect comments by so-called peers, etc. It is the peer reports that may be the missing clue to understanding why (many) retractions have taken place, I believe.

    The risk of bias is even higher when the peer review is not blind or double-blind, because, a priori, upon opening up the manuscript, the identity of the authors would be known. I know this idea puts fear in the hearts of many – even mine – because at some point in our careers, most likely such cases existed, either in favor of our papers, or against them. If F1000Research can do it, then so can every other journal and publisher. And who should foot the bill? The publishers, of course, simply because they have derived benefit, financial and other (e.g., gaining intellectual property rights, or copyrights), from the FREE labour of peers and most often editors, too. So, their free-labor, exploitative system that is enshrouded in mystery and opacity must be exposed. By hook, or by crook. Of course, the scientific community must see the advantage of this, and must act in unison and with one voice, otherwise no pressure can be applied. Or the pressure that is applied will have no visible effect. A painful present that looks back at the errors of the past is the only way in which we are going to be able to move forward in any trustworthy and conciliatory way. I advocate openly for the public release of all past, present and future peer reviewer reports. So, Elsevier, Springer, Taylor and Francis, Wiley, DeGruyter, and all of the OA publishers listed on Beall’s list (honest and not)… are you up to it?

    1. Let me refine my earlier statement above. I do not object to my reviews being made public because I stand behind all of them, as long as my identity is not. Personal preference.
      However, retrospectively opening up reviews that were made confidentially isn’t fair in my view. The reviews were not written for a public; they were written for editors and authors. Opening them up prospectively I would not mind, provided the reviewer knows that will/can happen.
      However, one important aspect that seems to be overlooked above is that the reviewers DO NOT decide whether a paper is accepted or rejected. That is the editor’s job! Reviewers merely advise the editor about the science. The above seems to transfer some of the responsibility of the decision onto reviewers (by holding them partly accountable if a bad paper is accepted, or by judging them “too lenient”). That is not a good development in my view. I would not mind if an editor would for instance write an aggregate summary of the peer review process, pointing out how many reviewers were consulted, what their major concerns were, how they were addressed, in how many revisions, etc. Including how the editor arrived at his/her decision based on that history. Then the responsibility is put where it belongs. Although that would mean a lot more work for editors, documenting it all.
      Revealing identities publicly in cases where reviewers “failed” is off limits in my view, unless it is pointed out in advance to prospective reviewers that that may happen.
      (Personally, I would never again review anything if a faulty paper from someone else would be blamed on me; sorry for being selfish, but I have nothing to gain and all to lose then.)
      Another option would be to hire reviewers, that is, pay them. Then you can demand quality in return for pay. In the present system, a constructive suggestion is all one can expect from volunteers. You seem to allude to payment, and I agree the publishers have been able to exploit the review-and-edit-for-free tradition for too long.

  8. In the old days, the editors (at all levels) applied a preliminary filter before they sent a paper out to the reviewers. These days anything and everything that gets submitted is sent out for review. Please correct me if I’m wrong, but from where I sit, associate editorships are just currency exchanged between dominant figures in each field.

    1. That is not at all my impression. I find that most papers sent out for review (to me, at least) are generally fairly suitable. And I don’t consider myself the most lenient reviewer. I remember one recent exception, where the editor was new on the job; that can happen.
      (If a paper is so bad that an editor should not ever have sent it out for review, then I would be inclined to send it back straightaway with precisely that comment to the editor, and likely an intention not to review for that journal again for a while.)
      But it may well depend on the journal, publisher, or field.

  9. In response to Dave, the beauty of RW and other fair blogs is that they allow free and spirited discussion. I totally respect your position and opinion, just as I hope you will respect mine. First of all, great link to the Daniel Lakens blog. This is truly breaking news, and I hope that RW can focus on this story in its next weekend reads to raise awareness that Elsevier and its parent company are slowly being brought to their knees (finally). Ultimately, with every copyright form that we sign over to Elsevier, we stay poor, because we are not paid royalties for our intellect; our intellect is sold for record profits [1, 2], all to ultimately satisfy one set of individuals, the shareholders [3].

    It is time for universities to stand up against the corporate, exploitative model of Reed-Elsevier. I must admit, recent events have made me rather critical of several aspects of Elsevier policy, despite my great praise for several aspects of this publisher. So the fact that universities are finally waking up and holding Elsevier up against the wall for its truly predatory pricing policies is about time. I have advocated before that if all universities were to simply freeze subscriptions for one year and boycott the ridiculous access fees imposed by Elsevier, Springer, Taylor and Francis/Routledge, Wiley and others, they would be in a very powerful negotiating position. To date, the greatest problem has been the lack of institutional coordination. Can you imagine if the main markets for these predatory pricing policies were boycotted (and frozen for one year, for example) by the US, Canada, the UK, Australia, Brazil, South Africa, China, Japan, South Korea, the EU and a few other major academic markets, in conjunction, and perhaps led by the Dutch initiative? You know what would happen? Either these publishers would be forced to shut down their publishing operations for good, or bend under pressure to make all content open access as they feel the pressure of their balls being squeezed.

    So this initiative to try to force Elsevier to make ALL of its content open access would actually benefit the post-publication peer review process enormously, because it would allow the entire literature on Scopus (all 12.5+ million papers) to suddenly be opened up to the global academic community for positive, constructive use, and also for deep and critical analysis. I firmly support the Dutch initiative, and so should any PPPR proponent who believes in setting the academic record straight and getting the scientific literature cleaned up.

    Finally, Dave, I was in no way suggesting that any illegal or unethical action take place in the online reviewer/editor section, but the very fact that you indicate that any editor and reviewer can openly access the reviews of any other editor or reviewer for any paper on that data-base indicates that there is – contrary to what you claim – a serious security problem. I think Elsevier needs to address this aspect immediately if indeed what you are claiming is true. This is because not all editors or reviewers might be as honest as you.

    [1] http://www.reedelsevier.com/investorcentre/reports%202007/Pages/2013.aspx
    [2] http://www.reedelsevier.com/investorcentre/reports%202007/Documents/2013/reed_elsevier_ar_2013.pdf
    [3] http://www.reedelsevier.com/investorcentre/shareholderinformation/Pages/Dividendhistory.aspx

    1. “any editor and reviewer can openly access the reviews of any other editor or reviewer for any paper on that data-base”

      Did I write that? It is a bit subtler than that. Reviewers cannot access any other reviewer’s comments (unless copied into a decision letter that the editor sends out to all), nor any paper they weren’t involved in. Editors cannot access an overview of the “peer review record” of a paper they aren’t assigned to. One cannot access anything in the database related to other journals at all.
      However, one can look up an overview of a reviewer’s review history within a journal, much as a reviewer can look up an overview of his own past work for the journal.
      If an editor wanted to look up the reviews for a particular paper he wasn’t assigned to, for instance, he would have to guess who might have been a reviewer, look up that presumed reviewer’s history, check whether the paper was indeed assigned to that reviewer, and, if so, then look into that review history. That is: the search system is reviewer-based, not paper-based (see the rough sketch at the end of this comment).
      You could argue that this is unsafe, but an editor must have some privileges. He must be able to assess the track record of a reviewer in order to somewhat judge for instance how critical/reliable/responsive/… that reviewer is. I would say that is a good thing. It would also help prevent self-review scandals like the above. And yes, privileges are open to abuse. But if you shut everything down, an editor cannot work anymore either. In the end, any system is based on trust (even PPPR). Given that editors are not anonymous, I believe the system is reasonable.

      BTW: I’m doing this from memory; don’t have access anymore, so can’t look up the details, nor the current practice.

      PS: Not sure where my previous discussion was “spirited” or might have seemed to lack respect. Wasn’t intended as such.
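
      To make the reviewer-based-versus-paper-based distinction concrete, here is a rough Python sketch of such an access pattern. The names and structure are my own hypothetical illustration, not the actual Elsevier/Editorial Manager data model; the point is simply that the only index exposed to an editor is keyed by reviewer, so reaching a paper’s reviews means going through a reviewer’s history.

          # Hypothetical illustration of a reviewer-keyed lookup, scoped to one
          # journal. Not any real submission system's schema.
          from dataclasses import dataclass, field

          @dataclass
          class Review:
              manuscript_id: str
              days_to_return: int

          @dataclass
          class JournalReviewStore:
              # Deliberately, no manuscript-keyed index is exposed to editors:
              # to find who reviewed paper X, an editor must guess a reviewer
              # and inspect that reviewer's history.
              by_reviewer: dict = field(default_factory=dict)

              def reviewer_history(self, reviewer):
                  """What an editor can see: one reviewer's record in this journal."""
                  return self.by_reviewer.get(reviewer, [])

              def average_turnaround(self, reviewer):
                  """Generic data, e.g. average days to return a review."""
                  history = self.reviewer_history(reviewer)
                  if not history:
                      return None
                  return sum(r.days_to_return for r in history) / len(history)

          store = JournalReviewStore({"R. Smith": [Review("MS-101", 14), Review("MS-207", 21)]})
          print(store.average_turnaround("R. Smith"))  # 17.5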

    2. JATdS, for a more “inside” look, you might be interested to try the links provided on this page:
      http://editorsupdate.elsevier.com/issue-45-november-2014/welcome-issue-45-editors-update/
      A publicly accessible page, so I hereby share. Of course, this type of newsletter is always a bit self-advertising, but still it provides some insight in developments regarding peer review, PPPR efforts, and plagiarism detection going on at this particular publisher. I hope you find it informative.
