Fake peer reviews: They’re all the rage.
Sixteen papers are being retracted across three Elsevier journals after the publisher discovered that one of the authors, Khalid Zaman, orchestrated fake peer reviews by submitting false contact information for his suggested reviewers.
This particular kind of scam has been haunting online peer review for a few years now, as loyal Retraction Watch readers know. This one is a classic of the genre: According to Elsevier’s director of publishing services, Catriona Fennell, an editor first became suspicious after noticing that Zaman’s suggested reviewers, all with non-institutional addresses, were unusually kind to the economist’s work.
Elsevier has actually hired a full-time staff member with a PhD in physics and history as a managing editor to do the grunt work on cases like this. Flags were first raised in August, at which point the ethics watchdog went to town digging through all of Zaman’s other publications looking for suspicious reviews coming from non-institutional addresses provided by the scientist, an economist at COMSATS Information Technology Center in Abbottabad, Pakistan.
Here’s the main notice:
This article has been retracted at the request of the Editor and the Publisher.
After a thorough investigation, the Publisher has concluded that the Editor was misled into accepting this article based upon the positive advice of at least one faked reviewer report. The report was submitted from a fictitious email account which was provided to the Editor by the corresponding author during the submission of the article. The corresponding author, Dr Zaman, wishes to admit sole responsibility and to state that his co-authors were not aware of his actions.
This manipulation of the peer-review process represents a clear violation of the fundamentals of peer review, our publishing policies, and publishing ethics standards. Apologies are offered to the reviewers whose identities were assumed and to the readers of the journal that this deception was not detected during the submission process.
The papers appear in Economic Modelling, Renewable Energy, and Renewable and Sustainable Energy Reviews. One notice will also include an additional line thanking a co-author, Jamshed Uppal, for raising his own alarm.
We got in touch with Fennell, who told us what Elsevier is doing in the wake of these scams, and what editors and authors should do:
In the last couple of years, unfortunately, editors have learned to be more skeptical of reviewer details provided by authors, especially contact details not connected to institutions. Traditionally in this whole field there has been a sense that you work on the basis of trust. The starting point is, the vast majority of people are honest.
It’s a bit like plagiarism detection software – you might be looking for one in a thousand, but it becomes clear you need to start looking for it…About a year or two ago it became clear these cases are not isolated….If someone recommends a reviewer, we suggest [editors] verify the email address against SCOPUS.
…
Our message for editors is, be alert but not alarmed. This is a small minority but we do need to start watching out for it.
…
From our perspective, there are two areas that we consider ourselves to have very strong responsibilities in.
First, we provide our editors with the best support, tools, and guidelines…editors may never have come across this before, it’s very rare, so we can provide them with really good support.
Second, we really feel we have a role in educating authors…the entire community has a vested interest in upholding ethics…We hold 200 workshops at universities every year to talk to authors about how to get your papers published. There are always ethics talks, and those are very popular. We come across a lot of confusion, especially with younger researchers…Yes there are clear cut cases, but there are also a lot of grey areas.
We consider ourselves to have an important role in prevention. We try to put a positive tone to our education material, so it’s not a draconian “we will catch you” – it’s also about the importance of research integrity for science, the perception of science with taxpayers…there are a lot of rewards for doing this the right way.
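Fennell’s suggestion that editors check reviewer details against Scopus can be partly scripted. The following is a minimal sketch, not Elsevier’s actual workflow: it assumes a registered Scopus API key, and the key, name, and affiliation shown are placeholders.

```python
# Minimal sketch: look up a suggested reviewer in Scopus before trusting the
# contact details an author supplied. Requires a registered Scopus API key
# (placeholder below); the name and affiliation are example values.
import requests

API_KEY = "YOUR-SCOPUS-API-KEY"  # placeholder, obtained by registering with Elsevier

def scopus_author_hits(last_name: str, affiliation: str) -> list:
    """Return raw Scopus Author Search entries for a name/affiliation pair."""
    resp = requests.get(
        "https://api.elsevier.com/content/search/author",
        params={"query": f"AUTHLAST({last_name}) AND AFFIL({affiliation})"},
        headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("search-results", {}).get("entry", [])

# An editor (or editorial assistant) would compare these hits -- current
# affiliation, subject areas, publication history -- against the reviewer
# details the author provided, and treat mismatches as a red flag.
for entry in scopus_author_hits("Smith", "Example University"):
    name = entry.get("preferred-name", {})
    print(name.get("surname"), name.get("given-name"),
          entry.get("affiliation-current", {}).get("affiliation-name"))
```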
None of the papers has been cited more than seven times, according to Thomson Scientific’s Web of Knowledge, although one of the Renewable Energy studies earned a “highly cited” designation from the database.
We’ve reached out to Zaman and Uppal, and will update with anything we learn.
Here’s a complete list of the papers to be retracted:
Economic Modelling
- The relationship between foreign direct investment and pro-poor growth policies in Pakistan: The new interface
- The relationship between financial indicators and human development in Pakistan
- The relationship between agricultural technologies and carbon emissions in Pakistan: Peril and promise
- Distributional effects of rising food prices in Pakistan: Evidence from HIES 2001–02 and 2005–06 survey
- Effect of oil prices on trade balance: New insights into the cointegration relationship from Pakistan
- Exchange rate pass-through in to inflation: New insights in to the cointegration relationship from Pakistan
- The consequences of revenue gap in Pakistan: Unveiling the reality
- The relationship between growth–inequality–poverty triangle and pro-poor growth policies in Pakistan: The twin disappointments
- The relationship between growth and poverty in forecasting framework: Pakistan’s future in the year 2035
- The relationship between audit committees, compensation incentives and corporate audit fees in Pakistan
- Foreign exchange risk in a managed float regime: A case study of Pakistani rupee
- Impact of foreign political instability on Chinese exports
Renewable Energy
- Modeling the causal relationship between energy and growth factors: Journey towards sustainable development
- Questing the three key growth determinants: Energy consumption, foreign direct investment and financial development in South Asia
Renewable and Sustainable Energy Reviews
An editor should never consider a paper or a reviewer from a private email!
I can’t agree. There are lots of legitimate reasons why one might migrate away from one’s institutional address, including unreliable service, storage limits, and privacy concerns (especially but not only in corporate situations). It wouldn’t surprise me if within a very few years institutions start throwing up their hands, zeroing out the institutional-email budget, and telling staff and students to just use gmail or whatever they like. At that point, policies such as you suggest will be simply untenable. I absolutely agree that noninstitutional addresses should face an extra step of verification … but given the work involved on my part in reviewing a paper, having a staffer google me, locate my “faculty” web page, and check that somebody gave them my matching email doesn’t seem overly onerous.
…like institutions are immune from fraud!!!!
So no one outside academia or industry (or retired from either) can possibly have an idea worthy of scientific publication?
Fennell states: “Our message for editors is, be alert but not alarmed. This is a small minority but we do need to start watching out for it.” So, with how much confidence can Fennell state that the 12.5 million papers published in Elsevier journals on sciencedirect.com were all truly peer reviewed? This case fortifies the notion that peer review, as well as the ELS online submission system at Elsevier, is broken. A “small minority” to scientists could be translated as 1% or 5%. 1% or 5% of 12.5 million is 125,000 and 625,000 papers, respectively.
Ms. Fennell, I am alarmed. More alarmed with Elsevier than with Zaman. Because Zaman basically was able to beat (until now) the world’s most sophisticated and largest science publisher. That in itself is astonishing.
In my opinion, based on quite a few stories related to Elsevier, a deep distrust of Elsevier and its editorial integrity has now set in. And, to cover all 12.5 million papers, Elsevier has hired ONE “full-time staff member with a PhD in physics and history as a managing editor to do the grunt work on cases like this”? Something is really, really wrong with this picture.
You know what this means: the paper/review writing industry will find a way to create fake institutional e-mail addresses.
This is a rather extreme example of manipulating the peer review system. I suspect a more common gaming of the peer review system is suggesting reviewers who are cherry picked from the broader community of researchers. For example, suggested reviewers may be people who the author knows will agree with the conclusions in the manuscript (regardless of data/methods) and may be frequent collaborators with the author.
Maybe it is time to start asking referees to accredit themselves, especially if a non-institutional e-mail address is offered. An ORCID researcher ID is a safe option to prevent peer-review fraud. Some journals are already using it, e.g. ScienceOpen.
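In practice, an editor can pull a suggested reviewer’s public ORCID record with a few lines of code and compare the name and employment history against what the author supplied. A minimal sketch follows; the iD shown is ORCID’s own documentation example, and the field navigation assumes the v3.0 public API.

```python
# Minimal sketch of looking up a suggested reviewer's public ORCID record.
# The ORCID iD below is ORCID's documentation example (a placeholder); a real
# check would compare the name, employments, and any public email against the
# reviewer details the author provided.
import requests

def fetch_orcid_record(orcid_id: str) -> dict:
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},  # public API, no key needed
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

record = fetch_orcid_record("0000-0002-1825-0097")  # placeholder iD
name = (record.get("person") or {}).get("name") or {}
print((name.get("given-names") or {}).get("value"),
      (name.get("family-name") or {}).get("value"))
```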
How do you figure? I mean, institutions can pay ORCID for the privilege of populating ORCID’s database (nice work if you can get it), but I don’t see anything stopping random individuals from creating records at will.
Exactly, ORCID is created by the same characters who created the online systems that have now been duped. Scientists, don’t get duped by these new marketing strategies. OUR work is now in serious danger because we are now competing with cheats who have been able to beat the system. And our integrity is in danger because this is the same system that claims ethical superiority to a simple submission by e-mail attachment. Online submission systems are nothing more than e-mail and scientist agglomerators that serve the sole interest of the publishers. And the holes in the system are now gaping wide open. The system has backfired really badly. I think this is only the tip of the iceberg of what we are about to see and learn (if it gets leaked to us).
I don’t know that I agree with JATdS’s extension of the complaints that are ready to hand about online submission portals, but I don’t exactly look to publishers as a model of technological competence.
Consider the Crossref policy document (PDF) on name ownership and control. Why are DOIs assigned to publishers? Because it never occurred to anybody that journals might change hands. (Or the publishers Just Didn’t Understand.) As a consequence, DOIs aren’t actually unique;* the system had to be kludged to allow an “alias” to point to a “prime.” Different journals with the same title? Not Our Problem! Your problem! You fix!
I may have already mentioned this, but I’ve seen one case of an editorial department blithely dictate that journals were to be abbreviated according to the Crossref title list (which happens to have a search function that I suppose might not be completely crippled were its syntax actually documented anywhere) rather than one of the long-established systems. Why? Pattern matching! Automagical linking of the references!
The upshot was that a third subsection on references was added to the editorial instructions for when to ignore the Crossref title list, on behalf of the EO.
The problem is that the list, like the basic DOI notion, is an uncurated mess. It only works if you work with us! See the fee schedule! (Nonetheless, there exists a document not just on the proper styling of a DOI, but “why” – which betrays a wholesale failure to have a grasp on how what is came to be, despite imagining that something that doesn’t work now is a reason to reject something that does (HTTP) because it “may no longer be the dominant protocol and browsers like Internet Explorer will handle DOIs natively.”)
As there’s no boldface to be had, I recommend letting the quoted text sink in. One might also observe that while the document has been labeled “obsolete,” precisely nothing has changed under the hood.
Anyway, ORCID has a similar** “white paper,” exemplifying, inter alia, that the question whether to use hyphens or “allow” link shorteners was subject to the strictest design scrutiny.
Many years ago, I did a stint in the charismatic phase of AI. One very strange phenomenon that I recall was that visitors would appear from the realm of library science who were strongly under the impression that one of the disciplines must be a subset of the other, usually as representatives of the superset. I really can’t help but get the feeling that the fundamental mindset underlying the decision-making phenotype of Important Publishing Types hasn’t much changed after all these years.
* In short, a laborious form of DHCP.
** In that it appears to have just been slapped together over time, at least.
Despite the length of my foregoing reply, I still failed to reiterate that ORCID only “works” if enough institutions are willing to pay ORCID to make ORCID work.
One might idly imagine a Ph.D. based upon trying to nail down the sociological boundary between this and “multilevel marketing.”
I am sure ORCID has weaknesses; there are always corporate interests in such cases. It is still much more difficult to fake than a Gmail address of a fake referee, which the editor has to work with. Are there better alternative researcher databases, which offer unique and near-unfakable identification? Until there are, editors should use ORCID.
Leonid, how does relying upon ORCID represent an improvement (for this purpose) over simply requiring an institutional E-mail address, with all the concomitant problems?
Zaman has had a paper retracted earlier this year in Scientometrics:
dx.doi.org/10.1007/s11192-014-1305-8
This was due to plagiarism.
He has not updated his researchgate profile, however, so the paper is still listed in his profile without any indication it has been retracted.
In more than 7 years of journal editing, our journal *never* asked authors who should referee their papers. We used the very extensive knowledge of our Editorial Board instead. This both prevented fake reviews and occasionally uncovered pre-publication self-plagiarism. To me this epidemic of dishonesty looks like opportunism meeting deskilling at the top.
Same here. Of course, it takes some time. But I simply ignore the list of reviewers suggested by the authors. The simplest criterion is to select reviewers from the papers CITED in the manuscript. It’s 50/50: they are either fierce enemies or knowledgeable colleagues, so in the end things are OK. Plus, using three reviewers from different continents helps too. The goal is to minimize, not to eliminate, these cases. Our publisher provides a good database with first-degree academic relationships, which helps a lot.
So you ask the authors to suggest the reviewers just to later ignore their suggestions? Why waste the authors’ time then?
Speaking of Researchgate, the portrait of Zaman — taken from his researchgate profile — has been buffed up and filtered with photoshop until it resembles some android entity from the other side of the Uncanny Valley. I’ve seen the same thing on other researchgate profiles for academics from Pakistan and neighbouring countries. Is impression management really that crucial?
Zaman’s researchgate profile indicates that he has published regularly in Quality & Quantity. Have the editors been alerted to his modus operandi?
During my career of journal publishing I was never asked to supply reviewers. How would you ever suggest a name without at least some self-doubt as to the question “do they like or hate my work?” Seems like a set-up for failure either way.
It is suggested to use a double-blind review system for the initial review.
Linda, I fully agree that the members of an editorial board should have the prime responsibility for peer review, and that if there is not enough technical skill or understanding, external experts should then be consulted, organized by the editors, not by the authors. It is amply evident that requesting authors to suggest their own peer reviewers will open up the system to fraud, deceit and abuse, as is (and has been) evidently happening. Now the publishers are in a bit of a fix, I believe, because it appears as if they are not sure which papers were peer reviewed by valid peers. In the case of Elsevier, that’s 12.5 million papers’ worth of doubt; in the case of Springer Science+Business Media, more than 8.5 million cases of doubt, etc. I think today my mistrust in these publishers has been cemented. And it is their fault, despite them trying to pass the buck onto scientists. Had they not created a fallible system, it would most likely not have been abused. The rush to publish quickly, the greed to increase the impact factor by pumping out more and more papers that could be cited, and the increasingly strained editorial and peer pool have also led to a gradual erosion of publishing values and quality control. And among these cases of fraud, I believe (or rather, I hope) that most are honest peer reviews that have led to acceptance and publication of papers. But how do we know? How can we trust the same publishers who gave us these increasingly draconian online submission systems that requested us to submit our own peer reviewers?
One of my own personal experiences, over several years, was with precisely an Elsevier journal. Many editors of Scientia Horticulturae, the world’s #1 horticultural journal, were simply mantelpieces, giving the aura of fame associated with highly ranked names, yet the true work of editing was being conducted by the “anonymous” peer reviewers, who were gradually being drawn from a central database, which itself was being built up from the names and contacts of individuals that authors submitted to the journal as candidate peer reviewers. The EICs served, until my fervent complaints became public, in many cases as a simple conduit between the “peers” and the authors, adding some standardized sentences to automatically generated revision, acceptance, or rejection e-mails. My battle with Elsevier, specifically that editorial board, is documented here at RW [1]. I believe that there are two problems associated with this: a) peers are not compensated, so there is no motivation, even though the publishers indicate to them that it is a noble cause to assist the advancement of society and science (even though it is simply an advancement of their profit margins), in a process that is nicely characterized by one of my favorite philosophical critics, Slavoj Zizek, as cultural capitalism [2]. We are witnessing a veritable tragedy taking place right before our very eyes. And while I share no kind words, or thoughts, for Mr. Zaman, for having cheated his colleagues, his country, science and Elsevier, I am also critical of Elsevier for not doing enough to have avoided this situation. I continue to insist, and I lay down my warning here once more: the increasingly aggressive implementation of systems like ORCID is not going to decrease the fraud; it is simply going to increase it and make it more complex as it evolves into forms that meet the militarization of science.
[1] http://retractionwatch.com/2014/04/10/following-personal-attacks-and-threats-elsevier-plant-journal-makes-author-persona-non-grata/
[2] https://www.youtube.com/watch?v=hpAMbpQ8J7g
In my opinion it’s time that all journals move to double-blind reviewing. This will surely reduce the chances of this kind of scam. Also, journals like the ones on the list, with high impact factors, should not ask for suggested reviewers. The editor should send articles for review to experts in the field. Double-blind review looks like a good option to reduce these frauds.
There is a simple solution: save all the authenticated data of reviewers, including their original email IDs, in a database, and send research articles only to those who are in the journal’s database.
A lot of journals have a real problem with not having enough reviewers. Academic scientists are busy, and even if they review twice as many papers as they submit in a year (and many do more), that will not be enough reviewers for the journals, since there are plenty of people who don’t do their share. However, journal editors are more to blame here, since not using the large pool of highly qualified industrial scientists is, shall we say, an inefficient use of resources? Or is that too strong a term to describe the huge waste of people available and the subsequent whining about not having enough reviewers…
Interesting point! Do you work in the industry?
I’d like to argue that industrial researchers would have to review papers at home in their free time; no company would continue paying them for this during “office” hours. Academic scientists review papers as part of their job, and academic work hours often stretch beyond the time covered by the salary (in Germany, PhD students are officially paid to work only 4 hours a day).
Also, where is the motivation for the industry researchers to do peer review privately, beyond possible nostalgia or idealism? They don’t care if the journal might in return look favourably on their own submissions.
No, I do not actually work in industry, but this point was brought up before on ‘In the Pipeline’, an excellent chemistry blog run by an industrial scientist (Derek Lowe). I’m not sure that the description of an academic job at many places in the US says ‘reviewing papers’ is part of it. In fact, I consider this work as something done outside official working hours anyway. It’s something that has to be done if you don’t want the editors to get pissed off with you, I suppose. You can’t reject too many review requests.
The motivation for industry researchers is there for those who write a paper once in a while, and most active industrial researchers have to keep up with the current literature even if they only publish patents (or nothing). A lot of industrial researchers comment on current research, as is seen regularly on ‘In the Pipeline’. Here is a recent discussion of a paper that I guess could have benefited from an industrial scientist’s perspective (I’m not in this field, btw).
pipeline.corante.com/archives/2014/12/17/j_already_known_chem.php
This also shows the carelessness of the editors and publishers of the journal. Can you believe that the journal “Economic Modelling” published 3 articles by one author [Khalid Zaman] in a single issue on two occasions [Volume 29, Issue 5, September 2012 and Volume 30, January 2013], and a total of 12 articles between July 2012 and September 2013 [Volume 29, Issue 4, July 2012, Pages 1220–1227; Volume 29, Issue 5, September 2012, Pages 1515–1523; Volume 29, Issue 5, September 2012, Pages 1632–1639; Volume 29, Issue 5, September 2012, Pages 1986–1995; Volume 29, Issue 6, November 2012, Pages 2125–2143; Volume 29, Issue 6, November 2012, Pages 2205–2221; Volume 30, January 2013, Pages 281–294; Volume 30, January 2013, Pages 375–393; Volume 30, January 2013, Pages 468–491; Volume 31, March 2013, Pages 697–716; Volume 33, July 2013, Pages 802–807; Volume 35, September 2013, Pages 409–417]?
This is a real injustice to the other authors who have submitted their papers to the journal and waited a long time to get them published. The publishers should have a policy on the maximum number of papers by one author that will be published in a volume or in a year.
It is great; we appreciate today’s technology and the efforts being carried out by Elsevier. We condemn this dishonesty. The journals and editorial staff must be careful about such circumstances. I also agree with the views of D. Saikia.
As I’ve already tried to establish DOIs as an example of misplaced confidence in publishers’ self-assessment of their own technological acumen, I cannot resist presenting another example that has fallen into my lap.
Exhibit 1: “Aluminium Compounds for Use in Vaccines.”
Exhibit 2a: I wonder where that link would go if I copied it.
Exhibit 2b: Well, to doi:10.1111/j.0818-9641.2004.01286.x, that’s where.
Exhibit 3: With the payload (PDF), sporting doi:10.1111/j.1440-1711.2004.01286.x.
You know what would really stink? If the latter didn’t resolve.
^ I failed to correctly close the italic tag after 2<a, sorry.
Argh, no, my error is worse: “the payload (PDF)”
It is a new type of dishonesty to obtain fake publications in the race to win the count of publications in good journals. I would request all researchers and authors to avoid this game, whether or not they can publish. To save your own name, your institute’s name, your country’s name, and the name of research, please stop this unethical practice.
What about Zaman’s papers in Springer, Taylor & Francis, and Emerald journals?
A very unfortunate incident. This should not have happened. Our heads are down with grief and shame. He should be punished severely and should be made an example for others.
Some questions arise here:
1- Why does the chief editor request that authors provide reviewers’ details? It is a matter of common sense what would happen “if a cat is asked to guard a cup of milk.”
2- Why have the papers been repeatedly sent to the reviewers suggested by the author?
3- Why did the chief editor keep his eyes closed while the addresses were non-institutional? Was he in a deep sleep two years ago?
4- Why are all authors required to provide suggested reviewers’ details even if they do not know reviewers in a particular subject?
It’s a real shame. He must be banned and blacklisted by all renowned publishers. Plus, I must suggest that publishers take care with suggested reviewers and try to find reviewers themselves, or make a pool of reviewers who are renowned and authentic.
From personal experience, when something like this happens, it usually reflects a degradation of the system. And the editors also need to take some responsibility here. As the commentator Kanwal above suggests, were the editors asleep when this took place two years ago? Verification of the authenticity of peer reviewers is not the authors’ responsibility, evidently, but that of editors and the publisher, in this case Elsevier.
The EICs should not only provide a public response here as to why the “system” failed not once but 16 times; the editorial boards should also be thoroughly examined by specialists.
Economic Modelling
http://www.journals.elsevier.com/economic-modelling/editorial-board/
Renewable Energy
http://www.journals.elsevier.com/renewable-energy/editorial-board/
Renewable and Sustainable Energy Reviews
http://www.journals.elsevier.com/renewable-and-sustainable-energy-reviews/editorial-board/
The editor should accept suggested reviewers only with an official email address and an official contact number.
This is unacceptable behavior, and he should be made an example of. I’m also concerned about papers reviewed by Dr Zaman, if any. The publishers should check those as well.
This is an extreme example of abusing the paper review system. I guess making strict policies will harm the vast majority of people, who are honest and fair in their work. However, appropriate measures need to be taken.
He was let go by the university:
http://ww3.comsats.edu.pk/ciitblogs/files2/DrKhalidZaman.pdf
The problem is that very often processing editors do not care about the list of suggested reviewers, although their merits are obvious. The result is often sad if a processing editor sends the paper to a reviewer who is not an expert in the field. I suspect that this is a way of manipulating the process to reject the submitted paper.
Kindly use original email IDs based on the reviewer’s organization’s domain.
Such email IDs must also be verified by the respective organization. Some IT support person might be misusing an organizational domain email ID as well. In that case, simply verify the email ID through the organization’s website or HR department.
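A first-pass version of that check can be scripted. The sketch below is only a heuristic, using standard-library calls and hypothetical example values: it confirms that the email’s domain resolves and matches the institution’s web domain, not that the mailbox belongs to the claimed person; confirming that still requires the website or HR step the commenter describes.

```python
# Rough heuristic sketch: check that a suggested reviewer's email domain
# resolves and matches the claimed institution's web domain. This does NOT
# prove the mailbox belongs to the named person -- a human still has to
# confirm via the institution's website or HR/department pages.
import socket

def registrable_part(host: str, levels: int = 2) -> str:
    """Crude last-two-labels comparison; multi-part TLDs like .ac.uk would
    need a proper public-suffix list."""
    return ".".join(host.lower().strip(".").split(".")[-levels:])

def email_domain_plausible(email: str, institution_site: str) -> bool:
    domain = email.rsplit("@", 1)[-1]
    try:
        socket.gethostbyname(domain)  # does the domain resolve? (a fuller check would query MX records)
    except socket.gaierror:
        return False
    site_host = institution_site.split("//")[-1].split("/")[0]
    # e.g. reviewer@cs.example.edu vs www.example.edu -> both 'example.edu'
    return registrable_part(domain) == registrable_part(site_host)

# Example with hypothetical values:
print(email_domain_plausible("reviewer@cs.example.edu", "https://www.example.edu"))
```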