A new analysis — which included scanning Retraction Watch posts — has identified some trends in papers pulled for fake peer review, a subject we’ve covered at length.
For those who aren’t familiar, fake reviews arise when researchers associated with the paper in question (most often the authors themselves) supply reviewer email addresses that they control, enabling them to write their own positive reviews.
The article — published September 23 in the Postgraduate Medical Journal — found that the vast majority of papers were retracted from journals with impact factors below 5, and that most included co-authors based in China.
As described in the paper, “Characteristics of retractions related to faked peer reviews: an overview,” the authors searched Retraction Watch as well as databases such as PubMed and Google Scholar, along with other media reports, and found 250 retractions for fake peer review. (Since the authors concluded their analysis, retractions due to faked reviews have continued to pile up; our latest tally is now 324.)
Here are the authors’ main findings:
…a total of five publishers and 48 journals were involved in retractions related to faked peer reviews. Indeed, we must acknowledge that the number of publishers and journals was relatively small. More notably, there was a great discrepancy regarding the contribution of different journals in terms of the quantity of retractions. In particular, a low proportion of journals contributed to a large number of retractions; by comparison, a high proportion of journals contributed to a small number of retractions.
Here are some of the journals the authors found were hit harder than others (including links to our coverage):
The top 5 journals included the Journal of Vibration and Control (24.8%), Molecular Biology Reports (11.6%), Immunopharmacology and Immunotoxicology (8.0%), Tumour Biology (6.8%) and European Journal of Medical Research (6.4%). The publishers included SAGE (31%), Springer (26%), BioMed Central (18%), Elsevier (13%), Informa (11%) and [Lippincott Williams & Wilkins] LWW (1%).
The authors also found:
A majority (74.8%) of retracted papers were written by Chinese researchers. In terms of the publication year, the retracted papers were published since 2010, and the number of retracted papers peaked in 2014 (40.8%). In terms of the retraction year, the retractions started in 2012, and the number of retractions peaked in 2015 (59.6%).
(Side note: Faking reviews continues to be a problem, despite many efforts to identify and act on warning signs. Earlier this month, we reported on a 2016 paper that somehow managed to get published using a fake review.)
For more information on this phenomenon, check out our 2014 feature in Nature.
Should the number of faked reviews per country be normalized by the country’s population of scientific researchers, or some other denominator? This would correct for the fact that China has the world’s largest population.
But, if nearly all the faked peer reviews are from China, then this calculation may not matter.
Not all retractions were from China.
Nonetheless, China does not make up 75% of the world’s population, nor 75% of the world’s research output. I would respectfully disagree with anyone who thinks that China’s problems with science ethics are merely average from a global perspective.
Thank you very much for Retraction Watch.
I understand that locating suitable peer reviewers takes time for journals, but perhaps they should require that reviewers have institutional email addresses, or seek out reviewers independently when the author list is Chinese (75% of the reviewer-fakers coming from one country is significant).
The abstract concludes: “With the improvement of the peer review mechanism and increased education about publishing ethics, such academic misconduct may gradually disappear in future.” That is an optimistic projection; it appears that misconduct may actually be increasing, at least in terms of plagiarism. This is my personal feeling, and I may be wrong.
Under any circumstances, editors should check the publications of reviewers, whether or not they were suggested by the authors, to make sure the reviewers work in the same field as the submission and have no relationship with the authors. If editors seriously read the reviewers’ publications, how could they miss the reviewers’ correct contact details, and even their identities? Sending review invitations to faked accounts mostly means the editors did nothing but forward the manuscript to the email addresses and wait for feedback, which may suggest they never seriously read the submission. I have to say that the responsibility should be taken first by the journals and editors.
It may also be unfair to assume such cases are correlated with the author list. Fabricating others’ identities is normally the work of a few individuals, and it is quite possible that, if the reviewers’ identities can be faked, other authors’ contact details can be incorrect as well. The journals and editors should also take responsibility for identifying faked author email addresses, and for dissociating victimized co-authors from such retractions.
However, in most of these ridiculous cases, the journals and editors announce the retraction, and all of the responsibility seems to fall on the authors. That is not fair. Before we ask where the author list comes from, we should highlight where the journals and editors stand. Why not?