Did you know there is a black market for scientific papers? Unfortunately, there is a growing trend of authors purchasing a spot on the author list of papers-for-sale – and the better the journal, the higher the price. This worrisome trend has been on the minds of Peggy Mason at the University of Chicago and Maria Sol Bernardez Sarria of Yale University, formerly associated with the Ethics Committee of the Society for Neuroscience, which publishes the Journal of Neuroscience (Mason as Chair from 2013 to 2015, and Bernardez Sarria as assistant). In this capacity, they regularly scanned several websites and journals for ethics-related information, and developed an approach that might give away sold authorship.
In November 2013, we happened upon a Science news item entitled China’s Publication Bazaar by Mara Hvistendahl (a story also well covered by Charles Seife in 2014 in Scientific American). Hvistendahl described people selling authorship on manuscripts intended for publication in journals listed in Thomson Reuters’ Science Citation Index (SCI). She explained that Chinese students and faculty felt so much pressure to publish in an SCI journal that a black market arose for “papers-for-sale.” Apparently, agencies in China broker between authors of manuscripts that are conditionally accepted (in revision) at SCI journals and academics in need of authorship on such publications. Authorship on manuscripts in revision is offered to those with the money to pay, and the cost rises with the prestige of the journal and the authorship position.
We were alarmed at this new method of fraud, worried that it could infiltrate the Journal of Neuroscience. The Ethics Committee immediately decided to run a pilot study to determine whether we could identify papers where authorship may have been sold. Given the modus operandi described by Hvistendahl, in which authors are added during manuscript revision, our inclusion criterion was resubmitted manuscripts that involved author changes. We set no geographical restriction or filter; even though this fraud tactic was identified in China, it may exist in other countries.
As was true of the entire Committee, the study operated independently of the Journal of Neuroscience’s editorial process; the editorial process for manuscripts that were examined was not affected by the pilot study.
Following extensive deliberation, we came up with a list of warning signs to screen for in the pilot:
- A cover letter substantially worse in grammar, spelling, and writing quality than the accompanying manuscript
- Few shared co-authored papers between combinations of authors
- Few authored papers for individual authors
- Few to no citations of papers by individual co-authors in the manuscript’s bibliography
- An absence of previous publications by one or more co-authors in the field of the manuscript
- The same email address used for multiple authors
- Textual overlap with other papers (i.e., plagiarized text)
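The checklist above lends itself to a simple flag-counting heuristic. The Committee applied these criteria by hand; the sketch below is purely a hypothetical illustration of how such a screen might be automated, and every field name and threshold in it is an assumption rather than part of the actual method:

```python
# Hypothetical sketch of the screening checklist as a flag counter.
# All field names and thresholds are illustrative assumptions.

def count_warning_signs(ms):
    """Return how many of the seven screening criteria a manuscript triggers."""
    flags = 0
    if ms["cover_letter_much_worse_than_manuscript"]:
        flags += 1
    if ms["shared_coauthored_papers"] <= 1:        # few shared papers among authors
        flags += 1
    if ms["min_papers_per_author"] <= 1:           # few papers by individual authors
        flags += 1
    if ms["coauthor_citations_in_bibliography"] == 0:  # no self-citation by any co-author
        flags += 1
    if ms["authors_with_no_prior_work_in_field"] > 0:
        flags += 1
    if ms["duplicate_author_emails"]:
        flags += 1
    if ms["text_overlap_fraction"] > 0.2:          # assumed plagiarism threshold
        flags += 1
    return flags

# Example: a manuscript triggering every warning sign
suspect = {
    "cover_letter_much_worse_than_manuscript": True,
    "shared_coauthored_papers": 0,
    "min_papers_per_author": 0,
    "coauthor_citations_in_bibliography": 0,
    "authors_with_no_prior_work_in_field": 3,
    "duplicate_author_emails": True,
    "text_overlap_fraction": 0.4,
}
print(count_warning_signs(suspect))  # prints 7
```

As noted below, several of these signals are perfectly normal for an early-career first author, so any real screen would have to discount flags that apply only to the first author.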
As an aside – we hope that by identifying some of these criteria, we do not inspire paper mills to change their tactics to avoid detection by, for example, writing more polished cover letters. Because of such considerations, it is likely that the list of warning signs will have to be modified over time.
We also considered the host server of the authors’ email addresses. We specifically speculated that addresses at Hotmail, Yahoo, Gmail, and their foreign equivalents could be indicative of a fraudulent manuscript. However, we found that non-institutional email addresses are too widespread to make a useful criterion: they appear in many legitimate manuscripts, while institutional addresses appear in at least some suspected fraudulent ones. We were also aware that several of the criteria – few shared or individually authored papers, few co-author citations, and no prior publications in the field – would be common for early-career scientists or trainees. Therefore, meeting these criteria for the first author alone was not sufficient to raise suspicion.
Lacking investigative power, we had no way to determine the efficacy of our methods. However, we did find that virtually all of the manuscripts had few or none of the features listed, whereas a few manuscripts had a large number of them. This distribution suggests that there are at least two types of manuscripts: those that are the collaborative work of the authors listed, and those that are the product of only a subset of the authors. We offer these ideas up to the community, as they appear not to produce false positives and may correctly detect at least some true positives.