The number of so-called “predatory” open-access journals, which allegedly sidestep publishing standards in order to make money from article processing charges (APCs), has expanded dramatically in recent years, and three-quarters of the authors who publish in them are based in Asia or Africa, according to a new analysis in BMC Medicine.*
The number of articles published by predatory journals spiked from 53,000 in 2010 to around 420,000 in 2014, appearing in 8,000 active journals. By comparison, some 1.4-2 million papers are indexed in PubMed and similar vetted databases every year.
These types of papers have become a major problem, according to Jeffrey Beall, a librarian at the University of Colorado Denver who studies the phenomenon:
Predatory publishers and journals continue to be a serious threat to the scholarly communication ecosystem.
Recently, most predatory journals have been published by mid-sized publishers that maintain between 10 and 99 titles. The average APC was 178 USD, and most articles were published within two to three months of submission.
Most predatory publishing is confined to a few areas, the authors note:
Despite a total number of journals and publishing volumes comparable to respectable (indexed by the Directory of Open Access Journals) open access journals, the problem of predatory open access seems highly contained to just a few countries, where the academic evaluation practices strongly favor international publication, but without further quality checks.
The authors defined predatory publishers according to a long list of criteria established by Jeffrey Beall, who maintains “Beall’s List” of “potential, possible, or probable” predatory publishers:
After an initial scan of all predatory publishers and journals included in the so-called Beall’s list, a sample of 613 journals was constructed using a stratified sampling method from the total of over 11,000 journals identified.
But based on their system of sampling, it’s hard to know whether what they selected is truly representative of these types of journals.
It would have taken a lot of effort to manually collect publication volumes and other data for all 11,873 journals, so the only practical solution was to make a sample of journals to generalize from…a fully random sample would probably have resulted in an underestimation of the total number of articles, since journals from the large publishers with large journal portfolios would have dominated the picture and very few journals from single journal publishers would have been included in the sample. Instead we chose a stratified multistage sampling based on the size of the publishers by first splitting the publishers into four size strata (100+ journals, 10–99 journals, 2–9 journals and single-journal) and then randomly sampling publishers within each of these strata.
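To make that design concrete, here is a minimal sketch of stratified multistage sampling by publisher size (our illustration, not the authors’ code; the publisher universe and the per-stratum sample sizes below are invented):

import random

random.seed(42)

# Hypothetical universe of publishers: (publisher_id, number_of_journals).
# The size distribution is made up purely for illustration.
publishers = [(f"pub{i}", random.choice([1, 1, 1, 3, 15, 120]))
              for i in range(500)]

def stratum(journal_count):
    """Assign a publisher to one of the paper's four size strata."""
    if journal_count >= 100:
        return "100+"
    if journal_count >= 10:
        return "10-99"
    if journal_count >= 2:
        return "2-9"
    return "single"

# Stage 1: split the publishers into the four size strata.
strata = {}
for pub, n in publishers:
    strata.setdefault(stratum(n), []).append((pub, n))

# Stage 2: randomly sample publishers within each stratum, so single-journal
# publishers are not drowned out by the large portfolios.
# (Per-stratum sample sizes here are arbitrary.)
sample_sizes = {"100+": 5, "10-99": 20, "2-9": 30, "single": 40}
sample = {name: random.sample(members, min(sample_sizes[name], len(members)))
          for name, members in strata.items()}

for name, members in sorted(sample.items()):
    print(f"stratum {name}: {len(members)} publishers sampled")

Publication volumes would then be collected only for the sampled publishers’ journals and scaled up stratum by stratum.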
Although the authors’ reasoning makes sense, it would have been nice to compare their analysis to one based on a fully random sample of the journals. The authors themselves acknowledge the issue in their “limitations” section:
Due to the complexity of our sampling method, our results should be treated only as rough estimates showing the overall magnitude of predatory publishing and its central aspects.
Comparing the findings to a fully randomized sample would be nice but difficult, corresponding author Cenyu Shen of the Hanken School of Economics in Finland told us:
Given our assumption that publishers of different sizes (in journal portfolios) have quite different average journal sizes, a fully random sample might have provided less reliable results. Also we would not have gotten the more detailed analysis we now have both for general and each stratified publisher group. If we had done a fully random sample in parallel to compare the overall results, this would have led to a couple of more man-months of very tedious manual data collection.
Within the sample the authors chose, they looked particularly closely at where the publishers and authors were based. Here’s what they found for publishers:
The distribution is highly skewed, with 27 % publishing in India. A total of 52 publishers quote addresses in several countries, for instance, often a combination of the USA or a Western European country with a country from Africa or Asia. In order to establish how credible a USA/European address was, we took a closer look at the 3D street view of the address using Google Maps. If the result was a location that was not credible or, for instance, a PO Box, we classified the journal according to the alternative address.
And here’s what they found for authors:
Figure 8 describes the regional distribution of the 262 sampled corresponding authors, which is highly skewed to Asia and Africa. Around 35 % of authors are from India, followed by Nigerian authors (8 %) and US authors (6 %).
We asked Shen if she was concerned these results might bias some readers against authors from these countries. She responded:
Our results show that compared with other geographic regions, Asian countries have a relatively higher percentage of ‘predatory’ publishers. This doesn’t imply that all journals from that region and the papers published in them are absolutely ‘predatory’. To judge a journal, from our perspective, the emphasis should be put more on the quality of the papers in that journal rather than where it is operating.
Although the growth in predatory journals has created a new market, it’s still a small fraction of what’s generated from traditional publishing, the authors note:
Using our data for the number of articles and average APC for 2014, our estimate for the size of the market is 74 million USD. The corresponding figure for OA journals from reputable publishers has been estimated at 244 million USD in 2013. The global subscription market for scholarly journals is estimated to be around 10.5 billion USD.
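As a rough sanity check (our arithmetic, not a figure quoted from the paper), that estimate is consistent with multiplying the extrapolated 2014 article volume by the average APC: 420,000 articles × 178 USD ≈ 74.8 million USD.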
Still, the rise is remarkable, and the authors hold many actors accountable for it:
Unlike many writings about the phenomenon, we believe that most authors are not necessarily tricked into publishing in predatory journals; they probably submit to them well aware of the circumstances and take a calculated risk that experts who evaluate their publication lists will not bother to check the journal credentials in detail. Hence we do not uncritically see the authors as unknowing victims. The universities or funding agencies in a number of countries that strongly emphasize publishing in ‘international’ journals for evaluating researchers, but without monitoring the quality of the journals in question, are partly responsible for the rise of this type of publishing.
The problem is also part of a bigger picture, they add:
The phenomenon should probably, however, be seen more broadly as a global North-South dilemma where institutions in developing countries are unable to break free from the increasingly globalized and homogenized view of academic excellence based on ‘where’ and how often one publishes, instead of ‘what’ is published and whether the results are relevant to local needs. In that sense, these authors and their institutions are part of a structurally unjust global system that excludes them from publishing in ‘high quality’ journals on the one hand and confines them to publish in dubious journals on the other.
We asked Jeffrey Beall for his take on the paper, given the reliance on his criteria for predatory publishing. He acknowledged that the dramatic rise in this type of journal is troubling:
…it’s very clear that the trajectory of the number of predatory journals is skyrocketing. The data was grabbed a year ago, and I can tell you that the number of predatory journals has continued to follow this trajectory since then. Predatory publishers and journals continue to be a serious threat to the scholarly communication ecosystem. Recent growth in the number of article brokers and the increasing amount of junk science being published underscore and provide evidence of the threat.
He also disagreed with the authors’ suggestion that the growth of predatory journals will slow down:
I was struck by this sentence in the conclusion: “We found that the problems caused by predatory journals are rather limited and regional, and believe that the publishing volumes in such journals will cease growing in the near future.” This statement is unwarranted because the study did not examine all the problems that predatory journals cause. The reported data don’t support the statement….Also, their attempt to minimize the problems caused by predatory publishers as “regional” is misleading. India has 1.2 billion people, and China has 1.3 billion. There are tens of millions of researchers in this region.
Predatory journals have made the news. This year, the International Archives of Medicine was delisted from the DOAJ after it accepted, within 24 hours, a bogus study claiming chocolate had health benefits. In 2013, the author behind that chocolate study, John Bohannon, tricked more than half of a sample of 300 OA journals into accepting fake papers submitted under a fake name and institution. Last year, the Ottawa Citizen tricked a cardiology journal into publishing a paper with a “garbled blend of fake cardiology, Latin grammar and missing graphs,” all for the price of 1,200 USD.
*Note, 8 a.m. Eastern, 10/1/15: When we originally published this post at the scheduled embargo lift time set by the publisher, the DOI for the paper was not resolving, and the paper was not available on the BMC Medicine site, so we did not include a link. The paper became available at the journal site some hours later, so we’ve added a link.
I haven’t been able to read the paper — the DOI doesn’t yet resolve — but I don’t see why there’s a problem with using stratified sampling instead of simple random sampling. A properly constructed stratified sample is just as validly randomised as a simple random sample, and is typically more efficient. In most cases, stratified sampling is a feature, not a bug.
I agree 100%. The notion that a random sample is better is simply not well informed statistically. A stratified random sample is better in this case.
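For readers who want to see why, here is a toy simulation (ours, with invented numbers) comparing estimates of a population total under simple random sampling versus stratified sampling when the population is as skewed as the predatory-journal market appears to be:

import random
import statistics

random.seed(0)

# Invented population: 20 large publishers (thousands of articles each)
# and 980 small ones, mimicking the skew the paper describes.
big = [random.randint(2000, 6000) for _ in range(20)]
small = [random.randint(5, 50) for _ in range(980)]
population = big + small

def srs_total(pop, n):
    """Estimate the population total from a simple random sample of size n."""
    return statistics.mean(random.sample(pop, n)) * len(pop)

def stratified_total(n_big, n_small):
    """Estimate the total by sampling each stratum separately."""
    return (statistics.mean(random.sample(big, n_big)) * len(big) +
            statistics.mean(random.sample(small, n_small)) * len(small))

srs_estimates = [srs_total(population, 100) for _ in range(2000)]
strat_estimates = [stratified_total(10, 90) for _ in range(2000)]

print(f"true total:       {sum(population):,}")
print(f"SRS stdev:        {statistics.stdev(srs_estimates):,.0f}")
print(f"stratified stdev: {statistics.stdev(strat_estimates):,.0f}")

Run it and the stratified estimator’s spread comes out far smaller, which is the commenters’ point: when a few large publishers dominate, stratification is a feature, not a bug.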
The full reference and links are:
‘Predatory’ open access: a longitudinal study of article volumes and market characteristics
Cenyu Shen and Bo-Christer Björk
BMC Medicine (2015) 13:230
DOI: 10.1186/s12916-015-0469-2
http://www.biomedcentral.com/1741-7015/13/230
http://www.biomedcentral.com/content/pdf/s12916-015-0469-2.pdf
I found the paper here: http://www.biomedcentral.com/1741-7015/13/230
Yes. It’s there now. It wasn’t when I wrote. There’s a problem (perhaps relevant to the Embargo Watch blog) that embargoes often expire and media stories appear many hours before the papers are available. It’s especially noticeable in New Zealand, because our time zone is so far from the time zone of the major publishers.
It doesn’t matter so much for this example, but it’s annoying when it happens to heavily-promoted medical research so that the story is all over the news but doctors and scientists can’t read the actual paper until maybe the next day.
Indeed: https://embargowatch.wordpress.com/2010/03/01/the-pnas-problem-when-papers-arent-available-when-the-embargo-lifts/
the problem of predatory open access seems highly contained to just a few countries, where the academic evaluation practices strongly favor international publication, but without further quality checks.
So not really a problem, more a *symptom* of a problem.
Unlike many writings about the phenomenon, we believe that most authors are not necessarily tricked into publishing in predatory journals […] Hence we do not uncritically see the authors as unknowing victims.
Exactly. The authors have a need. Mockademic journals have evolved to fill that need. They’re part of the ecosystem.
I have some queries about some statements made in this study (queries made at RW since the DOI does not resolve at PubPeer):
“Authors paid an average article processing charge of 178 USD per article for articles typically published within 2 to 3 months of submission”
1) How do you know that the authors paid the APCs? Please provide a supplementary file with that proof.
2) If such “predatory” journals are so unreliable, then how can you trust the submission, acceptance and publication dates that they have published on manuscript PDF files?
(Abstract) “comparable to respectable (indexed by the Directory of Open Access Journals) open access journals”
Your control sample seems dodgy. Take, for example, the Archives of Biological Sciences [1]. Reforms at the DOAJ are only recent, and it has not screened all journal content. How can you claim your control to be so “superior” to the “predatory” OA journals in your sample population?
(p. 2) “The spectacular success of the leading megajournal, PLOS ONE, which publishes around 30,000 articles per year, shows that authors appreciate this model.” How did you measure “success” and “author appreciation”?
(p. 2) “Predatory publishers have caused a lot of negative publicity for OA journals using APCs, partly due to the spam email that they constantly send out to researchers and partly due to a number of scandals involving intentionally faulty manuscripts that have passed their quality control.” Sites like PubPeer and Retraction Watch show that peer review has failed miserably even in some of the highest ranking traditional and OA journals, and in journals published by publishers that you and Beall do not consider to be “predatory”. What is the difference, in your opinion, between failed quality control by a Beall-listed “predatory” OA publisher and failed quality control by a traditional STM publisher (print or OA)?
(p. 2) “and journals widely read by medical practitioners [10].” Reference [10] refers to a paper in a Finnish journal, Finnish Med J. Is this read widely by Finnish medical practitioners or by international medical practitioners?
(p. 2) “This indirectly makes it more difficult for serious OA journals to attract good manuscripts and get accepted to indexes such as Web of Science.” I don’t understand the logic of this statement. The previous sentence indicates that predatory OA journals are discussed in other journals. How does that make it more difficult for “serious” OA journals to succeed? What is a “serious” OA journal anyway? There is no definition provided.
(p. 2) “Since most of the reporting in the media about predatory OA has been concerned with individual cases and there have been very few scientific studies of the topic” How true is this claim?
(p. 2) “They do demonstrate that the peer review practices are often so deficient that just about any sort of paper could be accepted for publishing without revisions in many of these journals.” How do you know that peer review was not conducted in the papers in journals listed on the Beall lists? Did you contact all the authors to verify? Your comment was stated with reference to Phil Davis and John Bohannon. I wonder what you would have to say about a Taylor and Francis OA journal that provided automatic acceptance and then punished the author (i.e., me) with a ban for challenging the decision [2]? Why is this not a predatory practice?
(p. 3) I gave up critically evaluating the manuscript when I discovered, on page 3, that Beall’s list, which is flawed in so many ways, was used as the point of departure for this study. I am hoping that someone can give a critical post-publication analysis of the statistical analysis, but for me, the introduction has several flaws. Unfortunately, the DOI does not resolve at PubPeer yet, so I added my comments here at RW.
[1] http://retractionwatch.com/category/by-journal/archives-of-biological-sciences/
[2] http://retractionwatch.com/2015/09/24/biologist-banned-by-second-publisher/
I was really surprised reading the Shen/Björk paper. The numbers presented are way beyond my expectations. I tend to refer to the works of Walt Crawford [1] for fact-based insight into questionable journals. His work is not based on advanced statistical methods and stratified sampling, but on counting journal by journal and using huge Excel sheets.
Walt Crawford just posted an extensive comment about the Shen/Björk paper: http://walt.lishost.org/2015/1…

Walt’s basic messages are:
“Turns out 323,491 is the total volume of articles for 3.5 years (2011 through June 30, 2014). The annual total for 2013 was 115,698; the total for the first half of 2014 was 67,647, so it’s fair to extrapolate that the 2014 annual total would be under 150,000.”
“That’s a huge difference: not only is the article’s active-journal total more than twice as high as my own (non-extrapolated, based on a full survey) number, the article total is nearly three times as high. That shouldn’t be surprising: the article is based on extrapolations from a small number of journals in an extremely heterogeneous universe, and all the statistical formulae in the world don’t make that level of extrapolation reliable.”
Not being an expert in statistical methods, I would add that I find it quite bold to base the regional distribution of authors on 262 corresponding authors. That said, there is no doubt that authors from developing countries in general face obstacles to being published in Western journals (discrimination? bias? they are often not dealing with “international” issues, but rather with “regional, local” issues), and at the same time they are under severe pressure to publish (“publish or perish”). Thus being published in so-called “international” journals is extremely important for them, and is often rewarded not only with promotion but with cash as well!
I still think that blacklisting isn’t the right approach, and I believe that we at the DOAJ have the right set-up to marginalize the questionable publishers. We are in the process of handling the expected 9,000+ re-applications from journals listed before March 2014, which will be assessed against much stronger and much more detailed inclusion criteria. This is a huge job.
And OA journals are being launched in large numbers. Since March 2014 we have processed 6,000 applications, of which 2,700 have been rejected, 1,800 are in process, and 1,500 have been accepted. In the same period, 700 journals have been removed from the DOAJ.
We are indeed moving closer to having a clean white list. It is only a question of the resources available. We are getting more and more support from the community, but more is needed, and what John Bohannon wrote earlier when commenting on what the DOAJ is doing is spot on:

“[I]t’s a huge and important task that they’re undertaking with a tiny staff and very little funding! We all owe them several million Euros to do the job right.”

– http://retractionwatch.com/201… – so go here to support us: https://doaj.org/supportDoaj.
I do hope that the recently launched THINKCHECKSUBMIT.org campaign will be helpful in marginalizing the questionable publishers; the DOAJ is one of the founding organizations behind this campaign.

Lars Bjørnshauge
Managing Director, DOAJ
[1] Some links to Walt Crawford’s works on this:
http://citesandinsights.info/c… – Ethics and Access 1: The Sad Case of Jeffrey Beall
http://citesandinsights.info/c… – Ethics and Access 2: The So-Called Sting
http://citesandinsights.info/c… – Journals, “Journals” and Wannabes: Investigating The List
http://citesandinsights.info/c… – Journals and “Journals”: Taking a Deeper Look
All of Lars’ links are truncated and broken.
That’s unfortunate. Here are the full links:
Ethics and Access 1: The Sad Case of Jeffrey Beall http://citesandinsights.info/civ14i4.pdf
Ethics and Access 2: The So-Called Sting http://citesandinsights.info/civ14i5.pdf
Journals, “Journals” and Wannabes http://citesandinsights.info/civ14i7.pdf
Journals and “Journals”: Taking a Deeper Look http://citesandinsights.info/civ14i10.pdf
Wow, that was a great comment by Lars and a smack-down assessment by Walt. Lars gave great insight into the numbers at DOAJ while Walt told us point blank what was wrong with this paper. I think it’s time to contact BMC Medicine with a bit more of a formal request to look into this. My concerns above were fairly cosmetic, and only in the introduction. But Crawford’s insight is a critical assessment of the heart of the matter. I always say that the introduction serves as the window into the actual paper. I should applaud DOAJ for actually providing a response at RW. This is excellent. It means that DOAJ is interactive with the public on a discussion platform about issues that are central to science. I wish editors, editors-in-chief and publisher representatives would also step forward to address issues publicly, as this is the only way to advance science, and heal its problems.
Since Lars Bjørnshauge and Jaime A. Teixeira da Silva have been kind enough to point to my work, I won’t repeat what I said there, but will add a couple of notes.
To build good science, you need a good foundation. To study OA, I’ve turned to sources trying to identify serious OA publishing–specifically, the Directory of Open Access Journals.
The authors say “It would have taken a lot of effort to manually collect publication volumes”–and I can attest that it did, although I didn’t have to deal with 11,000 journals and “journals”; I only had to check 9,219 of the journals included on Beall’s list. I scare-quote “journals” because more than half of that total either didn’t exist, had never published any articles at all, weren’t OA or had published a tiny number of articles (no more than three per year).
Once was enough. I’ve completed and published a full survey of serious OA journals for 2011-2014 (that is, journals in DOAJ); links to the free excerpted version and the full paperback and ebook versions appear at the end of this comment.
“Predatory” is one of those terms that could be discussed endlessly; I don’t use the term. There are certainly questionable journals, including some from subscription publishers; there are also journals that meet the needs of their countries’ researchers but may cause Beall alarm.
As for the relatively low number of “predatory” journals in Latin America: SciELO and Redalyc do such good work in those areas that there’s probably not much room left for questionable publishers. There may, of course, be other reasons as well.
Free excerpted version of The Gold OA Landscape 2011-2014: http://citesandinsights.info/civ15i9on.pdf
Full paperback version: http://www.lulu.com/content/paperback-book/the-gold-oa-landscape-2011-2014/17264390
Full PDF site-licensed ebook: http://www.lulu.com/shop/walt-crawford/the-gold-oa-landscape-2011-2014/ebook/product-22353903.html
One comment on the Crawford blog caught my eye: “Shen and Björk ignored my work.”
Prof. Björk was very kind to contact me today. However, I have sad news for those in the scientific community who wanted to see some clarity and detailed responses to the concerns and queries posted above.
Björk states: “Our research has been carefully done using standard scientific techniques and has been peer reviewed by three substance editors and a statistical editor. We have no wish to engage in a possibly heated discussion within the OA community, particularly around the controversial subject of Beall’s list. Others are free to comment on our article and publish alternative results, we have explained our methods and reasoning quite carefully in the article itself and leave it there.”
I have encouraged Prof. Björk to respond to these criticisms, and to engage with the public, rather than stepping away from it.
Please feel free to comment at PubPeer where the DOI is now resolving:
https://pubpeer.com/publications/B2368F4090454E701E665926665436