As Retraction Watch recently reported, three of the top 10 philosophy journals in the highly influential Scopus database turned out to be fakes: Not only did these dubious journals manage to infiltrate the list, but they also rose to its top by trading citations. This news is embarrassing in itself, but it is hardly shocking. Our rankings-obsessed academic culture has proven time and again that it is prone to data manipulation. Rankings for both publications and institutions are routinely hacked by scholars, editors, and administrators who are ready to tweak or even falsify numbers as needed.
The problems with the Scopus journal rankings, however, run much deeper. The issue is not that inflated citation numbers have occasionally propelled impostor journals to the top of the list. Rather, at least in my own field of literary studies, the ranking makes no sense whatsoever: the list is full of journals that have no business being there at all because they belong to entirely different areas of scholarly enquiry, and even when the ranking gets the field right, it systematically places marginal publications close to the top. In what follows, I briefly break down the major ways the Scopus Literature and Literary Theory Ranking is not just skewed but downright nonsensical.
Scopus ranks journals based on SJR (SCImago Journal Rank), which it defines as a “measure of journal’s impact, influence or prestige.” The ranking includes a little over 1,000 titles, but for the purposes of this analysis I have focused mostly on the top 100. My main finding: the majority of journals in the upper part of the list quite simply do not belong to the fields of literature and literary theory.
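For readers unfamiliar with the metric: SJR is, broadly speaking, a PageRank-style score in which citations from prestigious journals are worth more than citations from obscure ones. The sketch below illustrates only that general idea, not SCImago’s actual formula (which also limits self-citations and normalizes by article counts, among other refinements); the journal names and citation counts are invented.

```python
# Minimal sketch of a PageRank-style "prestige" metric in the spirit of SJR.
# Illustrative only: not SCImago's actual formula.

# citations[a][b] = number of times journal a cites journal b (invented data)
citations = {
    "Journal A": {"Journal B": 30, "Journal C": 10},
    "Journal B": {"Journal A": 5, "Journal C": 5},
    "Journal C": {"Journal A": 1, "Journal B": 1},
}

journals = list(citations)
damping = 0.85
prestige = {j: 1.0 / len(journals) for j in journals}  # start from uniform scores

for _ in range(50):  # iterate until the scores stabilize
    new_prestige = {}
    for j in journals:
        # Each citing journal passes on a share of its own prestige,
        # proportional to how often it cites j. This is what makes a
        # citation from a prestigious journal worth more.
        incoming = sum(
            prestige[k] * citations[k].get(j, 0) / sum(citations[k].values())
            for k in journals
            if k != j  # disregard self-citations
        )
        new_prestige[j] = (1 - damping) / len(journals) + damping * incoming
    prestige = new_prestige

for journal, score in sorted(prestige.items(), key=lambda item: -item[1]):
    print(f"{journal}: {score:.3f}")
```

Because a journal’s score depends on who cites it and not just how often, a small group of journals trading citations can inflate all of their scores at once, which is precisely the kind of manipulation described above.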
The top 10 are particularly telling. Within that group is exactly one (!) journal – an annual publication dedicated to the work of the Spanish Golden Age dramatist Lope de Vega – that specializes in literature. Another two are wide-ranging humanities journals in which literary scholars do publish, although these, too, are highly idiosyncratic. The first is South Atlantic Quarterly, a long-standing journal dedicated to “urgent political, cultural, and social questions,” whose editorial board consists primarily of literary theorists, and the second is Journal of Cultural Analytics, a relatively new open-access publication “dedicated to the computational study of culture.”
The rest of the top 10 belong to translation studies (two), criminology (one), and writing studies (two); one is an interdisciplinary journal with a primary focus on sociology and gender studies, and one (Poetics) was founded as a literary journal in the 1970s and continues to address issues related to literature (among other topics), but it does so from a distinctly sociological perspective. Its editors are sociologists, who also make up the bulk of its contributors.
In short, Scopus’ list of top 10 literature journals includes one journal specializing in literature, two general humanities journals with some footing in literary studies, one journal that used to specialize in literature but no longer does, and six journals with absolutely no relation to literary studies.
| Rank | Journal Title | Actual Field |
| --- | --- | --- |
| 1 | Translation Spaces | Translation Studies |
| 2 | Criminology and Public Policy | Criminology |
| 3 | Anuario Lope de Vega | Literature |
| 4 | Journal of Writing Research | Writing Studies |
| 5 | Men and Masculinities | Sociology |
| 6 | Perspectives: Studies in Translation Theory and Practice | Translation Studies |
| 7 | Poetics | Sociology of Culture |
| 8 | Written Communication | Writing Studies |
| 9 | South Atlantic Quarterly | Interdisciplinary Humanities |
| 10 | Journal of Cultural Analytics | Digital Humanities |

Table 1. Top 10 journals in the Scopus Literature and Literary Theory Ranking
The situation improves only slightly when we analyze the top 100. In that cohort the number of journals belonging to the fields of literature and literary theory rises to somewhere between 35 and 45, depending on how one counts. Even with that improvement, however, well over half of the titles on the list do not belong there. They belong instead to a range of other fields, primarily language and linguistics, but also education, library science, anthropology, history, theology, and so on.
What accounts for this failure of the most basic classification? One might assume that the creators of the list have simply lumped literature and some adjacent fields together: Although the practice is deeply frustrating to scholars who share little in terms of publication venues, research methodologies, and departmental affiliations, the use of joint categories that encompass fields like literature, linguistics, and writing is not unprecedented.
But Scopus seems to be doing something different. The database has a separate category for Language and Linguistics, and although some journals may publish in both fields (none of the remotely good ones do), many of those included in the Literature category are in fact pure linguistics journals, full stop. Besides, even if overlapping classifications can account for some of these intruders, one cannot justify the presence of journals like Performance Measurement and Metrics or Men and Masculinities on the list. No methodological choice, however dubious, can explain this ranking. One has to assume that those who created it simply couldn’t be bothered to properly attend to the task at hand.
To make things worse, even when we remove the noise created by the inclusion of journals from other fields and focus only on actual literature journals, the ranking’s ability to identify quality publications does not seem to improve. A look at the top 100 reveals that publications with no discernible international footprint systematically outrank highly selective, world-leading venues. To use just one startling example, the Malaysian pay-to-play publication 3L: Language, Linguistics, Literature (40th overall) far outranks almost every major literature journal published in the US and Europe, including such cutting-edge publications as PMLA (59th), English Literary History (110th), and Diacritics (415th).
Many other interesting quirks would require further scrutiny, such as the curious overrepresentation of journals published in Slovenian, a language with barely 2 million speakers, and the equally curious overrepresentation of journals focusing on the Spanish Golden Age. That period is an important one in early modern literary history, but surely not one that could plausibly attract more high-quality scholarship than the entire fields of German and French literary studies, Comparative Literature, and Medieval Studies, all of which are completely absent from the top 100. And yet, here we are.
The Scopus ranking published under the rubric of “Literature and Literary Theory” is many things, but a meaningful ranking of journals in the field of literary studies it is not. It contains scores of journals that have absolutely nothing to do with that field, and it routinely awards very high rankings to publications with no international relevance whatsoever; on what criteria, it is impossible to say. Unsurprisingly, most of the publications ranked highly in Scopus are not good enough to be included in Clarivate’s much more selective Arts & Humanities Citation Index. Some have made it into the less selective Emerging Sources Citation Index, but most are not picked up by Web of Science at all.
I wish I could offer some more constructive criticism, but such criticism would imply that the Scopus ranking, such as it is, has some redeeming features, which it does not. How do you improve a list that ranks Criminology and Public Policy as one of the top literature journals? That would be like taking seriously a list that treats Shakespeare Quarterly as a top journal in the field of chemical engineering. It would probably be easier to just start from scratch.
Of course, the biggest problem with Scopus is that, despite its profound unseriousness (at least in my field), it is generally taken seriously and produces real-world consequences. A tenure case at a major US research university is unlikely to hinge on a journal’s standing in Scopus, but in many parts of the world, especially in developing countries, administrators routinely rely on Scopus rankings as a proxy measure of research quality. And yet, in its existing form the ranking is not only an utterly inadequate tool for achieving that goal but may well be counterproductive given the sheer number of marginal journals on which it confers the veneer of respectability.
I don’t know what the situation in the sciences is, but in the humanities, Scopus must get its act together. Until then, its rankings will not be a serious indicator of journal quality.
Aleksandar Stević is assistant professor of English at Lingnan University in Hong Kong.
This is a narrow view of literature as the study of literary works (rather than texts, which might include cultural artefacts and phenomena). If one does, however, see cultural studies as part of literature, then, say, criminology and public policy are part of it.
In this case, why are the major sociology and anthropology journals not in the category?
It is not good even for the sciences.
But why even try to categorize journals based on a discipline?
Personally, despite all the problems, I think Google Scholar still captures the wider “impact” best. In your field, for instance, it might record references in theses, popular magazines, professional literature outlets, museum pieces, and whatnot. In many cases such references convey more “impact” than references in scholarly journals. Though gaming and other bad practices are still present with Google Scholar as well.
And, of course, it is not good that any single company controls the citation business. Therefore, I am quite pleased with the progress that Semantic Scholar has recently made.
When I thought about this minefield recently, I came to the conclusion that the best available option would be for something like Crossref to record the citations. They already have the metadata, after all. It would also be good for science in the long run if a non-profit were in charge.
Crossref recently revealed that it had itself been the target of data manipulation, because nonexistent citations can be listed in submitted papers.
Thank you very much for the valuable information regarding Scopus publication. But please suggest a list of journals under Economics in which to publish research articles. It is a must for candidates to publish two research papers before they submit their Ph.D. thesis. Send your immediate response.
A postmortem of research published in Scopus journals might well prove even worse.
The post focuses, at great length, on the misplaced listing of journals by field. But that is not the major use of Scopus, at least in the social sciences. My own journal and many others receive a score. It fluctuates yearly, but it does have some importance, since it is a quantitative analysis of citations. Potential authors checking where to publish do look at these scores; by contrast, I doubt the field allocated to a journal really has much importance. Our journal is multidisciplinary and appears in three categories/fields, but none of them adequately describes what the journal does or where its authorship comes from. Therefore the score of the journal has importance, but the listing by field of study really doesn’t. One solution might be for Scopus to stop publishing such listings. Remember that the only credible alternative, Web of Science, is elitist in its choice of member journals, is not open to the public, and often produces wildly different results based on a narrower citation analysis.
The problem is that it reveals a more fundamental error in how Scopus classifies journals. Many of its bibliometric functions rely on correct subject classification, such as the field-weighting used in some of the metrics.
If journals are classified into incorrect subject areas, how can it accurately calculate field-weighted results? Furthermore, a journal shown in the incorrect subject area may skew citation results more broadly. It’s a concern that Elsevier should look into more thoroughly.
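To make the concern concrete: a field-weighted metric divides observed citations by the citation rate expected for the assigned field, so a wrong field assignment flips the verdict. Below is a deliberately simplified, hypothetical sketch; the baseline rates are invented, and this is not Elsevier’s actual field-weighted formula.

```python
# Hypothetical illustration of how misclassification skews a field-weighted
# metric. Invented numbers; not Elsevier's actual FWCI formula.

# Assumed world-average citations per paper, by field (invented baselines).
field_baseline = {
    "Literature and Literary Theory": 0.8,  # low-citation field
    "Criminology": 4.0,                     # high-citation field
}

def field_weighted_impact(citations: int, field: str) -> float:
    """Ratio of observed citations to the field's expected citations."""
    return citations / field_baseline[field]

paper_citations = 3
for field in field_baseline:
    print(f"{field}: {field_weighted_impact(paper_citations, field):.2f}")

# Classified under Literature, the paper scores 3.75 (far above average);
# classified under Criminology, the same paper scores 0.75 (below average).
```

The same three citations look outstanding against a low-citation baseline and mediocre against a high-citation one, which is exactly the distortion that misclassified journals feed back into the metrics.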
I have retired from serving as a reference librarian at a major university but when I was looking at Scopus’ ridiculous subject groupings some years ago, I was stunned to discover that the Journal of Otolaryngology was classified as a social science journal. There were many other anomalous category choices. I never got a straight answer from any of the SCOPUS folks at the ALA Exhibits. In one of my many attempts to dislodge this journal from the social science category I asked if they were running a grocery store would they put cat food in the meat department because it contained meat. I received a shrug.
I want the scholarly community to realize that, if they choose to review for independent budding publishers with good online platforms, most of the hassles that scholars face in trying to publish their work will be a thing of the past. Let us see Scopus as one option among many, instead of the current approach to Scopus, which is stifling the development of rapid digital publishing of research. The logic is this: even if it means requesting a fee, agree to review for budding publishers unless you lack the expertise. Once renowned professors are no longer sitting on a handful of journals as editors, the monopoly that drives the cost of publishing high and makes scholars scramble for the few journals thought to be Scopus-indexed will be over. Scholars should just come up with criteria to identify journals worth working for as editors, and then we are good to go.
Is rank equivalent to real impact?
To be honest, I have rarely read an article on Scopus that was as unhelpful as this one. As I have shown in my recent book, Bibliometry from a Global Perspective, Scopus is far better at reproducing, for example, the enormous contributions of the political science Skytte Prize winners than comparable databases such as the Web of Knowledge. I can only advise readers to ignore this uninformed article.
I would be very grateful if you could point me toward what exactly is uninformed about the article. Is the article wrong in claiming that the list in literary studies includes scores of journals that have absolutely nothing to do with that field? Or do you perhaps think that is not a problem?
Can you indicate what was uninformed in this article?
I looked up your book, published by Nova Science Publishers, which can be considered a vanity press at best. If anyone is uninformed it’s you.
I can only wonder how someone can use the anonymity of the internet to spread allegations that damage the reputation of an academic publisher. I would invite Mr Lucas to read my relevant articles in journals such as the Journal of Scholarly Publishing, the Journal of Globalisation Studies and “Bibliotheksdienst”. There is a very broad discussion today about measuring the reputation of an academic publisher based on the presence of a publisher’s works in international libraries, citations in journals and the use of a publisher’s products in “open syllabi”. As an author who has published with leading think tanks and journals, among them the Jerusalem Center for Public Affairs and the Jewish Political Studies Review, I don’t need to be told by an anonymous Mr Lucas that I have published with a vanity press.
I might suggest that you’re defending Scopus from an accusation that isn’t being made here. Does your work concentrate on the contents of the subject categories, which is the aim here, or is it concerned with the calculation of impact factor? Because Scopus could logically be both the best system for impact factor and badly organised in relation to categories.
Hopefully the academic work you’re so keen to publicise is a bit more measured than your responses here!
With all respect, this is a very narrow-minded evaluation. I mean, the title of the article simply says “Scopus is broken”!!! Well, it is not broken. Aleksandar just has a narrow mindset on how to view the ranking and categorization of non-STEM journals.
The SJR ranking is much more complex than what is described in the article. And Scopus evaluation does not rely solely on SJR as described; that is a misleading and false accusation. I would recommend that Aleksandar and the readers take more time to look at how Scopus evaluates, indexes, and continuously vets journals. It does not look to me like there are black boxes or secrets anymore about how publishers decide which titles are indexed in major databases. I think we need to read more and investigate more before we take a crumb from that slice of cheese and decide it is rotten!!! It might just be blue cheese.
I think it would be more helpful if you could point out some specific inaccuracy in the article. Is it not true that the literature list includes a large number of titles unrelated to that field? Is it not the case that marginal publications are highly ranked on that list? If we should not rely solely on SJR, can you provide an example of where Scopus offers some more accurate ranking of literature journals?
I think part of the problem is that they code things too quickly. I was helping a physics professor get stats on a physics journal that was inadvertently coded as physical therapy (or maybe it was the other way around, or something like that). I can see that someone started typing “phys…” and picked the wrong category. When I pointed it out to them, they fixed it relatively quickly. But they do need to conduct audits regularly.