Observers describe the quantity of research information now produced variously as “torrent,” “overload,” “proliferation,” or the like. Technological advances in computing and telecommunication have helped us keep up, to an extent. But, I would argue, scholarly and journalistic ethics have not kept pace.
As a case in point, consider the journal article literature review. Its function is twofold: to specify where new information fits within the context of what is already known; and to avoid unknowingly duplicating research projects the public has already paid for. Paradoxically, however, information proliferation may discourage honest and accurate literature reviews. Research information accumulates, which increases the time required for conducting a thorough literature review, which increases the incentive to avoid it.
Most dismissive reviews that I have encountered are raw declarations. A scholar, pundit, or journalist simply declares that no research on a topic exists (or that it couldn't be any good if it did). No mention is made of how or where (or even whether) they searched. Certain themes appear over and over, such as:
- “Few empirical studies exist …”
- “… a better research base is needed …”
- “That assumption is supported by only a very few studies.”
- “There is no well-developed body of evidence …”
- “We know very little about …”
- “This is the first study to …”
- “Our paper represents the first systematic attempt …”
- “Because of the small sample size and the paucity of research in this topic …”
- “… the evidence is limited and mixed …”
- “Few studies have rigorously addressed this question.”
- “The debate consists primarily of opinion and speculation …”
- “There is little good evidence … the research reported here fills this gap.”
- “Unfortunately, there has been surprisingly little research on …”
- “We know little so far about how these systems work.”
The root of the problem: Many editors do not review literature reviews for accuracy. As a result, an author can write anything about earlier work on a topic — including misrepresentations of the work of rivals.
One finds these unsupported declarations in top-ranked journals with the most exacting standards for the quality of a new article's analysis. These journals may apply rigorous scrutiny to each new piece of research, yet publish without checking literature reviews that dismiss a world's worth, and centuries' worth, of past research.
Occasionally, a scholar might weakly qualify a dismissive review with a statement like "to my knowledge no research exists" on a topic. Such a statement implies that it is the responsibility of the previous research to make itself known to the scholar. It is the equivalent of "I didn't look for it, but I would have known if it were there."
In my own decades-long experience trying to master the literature on a single topic — the effect of testing on student achievement — I have reluctantly come to realize that it is impossible, at least for a single individual with a limited budget. I keep finding more studies, even century-old studies, with no end in sight. I have found thousands thus far. I was aware of a few relevant studies before I started searching in earnest, but they represent a minuscule proportion of a research literature that many scholars have declared nonexistent.
When dismissive reviewers band together, they form a “citation cartel” (and practice what is variously called “citation stacking,” “citation amnesia,” or the like). They may cite each other profusely while declaring the work of others outside their group nonexistent or no good.
Dismissive reviews carry several advantages over engaging the wider research literature. A scholar:
- saves much time and avoids the tedium of reading the research literature.
- adds to his or her citation totals, or those of a citation cartel, while not adding to rivals’.
- gives readers no help in finding rival evidence (by not even citing it).
- establishes (false) bona fides as an “expert” on the topic (as experts are expected to know the research literature).
- attracts more attention by allegedly being “first,” “original,” “a pioneer.”
- increases the likelihood of press coverage for the same reason.
- increases prospects for research grant funding to “fill knowledge gaps.”
These benefits accrue to individual scholars or groups of scholars. Meanwhile, the costs of dismissive reviews accrue to society as a whole:
- Many people, including other scholars, journalists, and policymakers, believe them and discontinue their efforts to look for the dismissed information.
- The dismissed information is not considered in policy discussions.
- Foundations and governments pay again for work that has already been done.
- Public policies are skewed.
- Genuine expertise is supplanted by false expertise and celebrity.
A scholar best serves the common good through the deliberate accumulation of knowledge, and by taking the time to conduct an honest, thorough literature review. As Isaac Newton so famously admitted, “If I have seen further, it is by standing on the shoulders of giants.”
But neither professional responsibility nor moral satisfaction will win one tenure in the current climate of citation metrics.
To give the problem some context, and a benchmark for comparison, a single dismissive review is likely to cause much more harm than a single case of plagiarism. A plagiarist misrepresents his or her own work and that of one other scholar (the one copied); the dismissive reviewer may misrepresent the work of hundreds or thousands.
You can get a feel for the scale of the problem yourself with some internet searching. Try exact phrases, such as "this is the first study," "little research," "few studies," and various similar word combinations.
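One can also automate the exercise. The following is a minimal sketch, in Python, that counts PubMed records containing each phrase via NCBI's public E-utilities API; the phrase list is illustrative, and PubMed is merely one convenient index, since any database supporting exact-phrase search would serve the same purpose.

```python
# Count PubMed records containing each dismissive-review phrase,
# using NCBI's public E-utilities "esearch" endpoint.
import json
import time
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative phrases drawn from the list above.
PHRASES = ["this is the first study", "little research", "few studies"]

for phrase in PHRASES:
    # Quoting the phrase and tagging the field forces an exact-phrase
    # match against titles and abstracts.
    term = f'"{phrase}"[Title/Abstract]'
    url = ESEARCH + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )
    with urllib.request.urlopen(url) as resp:
        count = json.load(resp)["esearchresult"]["count"]
    print(f"{phrase!r}: {count} PubMed records")
    time.sleep(0.4)  # stay under NCBI's ~3 requests/second courtesy limit
```

The resulting counts say nothing about whether any individual claim is false, of course; they only indicate how routinely such phrases appear in the literature.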
In most cases, dismissive reviews do not portend the actual deletion of past research. The internet cascades a superabundance of information second by second. Libraries and computer storage facilities archive much of it. If one really wishes to find a particular piece of information, a persistent search should eventually unveil it. But that takes time, and it requires both expertise in the field (a familiarity with its labels, categories, and vocabulary) and expertise in searching, a skill that seems to be dying out as internet search engines make searching seem effortless.
Internet search engines rank their results not in order of importance or quality but, rather, in order of popularity. That popularity can be purchased via search engine optimization efforts. Those with the resources to achieve first-page rankings for their research may also have the resources to bury rivals’ research in the back pages, where few ever look.
Editors should not continue to ignore the problem, which is an ethical one. If they require literature reviews in manuscripts, they should check them for accuracy. If they lack the resources to scrutinize a literature review, they should not publish it; they can publish the parts of the manuscript they are able to check and leave out the rest. As for the dismissive reviewers and citation cartels, well, I'd be interested to know what others think we should do about them.
Richard P. Phelps is founder and editor of the Nonpartisan Education Review.
Great article – it's certainly a difficult problem to tackle and probably endemic in all fields. I find the signal-to-noise ratio when reading reviews very low – occasionally there is real insight from someone who knows the field, but often they're just derivative lists of references seemingly written for the sake of it…
As for authors failing to cite work that they should, it’s a bit like listening for the dog that didn’t bark. I wonder if an algorithm could examine the distribution of citation counts in the works cited. To me an ideal review should synthesise the core knowledge of a field and bring in ideas from the roads not (yet) taken. As a proxy measure, a review which cites only very highly cited papers is unlikely to be offering much new insight, and conversely a review which cites only lowly cited work may be biased or selective.
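To make the idea concrete, here is a rough sketch of such a screen in Python. The citation counts are made-up placeholders and the thresholds are arbitrary assumptions; in practice the counts might come from a bibliometric source such as Crossref or OpenAlex.

```python
# Summarise the distribution of citation counts among a review's
# references and flag lists that sit entirely at either extreme.
from statistics import median

def gini(counts: list[int]) -> float:
    """Gini coefficient: 0 = citations evenly spread across the works
    cited, approaching 1 = one work attracts nearly all the attention."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def profile(counts: list[int], hi: int = 500, lo: int = 10) -> str:
    """Crude heuristic flags; the hi/lo thresholds are arbitrary."""
    if all(c >= hi for c in counts):
        return "cites only blockbusters: possibly derivative"
    if all(c <= lo for c in counts):
        return "cites only obscure work: possibly selective"
    return "mixed citation profile"

refs = [1200, 850, 15, 3, 640, 27, 0, 410]  # hypothetical counts
print(f"median={median(refs)}, gini={gini(refs):.2f} -> {profile(refs)}")
```

Such a screen could not judge any single review on its own, but it might usefully rank candidates for human attention.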
Perhaps journals could also cut down on the expectation that experimental papers include literature reviews in the introduction and discussion sections. In my experience this is the most likely place to find selective/dismissive citation or dubious interpretations of past work which 'set the scene' for the reported experiments… a happy result of this shift would be more space available for authors to describe their methods.
How else are we supposed, as researchers, to contextualise and make the case for our paper without discussing all the literature we know of and explicitly pointing out where we feel it is deficient and improved by our own work?
Richard does not seem to be discouraging you from writing a review as you describe. But the critical point is "all the literature we know". If one of your arguments is that you are filling a hole in the literature, it behooves you to point out the effort you made in ensuring that hole was not filled previously. It could be a few sentences in a paper, or even a footnote with internet search strings. And reviewers shouldn't just accept, on the authors' say-so, that no study base exists. I'm sure that, in your own research, you've found instances where people claimed there was no prior work on something when you knew that there was.
There is an ongoing trend to avoid truly specializing in anything, because a true specialist (which includes having read and understood all the relevant literature – a mammoth task nowadays, given the sheer amount of garbage that circulates under the name of scientific article) entertains little hope that her/his field will continue to be considered in future visionary, gigantic, and usually misguided funding plans. Better to adopt a sort of holistic attitude by which every researcher may seem to hold some expertise in every possible research funding venue. True expertise is quickly dying: why should it be cultivated, when a "respectable" research record can more easily be garnered by the means listed above?
There was a tipping point when reviews gradually became self-serving monologues – it happened about the same time citation impact rose in popularity. Journals enhanced their "citation impact score" by increasing the number of reviews per issue, and they used this to sell copy. Investigators learned that a healthy smattering of review articles could increase their own citation profile in three complementary ways: it increased their personal score (H-index and the like) based on the review article itself; it allowed copious self-citation (another career-enhancing strategy); and it allowed them to write the competition (and often the pioneers of the field) out of history. What is the reward for taking the time to thoroughly and critically review the literature and write a fair and balanced review article? Oh, there isn't one. Scientific publishing needs a complete overhaul of its incentivisation strategy.
Maybe this is the cynic in me, but I have never read literature reviews as anything other than opinion pieces. Even those of us who strive to seek balance and representation when writing reviews will find it impossible to avoid making highly subjective additions and omissions. But yes, there is a point where reviews become embarrassing expressions of self-admiration, in part due to the reasons Tom mentioned.
This highlights an interesting double-standard in science. A researcher who misreports the results of an experiment is said to have committed fraud and is frequently sanctioned, but I have never heard of a researcher misreporting the literature and facing any consequences.
I think the problem is summarised well by Tom Curran and Adede. Scientists have families to feed and want to be successful in their careers. There needs to be motivation to write accurate literature reviews. The current system does not provide that motivation and there is no sanction. Until this is looked upon as unprofessional and called out as such, it will inevitably continue.
I think that is unduly cynical. For one thing, many scientists view their articles like works of art and want them to be of as high a standard as possible; the self-satisfaction is the motivation. But in any case, whether or not the referees are critical enough, scientists working in a field can recognise a thorough or insightful review from an inadequate or biased one. The good reviews are thus going to be the ones cited most often, so that is some objective incentive to write them.
It seems that "literature review" is itself a specialty apart from research and should have its own college. Where a paper fits within the framework of existing knowledge is probably as important as, or more important than, the paper itself. This is a database problem that needs GitHub-style contribution.
This article and discussion seem to be talking about two different sorts of ‘review’. To be sure, a full review article should be thorough and as unbiased as possible (though I share Michael B’s cynicism), but it is usually in the “literature review” introduction of experimental papers that you find the dismissive review comments that the article critiques. I think the fundamental motivation for these is simply that editors and reviewers demand that studies are novel, and this itself is actually the problem. I don’t think it is undesirable for society to “pay again for work that has already been done” – this is independent replication, and is sorely lacking in most fields at the moment.
Excellent point: we should value independent replication more. Dismissive reviews are problematic in this regard because they declare the previous research nonexistent, thus negating the possible benefit from comparing the current study with previous studies. If replication were valued more highly, however, perhaps there would be more incentive to find and cite the previous research.
Just found this in the Pediatric Dermatology author guide:
https://onlinelibrary.wiley.com/page/journal/15251470/homepage/forauthors.html
"Firstness: Claims of being the first case report of its kind should be avoided unless a detailed search methodology is included. Please include rationale for claim and search methodology in the letter to the editor."
… it can be done.