How well do databases flag retracted articles?
There has been a lot of interest recently in the quality of retraction notices and notifications, including new guidelines from the National Information Standards Organization (NISO; our Ivan Oransky was a member of the committee) and a new study that Ivan and our Alison Abritis joined.
In another new paper, “Identification of Retracted Publications and Completeness of Retraction Notices in Public Health,” a group of researchers set out to study “how clearly and consistently retracted publications in public health are being presented to researchers.”
Spoiler alert: Not very.
We asked corresponding author Caitlin Bakker, of the University of Regina — who also chaired the NISO committee — some questions about the findings and their implications.
You looked at 441 retractions in the public health literature, sourced from the Retraction Watch Database in 2023. How often were these marked as retracted in the various places researchers might look for the status of such papers?
We searched for each of the 441 publications in 11 resources, including PubMed, Web of Science, Scopus, and other databases. We found more than 2,800 records for these retracted publications across those resources, and less than 50% of those records identified that the publication had been retracted. We also found that a publication being marked as retracted in one database didn’t mean it would be marked as retracted in others. Less than 5% of the publications were marked as retracted in every resource through which they were available.
What are the implications of your findings?
As a librarian, one thing I care about very deeply is information literacy: the set of skills that enables a person to recognize when they have a knowledge gap and need information, and then to find, access, and use that information effectively to meet their needs and create new knowledge. For many years, I’ve taught students about the importance of using trusted, scholarly content, like peer-reviewed articles and scholarly databases, and how to use those resources effectively to find the best available evidence to address their questions. I’ve always tried to reinforce to students why it’s important to think critically about where they’re finding information and who is producing and publishing that material, not only so that they can address a specific research question, but so they can be information literate.
But, all of those activities and skills rely on accurate information being provided through those resources, and unmarked retractions pose significant challenges to information literacy. If the fact that the retraction occurred isn’t discoverable, that’s an incredibly important piece of context that a reader doesn’t have when deciding if and how they want to use the publication. A reader who is unaware that an article has been retracted would be more likely to use that article in their research and practice, and perpetuate its misinformation.
What changes would you recommend for how authors check the status of papers they’re planning to cite? What changes should publishers consider making?
There are tools available that can assist authors. Some libraries, including my own, have access to BrowZine and LibKey, which alert readers to an article’s retracted status directly from the library’s search interface, so authors know an article has been retracted before they see the full text. Several citation managers, including EndNote and Zotero, also have functionality that will flag retracted articles, so if you happen to save a paper and it’s later retracted, you’ll still be alerted before you cite it.
However, despite these tools, there’s no one foolproof strategy authors can use that will always alert them to a retraction. Authors instead need to use a variety of strategies and tools, which can be a challenge, given that these are additional steps that need to be taken in the research and publication process.
In my opinion, there needs to be greater transparency and consistency in retraction-related metadata, both from publishers and from vendors such as aggregators. In our research, we found that even within a single database, there could be considerable variation in where and how an item was marked as retracted. For one paper, the flag might appear in the title; for another, in the abstract; and for another, in a completely different field. From an outsider’s perspective, it would appear that some databases may not have consistent policies about how, when and where they mark articles as retracted.
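(For readers who want a programmatic check alongside those tools, here is a minimal sketch that uses the Crossref REST API’s documented `updates` filter to look for retraction notices and other editorial updates registered against a DOI. The endpoint and filter exist, but the response fields we read and the example DOI are assumptions made for illustration, and coverage depends on publishers having deposited the update metadata with Crossref, which is exactly the kind of inconsistency the study describes.)

```python
# Minimal sketch (not from the study): ask Crossref for any records that
# declare themselves editorial updates -- retractions, corrections, errata --
# to a given DOI. Assumes the documented `updates` filter and the
# `update-to` field on notice records; treat as illustrative only.
import requests


def check_for_updates(doi: str) -> list[dict]:
    """Return Crossref records that identify themselves as updates to `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=30,
    )
    resp.raise_for_status()
    notices = resp.json()["message"]["items"]

    results = []
    for notice in notices:
        for update in notice.get("update-to", []):
            results.append(
                {
                    "update_type": update.get("type"),  # e.g. "retraction"
                    "notice_doi": notice.get("DOI"),
                    "target_doi": update.get("DOI"),
                }
            )
    return results


if __name__ == "__main__":
    # Hypothetical DOI, for illustration only.
    for hit in check_for_updates("10.1234/example-doi"):
        print(hit)
```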
Retraction notices often refer to COPE guidelines, and publishers have at least tacitly agreed to uphold them by joining COPE. But you found that fewer than half of the retraction notices in the study met COPE requirements, and even fewer met proposed Retraction Watch criteria. Does that surprise you? And what, if anything, should be done about it?
I was disappointed but not surprised by the number of notices that didn’t meet requirements. What I found particularly unfortunate was that no requirement was met by all notices. Even requirements that would appear to be relatively easy to meet, such as not paywalling the notice and identifying it as a retraction, still had a level of non-compliance. It’s difficult to say what should be done without knowing the reasons for non-compliance. It would be wonderful to hear from publishers what the barriers are to creating more robust notices. There have been some proposals, such as standardized forms for retractions that would include all necessary fields, that could be very beneficial in creating more transparent, informative notices.
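(As a rough illustration of what such a standardized form could capture, here is a hypothetical sketch of a machine-readable retraction notice. The field names are invented for this example and loosely informed by COPE’s retraction guidance; they are not the NISO recommendation or a proposal from the study.)

```python
# Hypothetical sketch only: one way a standardized, machine-readable
# retraction notice could be structured. All field names and values are
# illustrative, loosely informed by COPE's retraction guidance.
retraction_notice = {
    "notice_doi": "10.1234/retraction-notice-example",  # hypothetical DOI
    "retracted_article": {
        "doi": "10.1234/original-article-example",       # hypothetical DOI
        "title": "Example article title",
        "authors": ["A. Author", "B. Author"],
    },
    "retracted_by": "journal editor",   # e.g. authors, editor, publisher
    "reason": "data fabrication",       # stated reason for the retraction
    "honest_error": False,              # misconduct vs. honest error
    "date_of_retraction": "2023-01-01",
    "open_access": True,                # notice is not behind a paywall
    "labeled_as_retraction": True,      # notice is clearly identified as such
}

print(retraction_notice["reason"])
```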
You note that the study only looked at a single point in time. Based on your previous work on this subject, however, have you seen changes in behavior and practice?
I began working in this area about eight years ago, and looking purely at the problem from a search and discovery perspective, I wouldn’t say that I’ve seen meaningful improvement. However, what I have seen change is the level of interest in tackling the issue. I’ve been especially pleased to see enthusiasm for solutions that would span the entire scholarly publishing industry, and that bring in expertise from librarians, repository managers, researchers and people in other related fields. This problem is not localized to any one database or discipline, and so the solutions can’t be localized to any one publisher or industry.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, subscribe to our free daily digest or paid weekly update, follow us on Twitter, like us on Facebook, or add us to your RSS reader. If you find a retraction that’s not in The Retraction Watch Database, you can let us know here. For comments or feedback, email us at [email protected].
Have you ever considered looking into the reliability of those who call for retractions? This isn’t a one-way street; we should be very careful about assuming that those calling for retractions are acting solely for altruistic reasons.
As a former editor of a scholarly journal, it is crystal clear to me what the problem is. First, promotions and salaries are based on publication metrics. Hence, you are being judged on two numbers: authorship and citation frequency. As a consequence, plagiarism, fabrication, self-citation and ghost authorship all occur. Editors and reviewers are the gatekeepers. If they fail, the system fails. You can’t rely on the honesty and integrity of authors.