Nowadays, there are many ways to access a paper — on the publisher’s website, on MEDLINE, PubMed, Web of Science, Scopus, and other outlets. So when the publisher retracts a paper, do these outlets consistently mark it as such? And if they don’t, what’s the impact? Researchers Caitlin Bakker and Amy Riegelman at the University of Minnesota surveyed more than one hundred retractions in mental health research to try to get at some answers, and published their findings in the Journal of Librarianship and Scholarly Communication. We spoke to Bakker about the potential harm to patients when clinicians don’t receive consistent notifications about retracted data.
Retraction Watch: You note: “Of the 144 articles studied, only 10 were represented as being retracted across all resources through which they were available. There was no platform that consistently met or failed to meet all of [the Committee on Publication Ethics (COPE)’s] guidelines.” Can you say more about these findings, and the challenges they may pose?
Caitlin Bakker: An individual could choose any number of platforms through which to access an article, depending on their discipline, institutional affiliations and associated subscriptions, and personal preferences. None of the platforms we studied met all of COPE’s guidelines for the articles in our sample. Platforms’ failure to identify retractions is problematic given the expectations of scholars and of organizations like Cochrane, which is considered by many to be the gold standard in systematic reviews and other knowledge synthesis activities. Specifically, standard C48, described in section 6.4.10 of the Cochrane Handbook, makes it mandatory for Cochrane review authors to “[e]xamine relevant retraction statements and errata for information” and, where appropriate, to exclude flawed studies. The Handbook advises: “Care should be taken to ensure that this information is retrieved in all database searches by downloading the appropriate fields, together with the citation data.” Our research suggests that this advice may be difficult to follow in practice, because databases unfortunately do not always identify retracted articles in the appropriate fields.
RW: You used our site to retrieve records of recent retractions, but the notices we cover on the website are not comprehensive — given that hundreds are issued each year, we are unable to cover all of them, and often focus on those that appear somewhat unusual or problematic. Are you concerned this may skew your findings one way or another?
CB: Our goal in developing the sample was to identify publications that we could verify had been retracted, rather than to develop a comprehensive picture of retracted publications. One of the reasons we chose to focus on the mental health literature was the diversity of resources and researchers involved. The articles in our sample spanned disciplines ranging from social psychology to neuroscience, and the resources were equally broad. Previous research has often drawn its sample from the resources themselves, asking, for example, “how many retraction notices are in PubMed?” Because we were interested in seeing whether and how retracted publications were represented as such across platforms, we did not want to draw our sample from the platforms themselves. We did not consider the substance of the retraction notice or the rationale behind the retraction, but simply whether the retracted status of the publication was clearly communicated through each platform.
RW: You note that the inconsistent labeling of retractions in mental health research may have consequences for patient care. What might those be?
CB: One of the examples in our article concerns a pharmacotherapy for treating alcohol dependence in adolescents. In that case, we see clearly that the stakes can be quite high: a vulnerable population dealing with an issue that can have significant medical, social and psychological consequences would be treated with medication. A researcher or practitioner may use this evidence when considering treatment options or research projects. If that article is not consistently labelled as retracted, a user could retrieve it, incorporate it into their decision-making process, and believe that they are basing treatment on the best available evidence.
RW: As you note, we are developing a database of retractions. You write that “databases such as these have been criticized as ‘impractical’ in that they require additional searching on the part of researchers in addition to the initial identification of articles.” This is a fair point; as you may know, we plan to offer an API that integrates our data into commonly used bibliographic software. Would that alleviate your concerns?
CB: We are strongly in favour of interoperability between systems. The primary concern regarding databases was whether those additional steps would be taken by the individual researcher, although we also feel that placing the onus on the end-user to verify publication data rather than on the data providers is problematic. We welcome any tools or processes which can facilitate correcting the scholarly record in a timely and transparent fashion.
RW: You note: “Libraries, which provide access to and training in these resources, have a responsibility to raise awareness of these inconsistencies and to advocate for more timely and accurate metadata.” Why do you think libraries should take the lead here?
CB: Libraries are deeply committed to issues surrounding information literacy and critical appraisal of research. We educate students to effectively read and apply scholarly information, and we support our faculty and researchers not only in meeting their information needs, but also by providing expertise on publishing issues, including predatory or questionable publishers, copyright and authors’ rights, and research impact metrics. Libraries have been committed to and engaged in helping researchers navigate the publishing landscape for decades. Beyond this, we also have an in-depth knowledge of the platforms in question and the ways in which data are ingested and presented, and oftentimes are the units on campus which negotiate licensing for and maintain access to these resources.
Isn’t this what CrossRef’s CrossMark does? If all of those disparate platforms subscribed to the service, wouldn’t they be automatically updated? (At least, that’s my understanding of it.) https://www.crossref.org/services/crossmark/
It’s what CrossMark could do, as we’ve noted since 2011. But CrossMark relies on journals and publishers to use it effectively, and CrossMark does not contain all retractions.