Many journals are adopting a recently developed mechanism for correcting the scientific record known as “retract and replace,” usually employed when the original paper has been affected by honest errors. But if an article is retracted and replaced, can readers always tell? To find out, Ana Marušić at the University of Split School of Medicine in Croatia and her colleagues reviewed 29 “Corrected and Republished Articles” issued between January 2015 and December 2016, noting how they were marked by Web of Science, Scopus, and the journals themselves. They report their findings today in The Lancet.
Retraction Watch: You found some inconsistencies in how articles are handled by journals and other databases. What were the most surprising and/or troubling to you?
Ana Marušić: The most troubling were a few cases of articles that were retracted because of an error and for which a corrected version was published. The journals published an accompanying notice about the reasons for retraction and republication, and some even published the article with the changes indicated. However, they kept the same DOI as for the retracted article. According to the indexing specialists, this is not the proper way of marking different versions of the published record. Therefore, the National Library of Medicine (NLM) considers such articles retracted, rather than “corrected and republished articles,” which is one of the standard tags in PubMed. This means that, when you search for these articles, you will see them as “retracted articles” (written on a big pink banner at the top of the page), although the recorded version presents a valid piece of research.
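The tags Marušić describes are also visible in PubMed’s machine-readable records. Below is a minimal sketch, assuming the public NCBI E-utilities efetch endpoint and using a placeholder PMID, of how one might list the publication-type tags and correction cross-links that NLM attaches to a record:

```python
# A sketch (not from the study) of reading NLM's status tags for a PubMed
# record via the public NCBI E-utilities "efetch" endpoint.
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def pubmed_status(pmid: str) -> dict:
    """Return publication-type tags and correction cross-links for one PMID."""
    url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return {
        # e.g. "Retracted Publication" or "Corrected and Republished Article"
        "publication_types": [el.text for el in root.iter("PublicationType")],
        # cross-links such as RefType="RetractionIn" or "CorrectedandRepublishedIn"
        "correction_links": [cc.get("RefType") for cc in root.iter("CommentsCorrections")],
    }

if __name__ == "__main__":
    # placeholder PMID; substitute the record you want to inspect
    print(pubmed_status("12345678"))
```

Whether a record carries the “Corrected and Republished Article” tag or only a bare “Retracted Publication” tag is exactly the distinction the interview turns on.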
RW: As people have noted, many types of articles — even straightforward retractions — are not always marked consistently across different platforms. Is it particularly important for different platforms to handle “retract and replace” consistently and/or transparently? If so, why?
AM: Today, most of us search for information using bibliographical databases, not by reading individual journals. If we are not sure which is the reliable source of information, or which version to read and cite, then the whole scientific information system becomes unreliable. Which information can I trust? Do I have to be an expert in bibliographical indexing to be able to find relevant and reliable information? Inconsistent indexing of corrections (or any other version of the published record) is especially problematic in evidence synthesis, such as systematic reviews, because searches retrieve seemingly different records that are difficult to de-duplicate and may increase bias.
RW: As you noted, JAMA and some other journals use the same DOI for the original and replacement articles. What do you think about that?
AM: While their practice is honorable towards the authors because it preserves the continuity of their original publication, it is not correct from the point of view of current standards for keeping track of the published record. A published record (article) was corrected, and this has to be clearly indicated so that we can see the changes and make our own critical appraisal of the information. The visibility of changes is important. Take the example of clinical trial registries, such as ClinicalTrials.gov, where we can find not only the original registration of a trial but also all subsequent changes to that record until the trial’s end and beyond. This is important because we can then check whether there were any changes to the important aspects of a trial, such as its primary outcomes, and whether a submitted manuscript suffers from selective reporting bias.
RW: You write: “We respect the intent to honour authors’ self-corrections and preserve their reputation, but clarity of the scientific record and evidence is more important to differentiate among different types of retractions and honour honest corrections. Different terms for specific corrections should be developed rather than changing the whole system of correcting the scientific record.” Can you say more about that? What different terms are you thinking of, specifically?
AM: I think that the problem with corrected and republished articles lies in the use of the term “retraction”. The term is used both for withdrawals of published articles because of misconduct and for withdrawals because of honest errors. A specific term could be developed for articles that need to be completely replaced because of a pervasive error, which may change the article’s conclusions. I think that the system of “corrected and republished” articles developed by NLM is a good standard: it clearly indicates what happened and links the versions of the article so that their sequence is clear. Indexing databases should work on common standards and, even more importantly, ensure that these standards are consistently applied. Or, to use another analogy from health, the information technology systems used in scientific publishing have become so diverse that they may need a new tool to communicate with each other and then interpret the findings for the users.
Regarding the DOIs that persist across retraction and republication, I reported a related issue [1], not for the article itself but for the metadata connected to the files deposited as supplementary material with a retracted and republished article: in that case, the reference codes generated by the Cambridge Crystallographic Data Centre [2] for crystal structures of small molecules.
My impression is that nowadays, retracting something from the literature is a much more complicated process than merely publishing a notice in the journal and hoping that readers will spot the flag (directly in the journal or via a bibliographical database). Some solutions are available, like Crossmark [3], although they are technically difficult to implement. (A sketch of how Crossmark’s update metadata can be queried follows the references below.)
Not everything is negative, however. Just imagine what would happen if, suddenly, all DOIs disappeared…
[1] https://pubpeer.com/publications/886E1EEB102D984DDE1FF8C388218B
[2] https://www.ccdc.cam.ac.uk/theccdcprofile/
[3] https://www.crossref.org/services/crossmark/
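Crossmark works by registering update metadata (retractions, corrections, replacements) against a DOI, and that metadata can be queried through Crossref’s public REST API. Here is a minimal sketch, assuming the api.crossref.org works endpoint with its updates filter, and a hypothetical DOI as input:

```python
# A sketch, assuming Crossref's public REST API (api.crossref.org) and its
# "updates" filter, which expose the update metadata that Crossmark displays.
import json
import urllib.parse
import urllib.request

API = "https://api.crossref.org/works"

def updates_for(doi: str) -> list:
    """List Crossref records registered as updates (retractions, corrections,
    replacements) to the given DOI."""
    url = f"{API}?filter=updates:{urllib.parse.quote(doi)}"
    with urllib.request.urlopen(url) as resp:
        items = json.load(resp)["message"]["items"]
    return [
        {
            "notice_doi": item.get("DOI"),
            # each "update-to" entry names the DOI being updated and the
            # update type, e.g. "retraction" or "correction"
            "updates": item.get("update-to", []),
        }
        for item in items
    ]

if __name__ == "__main__":
    print(updates_for("10.1000/example"))  # hypothetical DOI for illustration
```

An empty list simply means that no update has been registered against that DOI, which is one reason consistent deposit of this metadata by journals matters.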