Nowadays, there are many ways to access a paper — on the publisher’s website, on MEDLINE, PubMed, Web of Science, Scopus, and other outlets. So when the publisher retracts a paper, do these outlets consistently mark it as such? And if they don’t, what’s the impact? Researchers Caitlin Bakker and Amy Riegelman at the University of Minnesota surveyed more than one hundred retractions in mental health research to try to get at some answers, and published their findings in the Journal of Librarianship and Scholarly Communication. We spoke to Bakker about the potential harm to patients when clinicians don’t receive consistent notifications about retracted data.
Retraction Watch: You note: “Of the 144 articles studied, only 10 were represented as being retracted across all resources through which they were available. There was no platform that consistently met or failed to meet all of [the Committee on Publication Ethics (COPE)’s] guidelines.” Can you say more about these findings, and the challenges they may pose?
After reviewing nearly 20 years of retractions of papers by researchers based in China, a research team came up with some unsurprising (yet still disheartening) findings: The number of retractions has increased (from zero in 1997 to more than 150 in 2016), and approximately 75% were due to some kind of misconduct. (You can read more details in the paper, published this month in Science and Engineering Ethics.) We spoke with first author Lei Lei, based in the School of Foreign Languages at Huazhong University of Science and Technology, about what he thinks can be done to improve research integrity in his country.
“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of PLOS ONE papers that earlier research had shown to contain manipulated images to test which factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.
Expressions of concern, as regular Retraction Watch readers will know, are rare but important signals in the scientific record. Neither retractions nor corrections, they alert readers that there may be an issue with a paper, but that the full story is not yet clear. But what ultimately happens to papers flagged by these editorial notices? How often are they eventually retracted or corrected, and how often do expressions of concern linger indefinitely? Hilda Bastian and two colleagues from the U.S. National Library of Medicine, which runs PubMed, recently set out to try to answer those questions. We talked to her about the project by email.
The tally of retractions in MEDLINE — one of the world’s largest databases of scientific abstracts — for the last fiscal year has just been released, and the number is: 664.
Earlier this year, we scratched our heads over the data from 2015, which showed retractions had risen dramatically, to 684. The figure for this fiscal year, which ended in September, has held relatively steady at that higher level, dropping by only 3%. (For some sense of scale, there were just shy of 870,000 new abstracts indexed in MEDLINE in FY2016; 664 is a tiny fraction of that figure, and of course not all of the retractions were of papers published in FY2016.)
Of note: In FY2014, there were fewer than 500 retractions, representing an increase of nearly 40% between 2014 and 2015. (Meanwhile, the number of citations indexed by MEDLINE rose by only a few percentage points over the same period.) That means the retraction rate in the last two years is dramatically higher than in 2014.
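Since those percentages are easy to check, here is a minimal back-of-the-envelope sketch in Python (ours, not from MEDLINE or the original post). Because the FY2014 count is described only as “fewer than 500,” the 500 below is an assumed stand-in, which makes the computed increase a lower bound:

```python
# Back-of-the-envelope check of the MEDLINE retraction figures cited above.
# FY2014 is given only as "fewer than 500", so 500 is an assumed stand-in;
# the true FY2014 -> FY2015 increase is therefore at least what we compute.

fy2014, fy2015, fy2016 = 500, 684, 664
new_abstracts_fy2016 = 870_000  # "just shy of 870,000" new abstracts indexed

def pct_change(old: int, new: int) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"FY2014 -> FY2015: {pct_change(fy2014, fy2015):+.1f}%")  # at least +36.8%, i.e. "nearly 40%"
print(f"FY2015 -> FY2016: {pct_change(fy2015, fy2016):+.1f}%")  # about -2.9%, the ~3% drop
print(f"FY2016 retractions as a share of new abstracts: "
      f"{fy2016 / new_abstracts_fy2016:.3%}")  # roughly 0.076%
```

Run as-is, it reproduces the roughly 3% year-over-year drop, the (at least) 36.8% jump behind “nearly 40%,” and a retraction count amounting to roughly 0.076% of the new abstracts indexed that year.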
We have often wondered whether the retraction rate would ever reach a plateau, as the community’s ability to find problems in the literature catches up with the number of problems it contains. But based on two years of data, we can’t yet say anything definitive.
[Figure: retraction data in MEDLINE from recent years.]
For those who aren’t familiar, fake reviews arise when researchers associated with a paper (most often its authors) create email addresses for suggested reviewers, enabling the researchers to write their own positive reviews.
The article, released September 23 by the Postgraduate Medical Journal, found that the vast majority of the papers were retracted from journals with impact factors below 5, and that most included co-authors based in China.
As described in the paper, “Characteristics of retractions related to faked peer reviews: an overview,” the authors searched Retraction Watch, databases such as PubMed and Google Scholar, and other media reports, and found 250 retractions linked to fake peer review. (Since the authors concluded their analysis, retractions due to faked reviews have continued to pile up; our latest tally is 324.)
So she contacted the journal to find out what had gone wrong, especially since a check of the page proofs would have caught the problem immediately. The authors were surprised to learn that it was against the journal’s policy to provide authors with page proofs. Could this partly explain PLOS ONE’s high rate of corrections?