Archive for the ‘studies about retractions’ Category
That’s the question posed by Armin Günther at the Leibniz Institute for Psychology Information in Germany in a recent presentation. There is some evidence to suggest that psychology overall has a problem — the number of retractions has increased four-fold since 1989, and some believe the literature is plagued with errors. Social psychologist Diederik Stapel is number three on our leaderboard, with 58 retractions.
But does any particular field have more retractions, on average, than others? Günther examines some trends and provides his thoughts on the state of the field. Take a look at his presentation (we recommend switching to full-screen view):
The tally of retractions in MEDLINE — one of the world’s largest databases of scientific abstracts — for the last fiscal year has just been released, and the number is: 664.
Earlier this year, we scratched our heads over the data from 2015, which showed retractions had risen dramatically, to 684. The figure for this fiscal year — which ended in September — has held relatively steady at that higher level, dropping by only 3%. (For some sense of scale, there were just shy of 870,000 new abstracts indexed in MEDLINE in FY2016; 664 is a tiny fraction of this figure, and of course not all of the retractions were of papers published in FY2016.)
Of note: In FY2014, there were fewer than 500 retractions — an increase of nearly 40% between 2014 and 2015. (Meanwhile, the number of citations indexed by MEDLINE rose only a few percentage points over the same period.) That means the retraction rate in the last two years is dramatically higher than in 2014.
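The percentage changes above are easy to verify with a back-of-the-envelope calculation. A minimal sketch (treating the hedged "fewer than 500" FY2014 count as 500, so the computed rise is a lower bound on the true increase):

```python
# Back-of-the-envelope check of the MEDLINE retraction figures above.
# FY2014 is reported only as "fewer than 500"; 500 is used as an upper bound.
retractions = {"FY2014": 500, "FY2015": 684, "FY2016": 664}

def pct_change(old, new):
    """Percentage change from old to new (positive = increase)."""
    return (new - old) / old * 100

rise = pct_change(retractions["FY2014"], retractions["FY2015"])
drop = pct_change(retractions["FY2015"], retractions["FY2016"])

print(f"FY2014 -> FY2015: {rise:+.1f}%")  # about +37%, i.e. "nearly 40%"
print(f"FY2015 -> FY2016: {drop:+.1f}%")  # about -3%

# 664 retractions against roughly 870,000 new MEDLINE abstracts in FY2016
print(f"Share of FY2016 abstracts: {664 / 870_000:.3%}")
```

Note that the FY2014→FY2015 rise would be even larger than ~37% if the true FY2014 count was meaningfully below 500.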
We have often wondered whether the retraction rate would ever reach a plateau, as the community's ability to find problems in the literature catches up with the number of problems present in it. But based on two years of data, we can't say anything definitive about that.
Here’s an illustration of retraction data from recent years:
A new analysis of retractions from Korean journals reveals some interesting trends.
For one, the authors found most papers in Korean journals are retracted for duplication (57%), a higher rate than what's been reported in other studies. The authors also deemed some retractions "inappropriate" according to guidelines established by the Committee on Publication Ethics (COPE) — for instance, retracting the original article that another paper had duplicated, or pulling a paper when an erratum would have sufficed.
One sentence from "Characteristics of Retractions from Korean Medical Journals in the KoreaMed Database: A Bibliometric Analysis," however, particularly struck us.
Retraction Watch readers may be familiar with partial retractions. They’re rare, and not always appreciated: The Committee on Publication Ethics (COPE) says that “they’re not helpful because they make it difficult for readers to determine the status of the article and which parts may be relied upon.”
Today, the U.S. National Library of Medicine (NLM), which runs MEDLINE/PubMed, announced that the vast database of scholarly literature abstracts is no longer going to identify partial retractions.
We spoke to NLM's David Gillikin about the change.
For those who aren't familiar, fake reviews arise when researchers associated with a paper (most often its authors) create email addresses for suggested reviewers that route back to themselves, enabling them to write their own positive reviews.
The article — released September 23 by the Postgraduate Medical Journal — found the vast majority of papers were retracted from journals with impact factors below 5, and most included co-authors based in China.
As described in the paper, "Characteristics of retractions related to faked peer reviews: an overview," the authors searched Retraction Watch as well as databases such as PubMed and Google Scholar, along with other media reports, and found 250 retractions for fake peer review. (Since the authors concluded their analysis, retractions due to faked reviews have continued to pile up; our latest tally is 324.)
Here are the authors' main findings.
When a high-profile psychologist reviewed her newly published paper in PLOS ONE, she was dismayed to notice multiple formatting errors.
So she contacted the journal to find out what had gone wrong, especially since checking the page proofs would have caught the problem immediately. The authors were surprised to learn that it was against the journal's policy to provide authors with page proofs. Could this partly explain PLOS ONE's high rate of corrections?
Issuing frequent corrections isn't necessarily a bad thing, since it can indicate that the journal is responsive to fixing published articles. But the rate of corrections at PLOS ONE is notably high.
If you’ve searched recently for retracted articles in PubMed — the U.S. National Library of Medicine’s database of scientific abstracts — you may have noticed something new.
In fact, you may have had trouble ignoring it, which is sort of the point. "It" is a large salmon banner.
As Retraction Watch readers know, criminal sanctions for research fraud are extremely rare. There have been just a handful of cases — Dong-Pyou Han, Eric Poehlman, and Scott Reuben, to name a few — that have led to prison sentences.
According to a new study, however, the rarity of such cases is out of sync with the wishes of the U.S. population:
The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 of 100 experiments lived up to their original claims. The authors conclude that more "contextually sensitive" papers — those whose background factors are more likely to affect their replicability — are slightly less likely to be reproduced successfully.
They summarize their results in the paper: