Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about retractions’ Category

Does social psychology really have a retraction problem?

without comments

Armin Günther

That’s the question posed by Armin Günther at the Leibniz Institute for Psychology Information in Germany in a recent presentation. There is some evidence to suggest that psychology overall has a problem — the number of retractions has increased four-fold since 1989, and some believe the literature is plagued with errors. Social psychologist Diederik Stapel is number three on our leaderboard, with 58 retractions.

But does any particular field have more retractions, on average, than others? Günther examines some trends and provides his thoughts on the state of the field. Take a look at his presentation (we recommend switching to full-screen view): Read the rest of this entry »

Written by Alison McCook

January 11th, 2017 at 9:30 am

Retractions holding steady at more than 650 in FY2016

with one comment

Drumroll please.

The tally of retractions in MEDLINE — one of the world’s largest databases of scientific abstracts — for the last fiscal year has just been released, and the number is: 664.

Earlier this year, we scratched our heads over the data from 2015, which showed retractions had risen dramatically, to 684. The figures for this fiscal year — which ended in September — have held relatively steady at that higher number, dropping by only 3%. (For some sense of scale, there were just shy of 870,000 new abstracts indexed in MEDLINE in FY2016; 664 is a tiny fraction of that figure, and of course not all of the retractions were of papers published in FY2016.)

Of note: In FY2014, there were fewer than 500 retractions — an increase of nearly 40% between 2014 and 2015. (Meanwhile, the number of citations indexed by MEDLINE rose by only a few percentage points over the same period.) That means the retraction rate in the last two years has been dramatically higher than it was in 2014.
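For readers who want to check that arithmetic, here is a minimal sketch in Python. The FY2014 tally is reported only as “fewer than 500,” so the sketch assumes 500 as an upper bound, meaning the true increase is at least as large as shown:

```python
# Rough check of the figures cited above.
fy2014 = 500                 # assumed upper bound ("fewer than 500")
fy2015 = 684
fy2016 = 664
fy2016_abstracts = 870_000   # "just shy of 870,000" new abstracts indexed

rise_14_to_15 = (fy2015 - fy2014) / fy2014    # at least ~37%
drop_15_to_16 = (fy2015 - fy2016) / fy2015    # ~3%
share_of_2016 = fy2016 / fy2016_abstracts     # ~0.08% of new abstracts

print(f"2014 -> 2015: at least +{rise_14_to_15:.0%}")
print(f"2015 -> 2016: -{drop_15_to_16:.0%}")
print(f"Retractions as a share of FY2016 abstracts: {share_of_2016:.3%}")
```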

We have often wondered whether the retraction rate would ever reach a plateau, as the community’s ability to find problems in the literature catches up with the number of problems it contains. But based on two years of data, we can’t say anything definitive about that.

Here’s an illustration of retraction data from recent years:

Read the rest of this entry »

Written by Alison McCook

December 5th, 2016 at 11:40 am

What do retractions look like in Korean journals?

with 2 comments

A new analysis of retractions from Korean journals reveals some interesting trends.

For one, the authors found that most papers in Korean journals are retracted for duplication (57%), a higher rate than has been reported in other studies. The authors also deemed some retractions “inappropriate” according to guidelines established by the Committee on Publication Ethics (COPE) — for instance, retracting the original article that another paper had duplicated, or pulling a paper when an erratum would have sufficed.

One sentence from “Characteristics of Retractions from Korean Medical Journals in the KoreaMed Database: A Bibliometric Analysis,” however, particularly struck us:  Read the rest of this entry »

Written by Alison McCook

October 18th, 2016 at 9:30 am

MEDLINE/PubMed will stop identifying partial retractions. Here’s why.

with one comment

Retraction Watch readers may be familiar with partial retractions. They’re rare, and not always appreciated: The Committee on Publication Ethics (COPE) says that “they’re not helpful because they make it difficult for readers to determine the status of the article and which parts may be relied upon.”

Today, the U.S. National Library of Medicine (NLM), which runs MEDLINE/PubMed, announced that the vast database of scholarly literature abstracts is no longer going to identify partial retractions.

We spoke to NLM’s David Gillikin about the change: Read the rest of this entry »

Written by Ivan Oransky

September 29th, 2016 at 11:30 am

What publishers and countries do most retractions for fake peer review come from?

with 7 comments

A new analysis — which included scanning Retraction Watch posts — has identified some trends in papers pulled for fake peer review, a subject we’ve covered at length.

For those who aren’t familiar, fake reviews arise when researchers associated with the paper in question (most often its authors) create email addresses for suggested reviewers, enabling them to write their own positive reviews.

The article — released September 23 by the Postgraduate Medical Journal — found that the vast majority of papers were retracted from journals with impact factors below 5, and that most included co-authors based in China.

As described in the paper, “Characteristics of retractions related to faked peer reviews: an overview,” the authors searched Retraction Watch as well as databases such as PubMed and Google Scholar, along with other media reports, and found 250 retractions for fake peer review. (Since the authors concluded their analysis, retractions due to faked reviews have continued to pile up; our latest tally is 324.)

Here are the authors’ main findings: Read the rest of this entry »

Written by Alison McCook

September 27th, 2016 at 9:32 am

PLOS ONE’s correction rate is higher than average. Why?

with 17 comments


When a high-profile psychologist reviewed her newly published paper in PLOS ONE, she was dismayed to notice multiple formatting errors.

So she contacted the journal to find out what had gone wrong, especially since a check of the page proofs would have caught the problem immediately. The authors were surprised to learn that it was against the journal’s policy to provide authors with page proofs. Could this partly explain PLOS ONE’s high rate of corrections?

Issuing frequent corrections isn’t necessarily a bad thing, since it can indicate that a journal is responsive about fixing problems in published articles. But the rate of corrections at PLOS ONE is notably high. Read the rest of this entry »

Written by Dalmeet Singh Chawla

August 5th, 2016 at 1:05 pm

How to better flag retractions? Here’s what PubMed is trying

with 12 comments


Hilda Bastian from the National Library of Medicine

If you’ve searched recently for retracted articles in PubMed — the U.S. National Library of Medicine’s database of scientific abstracts — you may have noticed something new.

In fact, you may have had trouble ignoring it, which is sort of the point. “It” is a large salmon-colored banner that looks something like this: Read the rest of this entry »
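(For readers who want to pull retracted records out of PubMed themselves, here is a minimal sketch using NCBI’s public E-utilities esearch endpoint and the standard “Retracted Publication” publication-type filter. The query is only an illustration of how such records can be retrieved, not a description of how NLM builds the banner.)

```python
import requests

# Minimal sketch: count and list retracted publications indexed in PubMed
# via the NCBI E-utilities esearch endpoint.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '"Retracted Publication"[Publication Type]',
    "retmax": 20,        # number of PMIDs to return
    "retmode": "json",
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("Retracted records indexed:", result["count"])
print("First PMIDs:", result["idlist"])
```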

Written by Ivan Oransky

July 20th, 2016 at 11:30 am

Vast majority of Americans want to criminalize data fraud, says new study

with 19 comments

As Retraction Watch readers know, criminal sanctions for research fraud are extremely rare. There have been just a handful of cases — Dong-Pyou Han, Eric Poehlman, and Scott Reuben, to name a few — that have led to prison sentences.

According to a new study, however, the rarity of such cases is out of sync with the wishes of the U.S. population:
Read the rest of this entry »

Written by Ivan Oransky

July 11th, 2016 at 9:30 am

Weekend reads: Unscientific peer review; impact factor revolt; men love to cite themselves

with one comment

The week at Retraction Watch featured a puzzle, and the retraction of a controversial study on fracking. Here’s what was happening elsewhere: Read the rest of this entry »

Written by Ivan Oransky

July 9th, 2016 at 9:30 am

Context matters when replicating experiments, argues study

with one comment

Background factors such as culture, location, population, or time of day affect the success rates of replication experiments, a new study suggests.

The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 out of 100 experiments lived up to their original claims. The authors conclude that more “contextually sensitive” papers — those whose background factors are more likely to affect their replicability — are slightly less likely to be reproduced successfully.

They summarize their results in the paper:

Read the rest of this entry »

Written by Dalmeet Singh Chawla

May 23rd, 2016 at 3:00 pm