Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about retractions’ Category

20 years of retractions in China: More of them, and more misconduct

with 7 comments

Lei Lei

After reviewing nearly 20 years of retractions of papers by researchers based in China, a team came up with some unsurprising (yet still disheartening) findings: The number of retractions has increased (from zero in 1997 to more than 150 in 2016), and approximately 75% were due to some kind of misconduct. (You can read more details in the paper, published this month in Science and Engineering Ethics.) We spoke with first author Lei Lei, based in the School of Foreign Languages at Huazhong University of Science and Technology, about what he thinks can be done to improve research integrity in his country.

Retraction Watch: With “Lack of Improvement” right in the title (“Lack of Improvement in Scientific Integrity: An Analysis of WoS Retractions by Chinese Researchers (1997-2016)”), you sound disappointed with your findings. What did you expect — or at least hope — to find, and what is your reaction to the results you did uncover?


Written by Alison McCook

September 27th, 2017 at 8:00 am

Why do researchers commit misconduct? A new preprint offers some clues

with 9 comments

Daniele Fanelli

“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of PLOS ONE papers that earlier research had shown to contain manipulated images, testing which factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.

Retraction Watch (RW): This paper builds on a previous study by three of your co-authors, on the rate of inappropriate image manipulation in the literature. Can you explain how it took advantage of those findings, and why that was an important data set?

Written by Ivan Oransky

April 14th, 2017 at 9:00 am

Stuck in limbo: What happens to papers flagged by journals as potentially problematic?

with 2 comments

Hilda Bastian from the National Library of Medicine

Expressions of concern, as regular Retraction Watch readers will know, are rare but important signals in the scientific record. Neither retractions nor corrections, they alert readers that there may be an issue with a paper, but that the full story is not yet clear. But what ultimately happens to papers flagged by these editorial notices? How often are they eventually retracted or corrected, and how often do expressions of concern linger indefinitely? Hilda Bastian and two colleagues from the U.S. National Library of Medicine, which runs PubMed, recently set out to try to answer those questions. We talked to her about the project by email.

Retraction Watch (RW): The National Library of Medicine recently decided to index expressions of concern, which it hadn’t before. Why the change?

Written by Ivan Oransky

February 28th, 2017 at 9:34 am

Does social psychology really have a retraction problem?

without comments

Armin Günther

That’s the question posed by Armin Günther at the Leibniz Institute for Psychology Information in Germany in a recent presentation. There is some evidence to suggest that psychology overall has a problem — the number of retractions has increased four-fold since 1989, and some believe the literature is plagued with errors. Social psychologist Diederik Stapel is number three on our leaderboard, with 58 retractions.

But does any particular field have more retractions, on average, than others? Günther examines some trends and provides his thoughts on the state of the field. Take a look at his presentation (we recommend switching to full-screen view):

Written by Alison McCook

January 11th, 2017 at 9:30 am

Retractions holding steady at more than 650 in FY2016

with one comment

Drumroll please.

The tally of retractions in MEDLINE — one of the world’s largest databases of scientific abstracts — for the last fiscal year has just been released, and the number is: 664.

Earlier this year, we scratched our heads over the data from 2015, which showed retractions had risen dramatically, to 684. The figures for this fiscal year — which ended in September — have held relatively steady at that higher number, dropping by only about 3%. (For some sense of scale, there were just shy of 870,000 new abstracts indexed in MEDLINE in FY2016; 664 is a tiny fraction of this figure, and of course not all of the retractions were of papers published in FY2016.)

Of note: In FY2014, there were fewer than 500 retractions — an increase of nearly 40% between FY2014 and FY2015. (Meanwhile, the number of citations indexed by MEDLINE rose by only a few percent over the same period.) That means the retraction rate in the last two years is dramatically higher than in FY2014.

We have often wondered whether the retraction rate would ever reach a plateau, as the community’s ability to find problems in the literature catches up with the number of problems present in the literature. But based on two years of data, we can’t say anything definitive about that.
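The percentages above can be checked with a bit of back-of-the-envelope arithmetic. Here is a minimal sketch using only the figures stated in this post (the FY2016 MEDLINE total is given only as “just shy of 870,000,” so that number is an approximation):

```python
# Figures as reported in the post.
retractions_fy2015 = 684
retractions_fy2016 = 664
new_abstracts_fy2016 = 870_000  # approximate ("just shy of 870,000")

# Year-over-year change in retractions, FY2015 -> FY2016.
drop = (retractions_fy2015 - retractions_fy2016) / retractions_fy2015

# Retractions as a share of all new FY2016 abstracts.
share = retractions_fy2016 / new_abstracts_fy2016

print(f"Year-over-year drop: {drop:.1%}")              # roughly 3%
print(f"Share of new FY2016 abstracts: {share:.3%}")   # well under 0.1%
```

The drop works out to about 2.9%, consistent with the “3%” cited above, and 664 retractions amount to less than one-tenth of one percent of the new abstracts indexed that year.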

Here’s an illustration of retraction data from recent years:


Written by Alison McCook

December 5th, 2016 at 11:40 am

What do retractions look like in Korean journals?

with 2 comments

A new analysis of retractions from Korean journals reveals some interesting trends.

For one, the authors found that most papers retracted from Korean journals are pulled for duplication (57%), a higher rate than has been reported in other studies. The authors also deemed some retractions “inappropriate” under guidelines established by the Committee on Publication Ethics (COPE) — for instance, retracting the original article that another paper had duplicated, or pulling a paper when an erratum would have sufficed.

One sentence from “Characteristics of Retractions from Korean Medical Journals in the KoreaMed Database: A Bibliometric Analysis,” however, particularly struck us:

Written by Alison McCook

October 18th, 2016 at 9:30 am

MEDLINE/PubMed will stop identifying partial retractions. Here’s why.

with 2 comments

Retraction Watch readers may be familiar with partial retractions. They’re rare, and not always appreciated: The Committee on Publication Ethics (COPE) says that “they’re not helpful because they make it difficult for readers to determine the status of the article and which parts may be relied upon.”

Today, the U.S. National Library of Medicine (NLM), which runs MEDLINE/PubMed, announced that the vast database of scholarly literature abstracts is no longer going to identify partial retractions.

We spoke to NLM’s David Gillikin about the change:

Written by Ivan Oransky

September 29th, 2016 at 11:30 am

What publishers and countries do most retractions for fake peer review come from?

with 7 comments

A new analysis — which included scanning Retraction Watch posts — has identified some trends in papers pulled for fake peer review, a subject we’ve covered at length.

For those who aren’t familiar, fake reviews arise when researchers associated with the paper in question (most often its authors) create email addresses for suggested reviewers, enabling them to write their own positive reviews.

The article — released September 23 by the Postgraduate Medical Journal — found the vast majority of papers were retracted from journals with impact factors below 5, and most included co-authors based in China.

As described in the paper, “Characteristics of retractions related to faked peer reviews: an overview,” the authors searched Retraction Watch as well as databases such as PubMed and Google Scholar, along with other media reports, and found 250 retractions for fake peer review. (Since the authors concluded their analysis, retractions due to faked reviews have continued to pile up; our latest tally is 324.)

Here are the authors’ main findings:

Written by Alison McCook

September 27th, 2016 at 9:32 am

PLOS ONE’s correction rate is higher than average. Why?

with 17 comments


When a high-profile psychologist reviewed her newly published paper in PLOS ONE, she was dismayed to notice multiple formatting errors.

So she contacted the journal to find out what had gone wrong, especially since checking the page proofs would have caught the problems immediately. The authors were surprised to learn that it was against the journal’s policy to provide authors with page proofs. Could this partly explain PLOS ONE’s high rate of corrections?

Issuing frequent corrections isn’t necessarily a bad thing, since it can indicate that a journal is responsive to problems in published articles. But the rate of corrections at PLOS ONE is notably high.

Written by Dalmeet Singh Chawla

August 5th, 2016 at 1:05 pm

How to better flag retractions? Here’s what PubMed is trying

with 12 comments

Hilda Bastian from the National Library of Medicine

If you’ve searched recently for retracted articles in PubMed — the U.S. National Library of Medicine’s database of scientific abstracts — you may have noticed something new.

In fact, you may have had trouble ignoring it, which is sort of the point. “It” is a large salmon banner that looks something like this:

Written by Ivan Oransky

July 20th, 2016 at 11:30 am