Why do researchers commit misconduct? A new preprint offers some clues

Daniele Fanelli

“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of papers from PLOS ONE that earlier research had shown to contain manipulated images to test which factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.

Retraction Watch (RW): This paper builds on a previous study by three of your co-authors, on the rate of inappropriate image manipulation in the literature. Can you explain how it took advantage of those findings, and why that was an important data set?

Continue reading Why do researchers commit misconduct? A new preprint offers some clues

Stuck in limbo: What happens to papers flagged by journals as potentially problematic?

Hilda Bastian from the National Library of Medicine

Expressions of concern, as regular Retraction Watch readers will know, are rare but important signals in the scientific record. Neither retractions nor corrections, they alert readers that there may be an issue with a paper, but that the full story is not yet clear. But what ultimately happens to papers flagged by these editorial notices? How often are they eventually retracted or corrected, and how often do expressions of concern linger indefinitely? Hilda Bastian and two colleagues from the U.S. National Library of Medicine, which runs PubMed, recently set out to try to answer those questions. We talked to her about the project by email.

Retraction Watch (RW): The National Library of Medicine recently decided to index expressions of concern, which it hadn’t before. Why the change?

Continue reading Stuck in limbo: What happens to papers flagged by journals as potentially problematic?

Does social psychology really have a retraction problem?

Armin Günther

That’s the question posed by Armin Günther at the Leibniz Institute for Psychology Information in Germany in a recent presentation. There is some evidence to suggest that psychology overall has a problem — the number of retractions has increased four-fold since 1989, and some believe the literature is plagued with errors. Social psychologist Diederik Stapel is number three on our leaderboard, with 58 retractions.

But does any particular field have more retractions, on average, than others? Günther examines some trends and provides his thoughts on the state of the field. Take a look at his presentation (we recommend switching to full-screen view):

Continue reading Does social psychology really have a retraction problem?

Retractions holding steady at more than 650 in FY2016

Drumroll please.

The tally of retractions in MEDLINE — one of the world’s largest databases of scientific abstracts — for the last fiscal year has just been released, and the number is: 664.

Earlier this year, we scratched our heads over the data from 2015, which showed retractions had risen dramatically, to 684. The figures for this fiscal year — which ended in September — have held relatively steady at that higher number, dropping by only 3%. (For some sense of scale, there were just shy of 870,000 new abstracts indexed in MEDLINE in FY2016; 664 is a tiny fraction of this figure, and of course not all of the retractions were of papers published in FY2016.)

Of note: In FY2014, there were fewer than 500 retractions — meaning the count rose by nearly 40% between 2014 and 2015. (Meanwhile, the number of citations indexed by MEDLINE rose by only a few percent over the same time period.) That means the retraction rate over the last two years is dramatically higher than in 2014.
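For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The FY2014 figure of 495 is an assumption used purely for illustration, since the post says only “fewer than 500”:

# Rough check of the year-over-year changes quoted above.
# fy2014 = 495 is an assumed placeholder; MEDLINE reported only "fewer than 500" for that year.
fy2014, fy2015, fy2016 = 495, 684, 664

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"FY2014 -> FY2015: {pct_change(fy2014, fy2015):+.1f}%")  # about +38%, i.e. "nearly 40%"
print(f"FY2015 -> FY2016: {pct_change(fy2015, fy2016):+.1f}%")  # about -3%, the drop noted above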

We have often wondered whether the retraction rate would ever reach a plateau, as the community’s ability to find problems in the literature catches up with the number of problems present in the literature. But based on two years of data, we can’t say anything definitive about that.

Here’s an illustration of retraction data from recent years:

Continue reading Retractions holding steady at more than 650 in FY2016

What do retractions look like in Korean journals?

A new analysis of retractions from Korean journals reveals some interesting trends.

For one, the authors found that most retractions from Korean journals were for duplication (57%), a higher rate than has been reported in other studies. The authors also deemed some retractions “inappropriate” under the guidelines established by the Committee on Publication Ethics (COPE) — for instance, retracting the original article that another paper had duplicated, or pulling a paper when an erratum would have sufficed.

One sentence from “Characteristics of Retractions from Korean Medical Journals in the KoreaMed Database: A Bibliometric Analysis,” however, particularly struck us:

Continue reading What do retractions look like in Korean journals?

MEDLINE/PubMed will stop identifying partial retractions. Here’s why.

Retraction Watch readers may be familiar with partial retractions. They’re rare, and not always appreciated: The Committee on Publication Ethics (COPE) says that “they’re not helpful because they make it difficult for readers to determine the status of the article and which parts may be relied upon.”

Today, the U.S. National Library of Medicine (NLM), which runs MEDLINE/PubMed, announced that the vast database of scholarly literature abstracts is no longer going to identify partial retractions.

We spoke to NLM’s David Gillikin about the change:

Continue reading MEDLINE/PubMed will stop identifying partial retractions. Here’s why.

What publishers and countries do most retractions for fake peer review come from?

A new analysis — which included scanning Retraction Watch posts — has identified some trends in papers pulled for fake peer review, a subject we’ve covered at length.

For those who aren’t familiar, fake reviews arise when researchers associated with the paper in question (most often authors) create email addresses for reviewers, enabling them to write positive reviews of their own work.

The article — released September 23 by the Postgraduate Medical Journal — found the vast majority of papers were retracted from journals with impact factors below 5, and most included co-authors based in China.

As described in the paper, “Characteristics of retractions related to faked peer reviews: an overview,” the authors searched Retraction Watch as well as various databases such as PubMed and Google Scholar, along with other media reports, and found 250 retractions for fake peer review.  (Since the authors concluded their analysis, the number of retractions due to faked reviews has continued to pile up; our latest tally is now 324.)

Here are the authors’ main findings:

Continue reading What publishers and countries do most retractions for fake peer review come from?

PLOS ONE’s correction rate is higher than average. Why?

PLOS ONE

When a high-profile psychologist reviewed her newly published paper in PLOS ONE, she was dismayed to notice multiple formatting errors.

So she contacted the journal to find out what had gone wrong, especially since checking the page proofs would have caught the problem immediately. The authors were surprised to learn that it was against the journal’s policy to provide authors with page proofs. Could this partly explain PLOS ONE’s high rate of corrections?

Issuing frequent corrections isn’t necessarily a bad thing, since it can indicate that a journal is responsive about fixing published articles. But the rate of corrections at PLOS ONE is notably high.

Continue reading PLOS ONE’s correction rate is higher than average. Why?

How to better flag retractions? Here’s what PubMed is trying

Hilda Bastian from the National Library of Medicine

If you’ve searched recently for retracted articles in PubMed — the U.S. National Library of Medicine’s database of scientific abstracts — you may have noticed something new.

In fact, you may have had trouble ignoring it, which is sort of the point. “It” is a large salmon banner that looks something like this:

Continue reading How to better flag retractions? Here’s what PubMed is trying

Vast majority of Americans want to criminalize data fraud, says new study

As Retraction Watch readers know, criminal sanctions for research fraud are extremely rare. There have been just a handful of cases — Dong-Pyou Han, Eric Poehlman, and Scott Reuben, to name a few — that have led to prison sentences.

According to a new study, however, the rarity of such cases is out of sync with the wishes of the U.S. population:
Continue reading Vast majority of Americans want to criminalize data fraud, says new study