Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘ask retraction watch’ Category

Why traditional statistics are often “counterproductive to research in the human sciences”

with 9 comments

Andrew Gelman

Doing research is hard. Getting statistically significant results is hard. Making sure the results you obtain reflect reality is even harder. In this week’s Science, Eric Loken at the University of Connecticut and Andrew Gelman at Columbia University debunk some common myths about the use of statistics in research — and argue that, in many cases, the use of traditional statistics does more harm than good in human sciences research. 

Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and how is it created during an experiment?

Read the rest of this entry »

Written by Alison McCook

February 9th, 2017 at 2:00 pm

Watch out for predatory journals, and consider retract/replace, suggests medical journal group

without comments

Darren Taichman

The challenges facing science publishing are ever-evolving, and so too are the recommendations for how to face them. As such, the International Committee of Medical Journal Editors (ICMJE) frequently updates its advice to authors. In December 2016, it made some notable changes – specifically, asking authors to pay closer attention to where they publish in order to avoid so-called “predatory” journals, and encouraging more authors to consider “retracting and replacing” a paper with an updated version when its problems stem from honest error (something more journals have been embracing). We spoke with Darren Taichman, Executive Deputy Editor of the Annals of Internal Medicine and Secretary of the ICMJE, about the changes.

Retraction Watch: The first set of recommendations was issued in 1978 — how have they evolved, generally speaking, since then?

Read the rest of this entry »

Written by Alison McCook

January 13th, 2017 at 11:30 am

Dopey dupe retractions: How publisher error hurts researchers

without comments

Adam Etkin

Ivan Oransky

Not all retractions result from researchers’ mistakes — we have an entire category of posts known as “publisher errors,” in which publishers mistakenly post a paper through no fault of the authors. Yet those retractions can become a black mark on authors’ records. Our co-founder Ivan Oransky and Adam Etkin, Executive Editor at Springer Publishing Co (unrelated to Springer Nature), propose a new system in the latest issue of the International Society of Managing & Technical Editors newsletter, reprinted with permission below.

Imagine you’re a researcher who is one of 10 candidates being considered for tenure, or a promotion, or perhaps a new job which would significantly advance your career. Now imagine that those making this decision eliminate you as a candidate without even an interview because your record shows you’ve had a paper retracted. But in this particular case, what the decision makers may not be aware of is that the paper was not retracted because you made an honest mistake—which, if you came forward about it, really shouldn’t be a black mark anyway—or even because you did something unethical. It was retracted due to publisher error. Like Han Solo and/or Lando Calrissian, you’d find yourself in utter disbelief while saying “It’s not my fault!”— and you’d be right. Read the rest of this entry »

Written by Ivan Oransky

December 16th, 2016 at 9:30 am

Journal’s new program: Choose your own reviewers – and get a decision in days

with 13 comments

Michael Imperiale

Peer review has numerous problems: Researchers complain it takes too long, but also sometimes that it is not thorough enough, letting obviously flawed papers enter the literature. Authors are often in the best position to know who the best experts are in their field, but how can we be sure they’ll choose someone who won’t just rubber stamp their paper? A new journal – mSphere, an open-access microbial sciences journal only one year old – has proposed a new solution. Early next year, they’re launching a project they call mSphereDirect in order to improve the publication process for authors. We spoke with Mike Imperiale, editor-in-chief at mSphere, about how this system will work.

Retraction Watch: So let’s start with how the program will work, exactly. Can you explain?  Read the rest of this entry »

Written by Alison McCook

December 12th, 2016 at 9:00 am

How a Cell journal weeds out the “bad apples”

with 8 comments

Anne Granger (left) and Nikla Emambokus (right)


There are a lot of accusations about research misconduct swirling around, and not every journal handles them the same. Recently, Cell Metabolism Scientific Editor Anne Granger and Cell Metabolism Editor-in-Chief Nikla Emambokus shared some details about their investigative procedure in “Weeding out the Bad Apples.” We talked to them about why they don’t necessarily trust accusations leveled on blogs (including ours), but will consider the concerns of anyone who approaches the journal directly – even anonymously.

Retraction Watch: What made you decide to write an editorial about research fraud now? Read the rest of this entry »

Written by Alison McCook

November 25th, 2016 at 9:30 am

We are judging individuals and institutions unfairly. Here’s what needs to change.

with one comment

Yves Gingras


The way we rank individuals and institutions simply does not work, argues Yves Gingras, Canada Research Chair in the History and Sociology of Science, based at the University of Quebec in Montreal. He should know: In 1997, he cofounded the Observatoire des sciences et des technologies, which measures innovation in science and technology, and where he is now scientific director. In 2014, he wrote a book detailing the problems with our current ranking system, which has now been translated into English. Below, he shares some of his conclusions from “Bibliometrics and Research Evaluation: Uses and Abuses.”

Retraction Watch: You equate modern bibliometric rankings of academic performance to the fable about the Emperor’s New Clothes, in which no one dares to tell a leader that he is not wearing an invisible suit – rather, he is naked. Why did you choose that metaphor? Read the rest of this entry »

Written by Alison McCook

November 8th, 2016 at 11:30 am

What should you do if a paper you’ve cited is later retracted?

with 12 comments

We all know that researchers continue to cite papers long after they’ve been retracted, posing concerns for the integrity of the literature. But what should you do if one of the papers you’ve cited gets retracted after you’ve already cited it?

We posed this question to some members of the board of directors of our parent non-profit organization, who offered up some valuable advice based on many years of experience working at journals and organizations such as the Committee on Publication Ethics (COPE).

The first step: Determine whether the fact a reference has been retracted has any impact on the conclusions of your own paper. From Elizabeth Wager, publications consultant, Sideview; former chair, COPE:

Read the rest of this entry »

Written by Alison McCook

November 1st, 2016 at 2:30 pm

How false information becomes fact: Q&A with Carl Bergstrom

with one comment


Photo credit: Corina Logan

Not every study contains accurate information — but over time, some of those incorrect findings can become canonized as “fact.” How does this happen? And how can we avoid its impact on scientific research? Carl Bergstrom of the University of Washington in Seattle, author of a study published on arXiv in September, explains how the fight over information is like a rugby match, with competing sides pushing the ball toward fact or falsehood — and how to help ensure the ball moves in the right direction.
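The dynamic can be illustrated with a toy Bayesian model. This is a sketch only — the error rates, publication-bias level, and naive update rule below are illustrative assumptions, not parameters from the arXiv paper: experiments on a claim occasionally yield positive results, positives are published more readily than negatives, and a community that updates on the published record alone can drift toward “fact.”

```python
import math
import random

def community_belief(claim_true=False, alpha=0.05, power=0.8,
                     neg_pub_rate=0.1, n_experiments=500, seed=7):
    """Toy model of canonization: repeated experiments produce
    positive or negative results; negatives are published only some
    of the time; the community applies Bayes' rule to whatever gets
    published, ignoring the publication filter. Returns the final
    probability the community assigns to the claim being true."""
    rng = random.Random(seed)
    log_odds = 0.0  # prior belief of 0.5 that the claim is true
    for _ in range(n_experiments):
        positive = rng.random() < (power if claim_true else alpha)
        # Positives are always published; negatives only sometimes.
        published = positive or rng.random() < neg_pub_rate
        if not published:
            continue
        if positive:
            log_odds += math.log(power / alpha)
        else:
            log_odds += math.log((1 - power) / (1 - alpha))
    return 1.0 / (1.0 + math.exp(-log_odds))
```

With a true claim, belief races toward 1; with a false claim, belief wanders with the published record and can, on some runs, also drift upward — echoing the worry that biased publication can canonize falsehoods.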

Retraction Watch: What factors play a role in making false statements seem true? Read the rest of this entry »

Written by Dalmeet Singh Chawla

October 5th, 2016 at 11:30 am

Would peer review work better if reviewers talked to each other?

with 15 comments


Katherine Brown

Would distributing all reviewers’ reports for a paper among every referee before deciding whether to accept or reject a manuscript make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development as part of its attempt to improve the peer-review process. Katherine Brown, executive editor of Development, based in Cambridge, UK, who co-authored a recent editorial about the approach, spoke to us about the move.

Retraction Watch: Many journals share the reviews of a particular paper with those who’ve reviewed it. What is cross-referee commenting in peer review and how is it different from current reviewing processes? Read the rest of this entry »

Written by Dalmeet Singh Chawla

September 21st, 2016 at 9:30 am

What if scientists funded each other?

with 36 comments

Johan Bollen


We were struck recently by a paper in Scientometrics that proposed a unique way to fund scientists: Distribute money equally, but require that each scientist donate a portion to others – turning the federal funding system into a crowd-sourcing venture that funds people instead of projects. The proposal could save the inordinate amount of time scientists currently spend writing (and re-writing) grants, but would it actually work? First author Johan Bollen, of Indiana University, explains.
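The mechanics of the proposal can be sketched in a few lines of Python. This is a toy simulation, not the model from the Scientometrics paper: the esteem weights here are random placeholders for real peer judgment, and the parameter values (base grant, 50% donation fraction, number of years) are illustrative assumptions.

```python
import random

def simulate_funding(n=50, base=100_000.0, donate_frac=0.5, years=20, seed=1):
    """Toy sketch of 'fund people, not projects': every scientist
    receives an equal base grant each year, then must pass a fixed
    fraction of their total income on to peers of their choosing."""
    rng = random.Random(seed)
    # Random stand-in "esteem" weights; no one donates to themselves.
    esteem = [[rng.random() if i != j else 0.0 for j in range(n)]
              for i in range(n)]
    # Normalize each scientist's outgoing weights to sum to 1.
    weights = [[w / sum(row) for w in row] for row in esteem]
    income = [base] * n
    for _ in range(years):
        donations = [0.0] * n
        for i in range(n):
            for j in range(n):
                donations[j] += donate_frac * income[i] * weights[i][j]
        # Next year's income: the equal base grant plus peer donations.
        income = [base + d for d in donations]
    return income
```

Everyone keeps at least the base grant, but totals spread out over time according to how strongly peers direct money toward each scientist — the “earned” premium the interview asks about.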

Retraction Watch: You propose something quite unique: Fund everyone equally, but ask them to give a fraction of their funding to someone else. Is the idea that scientists most respected by their peers will “earn” a higher percentage of funding, and everyone is just acting as reviewers? Read the rest of this entry »

Written by Alison McCook

September 20th, 2016 at 9:54 am