Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘ask retraction watch’ Category

Ask Retraction Watch: Is it OK to cite a retracted paper?

From our mailbox:

I’m writing regarding a recent query from an author about citation of a retracted article. The author is currently writing up a paper where the initial investigations were at least partially inspired by a paper that has recently been retracted. The author wants to recognise the influence of that work on the new study, but also recognises that – since the paper has been retracted – it would not be appropriate simply to cite it as though it were still a published paper. This isn’t a situation we’ve come across before, and I’m not sure how best to advise the author. Is it acceptable to discuss the findings of that paper provided the text clearly mentions that the paper has since been retracted? And how should this be cited in the reference list – citation to the original paper, to the retraction notice, or not at all? As experts in this area, any guidance you could provide would be greatly appreciated.

Written by Ivan Oransky

January 5th, 2018 at 8:00 am

“Ethical ambiguity:” When scientific misconduct isn’t black and white

David Johnson

Elaine Howard Ecklund

Some types of misconduct are obvious – most researchers would agree that cooking data and plagiarizing someone’s work are clear no-nos. But what about overhyping your findings? Or using funding allocated to an unrelated project, if it keeps a promising young student afloat? On these so-called “gray” areas of research behavior, researchers are far less certain about what’s acceptable. A few years ago, David R. Johnson at the University of Nevada, Reno and Elaine Howard Ecklund at Rice University interviewed hundreds of physicists; their conclusions appeared recently in Science and Engineering Ethics (and online in 2015).

Retraction Watch: Your paper discusses “ethical ambiguity” – what does that mean? Can you provide examples of such behavior?

Written by Alison McCook

April 20th, 2017 at 12:00 pm

Need to find a replication partner or collaborator? There’s an online platform for that

Christopher Chartier

Randy McCarthy

Do researchers need a new “Craigslist?” We were recently alerted to a new online platform called StudySwap by one of its creators, who said it was partially inspired by one of our posts. The platform creates the kind of “online marketplace” that researchers have previously called for, connecting scientists with willing partners – such as a team looking for someone to replicate its results, and vice versa. As co-creators Christopher Chartier at Ashland University and Randy McCarthy at Northern Illinois University tell us, having a place where researchers can find each other more efficiently “is in everyone’s best interest.”

Retraction Watch: What inspired you to create StudySwap?

Written by Alison McCook

April 19th, 2017 at 10:19 am

Should retractions ever lead to refunds of page charges?

Recently, a reader contacted us with an interesting scenario: He’d recently heard about an author who asked for a refund of his page charges after he had to retract a paper for an honest error.

The scenario raised questions we’d never considered before. On the one hand, page charges often cover work that was completed in order to publish the paper, such as typesetting, printing, and distribution. That work happened, regardless of whether or not the paper was eventually retracted. On the other hand, researchers often depend on grants to cover publication fees, and if a paper is retracted, they may not be able to charge the grant, leaving them out of pocket.

If there is a fundamental problem with the paper – one the journal could have caught during editing and peer review – does that leave the journal partly responsible for shouldering some of the cost? And what if the article was retracted due to a publishing error, such as the journal posting the wrong version, or the same version twice?

Written by Alison McCook

March 28th, 2017 at 11:30 am

Why traditional statistics are often “counterproductive to research in the human sciences”

Andrew Gelman

Doing research is hard. Getting statistically significant results is hard. Making sure the results you obtain reflect reality is even harder. In this week’s Science, Eric Loken at the University of Connecticut and Andrew Gelman at Columbia University debunk some common myths about the use of statistics in research — and argue that, in many cases, traditional statistics do more harm than good in human sciences research.

Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and how is it created during an experiment?

Written by Alison McCook

February 9th, 2017 at 2:00 pm

Watch out for predatory journals, and consider retract/replace, suggests medical journal group

Darren Taichman

The challenges facing science publishing are ever-evolving, and so too are the recommendations for how to face them. As such, the International Committee of Medical Journal Editors (ICMJE) frequently updates its advice to authors. In December 2016, it made some notable changes – specifically, asking authors to pay closer attention to where they publish, in order to avoid so-called “predatory” journals, and encouraging more authors to consider “retracting and replacing” a paper with an updated version when the problems stem from honest error (something more journals have been embracing). We spoke with Darren Taichman, Executive Deputy Editor of the Annals of Internal Medicine and Secretary of the ICMJE, about the changes.

Retraction Watch: The first set of recommendations was issued in 1978 — how have they evolved, generally speaking, since then?

Written by Alison McCook

January 13th, 2017 at 11:30 am

Dopey dupe retractions: How publisher error hurts researchers

Adam Etkin

Ivan Oransky

Not all retractions result from researchers’ mistakes — we have an entire category of posts known as “publisher errors,” in which publishers mistakenly post a paper, through no fault of the authors. Yet those retractions can become a black mark on authors’ records. Our co-founder Ivan Oransky and Adam Etkin, Executive Editor at Springer Publishing Co (unrelated to Springer Nature), propose a new system in the latest issue of the International Society of Managing & Technical Editors newsletter, reprinted with permission below.

Imagine you’re a researcher who is one of 10 candidates being considered for tenure, or a promotion, or perhaps a new job which would significantly advance your career. Now imagine that those making this decision eliminate you as a candidate without even an interview because your record shows you’ve had a paper retracted. But in this particular case, what the decision makers may not be aware of is that the paper was not retracted because you made an honest mistake—which, if you came forward about it, really shouldn’t be a black mark anyway—or even because you did something unethical. It was retracted due to publisher error. Like Han Solo and/or Lando Calrissian, you’d find yourself in utter disbelief while saying “It’s not my fault!”— and you’d be right.

Written by Ivan Oransky

December 16th, 2016 at 9:30 am

Journal’s new program: Choose your own reviewers – and get a decision in days

Michael Imperiale

Peer review has numerous problems: Researchers complain that it takes too long, and sometimes that it is not thorough enough, letting obviously flawed papers enter the literature. Authors are often in the best position to know who the top experts in their field are, but how can we be sure they’ll choose someone who won’t just rubber-stamp their paper? A new journal – mSphere, an open-access microbial sciences journal that is only one year old – has proposed a new solution. Early next year, it is launching a project called mSphereDirect, designed to improve the publication process for authors. We spoke with Mike Imperiale, editor-in-chief at mSphere, about how this system will work.

Retraction Watch: So let’s start with how the program will work, exactly. Can you explain?

Written by Alison McCook

December 12th, 2016 at 9:00 am

How a Cell journal weeds out the “bad apples”

Anne Granger (left) and Nikla Emambokus (right)

There are a lot of accusations about research misconduct swirling around, and not every journal handles them the same. Recently, Cell Metabolism Scientific Editor Anne Granger and Cell Metabolism Editor-in-Chief Nikla Emambokus shared some details about their investigative procedure in “Weeding out the Bad Apples.” We talked to them about why they don’t necessarily trust accusations leveled on blogs (including ours), but will consider the concerns of anyone who approaches the journal directly – even anonymously.

Retraction Watch: What made you decide to write an editorial about research fraud now?

Written by Alison McCook

November 25th, 2016 at 9:30 am

We are judging individuals and institutions unfairly. Here’s what needs to change.

Yves Gingras

The way we rank individuals and institutions simply does not work, argues Yves Gingras, Canada Research Chair in the History and Sociology of Science, based at the University of Quebec in Montreal. He should know: In 1997, he cofounded the Observatoire des sciences et des technologies, which measures innovation in science and technology, and where he is now scientific director. In 2014, he wrote a book detailing the problems with our current ranking system, which has now been translated into English. Below, he shares some of his conclusions from “Bibliometrics and Research Evaluation: Uses and Abuses.”

Retraction Watch: You equate modern bibliometric rankings of academic performance to the fable of the Emperor’s New Clothes, in which no one dares to tell a leader that his supposedly magnificent suit does not exist – that he is, in fact, naked. Why did you choose that metaphor?

Written by Alison McCook

November 8th, 2016 at 11:30 am