I’m writing regarding a recent query from an author about citation of a retracted article. The author is currently writing up a paper where the initial investigations were at least partially inspired by a paper that has recently been retracted. The author wants to recognise the influence of that work on the new study, but also recognises that – since the paper has been retracted – it would not be appropriate simply to cite it as though it were still a published paper. This isn’t a situation we’ve come across before, and I’m not sure how best to advise the author. Is it acceptable to discuss the findings of that paper provided the text clearly mentions that the paper has since been retracted? And how should this be cited in the reference list – citation to the original paper, to the retraction notice, or not at all? As experts in this area, any guidance you could provide would be greatly appreciated.
Some types of misconduct are obvious – most researchers would agree that cooking data and plagiarizing someone else’s work are clear no-nos. But what about overhyping your findings? Or using funding allocated to an unrelated project, if it keeps a promising young student afloat? In these so-called “gray” areas of research behavior, the right course of action is much less clear. A few years ago, David R. Johnson at the University of Nevada, Reno and Elaine Howard Ecklund at Rice University interviewed hundreds of physicists; their conclusions appeared recently in Science and Engineering Ethics (and online in 2015).
Retraction Watch: Your paper discusses “ethical ambiguity” – what does that mean? Can you provide examples of such behavior?
A reader recently contacted us with an interesting scenario: He’d heard about an author who asked for a refund of his page charges after having to retract a paper for an honest error.
The scenario raised questions we’d never considered before. On the one hand, page charges often cover work that was completed in order to publish the paper, such as typesetting, printing, and distribution. That work happened, regardless of whether or not the paper was eventually retracted. On the other hand, researchers often depend on grants to cover publication fees, and if a paper is retracted, they may not be able to charge the grant, leaving them out of pocket.
If there is a fundamental problem with the paper – one the journal could have caught during editing and peer review – does that leave the journal partly responsible for shouldering some of the cost? And what if the article was retracted due to a publishing error, such as the journal posting the wrong version, or the same version twice?
Doing research is hard. Getting statistically significant results is hard. Making sure the results you obtain reflect reality is even harder. In this week’s Science, Eric Loken at the University of Connecticut and Andrew Gelman at Columbia University debunk some common myths about the use of statistics in research — and argue that, in many cases, the use of traditional statistics does more harm than good in human sciences research.
Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and how is it created during an experiment?
The challenges facing science publishing are ever-evolving, and so too are the recommendations for how to face them. As such, the International Committee of Medical Journal Editors (ICMJE) frequently updates its advice to authors. In December 2016, it made some notable changes – specifically, asking authors to pay closer attention to where they publish, in order to avoid so-called “predatory” journals, and encouraging more authors to consider “retracting and replacing” a paper with an updated version when the problems stem from honest error (an approach more journals have been embracing). We spoke with Darren Taichman, Executive Deputy Editor of the Annals of Internal Medicine and Secretary of the ICMJE, about the changes.
Retraction Watch: The first set of recommendations was issued in 1978 — how have they evolved, generally speaking, since then?
Not all retractions result from researchers’ mistakes — we have an entire category of posts known as “publisher errors,” in which publishers mistakenly post a paper, through no fault of the authors. Yet those retractions can become a black mark on authors’ records. Our co-founder Ivan Oransky and Adam Etkin, Executive Editor at Springer Publishing Co (unrelated to Springer Nature), propose a new system in the latest issue of the International Society of Managing & Technical Editors newsletter, reprinted with permission below.
Imagine you’re a researcher who is one of 10 candidates being considered for tenure, or a promotion, or perhaps a new job that would significantly advance your career. Now imagine that those making this decision eliminate you as a candidate without even an interview because your record shows you’ve had a paper retracted. But in this particular case, what the decision makers may not be aware of is that the paper was not retracted because you made an honest mistake—which, if you came forward about it, really shouldn’t be a black mark anyway—or even because you did something unethical. It was retracted due to publisher error. Like Han Solo and/or Lando Calrissian, you’d find yourself in utter disbelief while saying “It’s not my fault!”— and you’d be right.
Peer review has numerous problems: Researchers complain it takes too long, yet it is sometimes not thorough enough, letting obviously flawed papers enter the literature. Authors are often in the best position to know who the top experts in their field are, but how can we be sure they’ll choose someone who won’t just rubber-stamp their paper? mSphere – an open-access microbial sciences journal only one year old – has proposed a solution. Early next year, it’s launching a project called mSphereDirect to improve the publication process for authors. We spoke with Mike Imperiale, editor-in-chief at mSphere, about how this system will work.
There are a lot of accusations of research misconduct swirling around, and not every journal handles them the same way. Recently, Cell Metabolism Scientific Editor Anne Granger and Cell Metabolism Editor-in-Chief Nikla Emambokus shared some details about their investigative procedure in “Weeding out the Bad Apples.” We talked to them about why they don’t necessarily trust accusations leveled on blogs (including ours), but will consider the concerns of anyone who approaches the journal directly – even anonymously.
The way we rank individuals and institutions simply does not work, argues Yves Gingras, Canada Research Chair in the History and Sociology of Science, based at the University of Quebec in Montreal. He should know: In 1997, he cofounded the Observatoire des sciences et des technologies, which measures innovation in science and technology, and where he is now scientific director. In 2014, he wrote a book detailing the problems with our current ranking system, which has now been translated into English. Below, he shares some of his conclusions from “Bibliometrics and Research Evaluation: Uses and Abuses.”