Three physicists at Imperial College London have retracted a paper on Coulomb collisions, a kind of fender bender between two charged particles, after realizing their equations were incorrect.
The mistake resulted in an erroneous conclusion about the strength of the collisions.
Last month, we reported on a 2012 paper in Interface whose authors had the journal issue an expression of concern because of problems with “some of the data and methods.” At the time, The Royal Veterinary College at the University of London was conducting an investigation into the research.
It’s always amusing to see how far a journal will bend over backward to avoid coming out and calling something “plagiarism.”
We’ve got two notices for you that exemplify the phenomenon, which we discussed in our Lab Times column last year.
The first, an article about apartheid, was presented at a student conference and published in Polyvocia: The SOAS Journal of Graduate Research. It was later retracted because the author “should have used quotation marks around material written verbatim from that source.”
A paper in Immunity has been retracted after two separate panels determined some of the figures “inappropriately presented” the data but cleared the team of wrongdoing.
However, the original data are now unavailable, according to the notice, so there’s no way to know if the paper’s conclusions are sound.
A panel reviewing The BMJ’s handling of two controversial statin papers said the journal didn’t err when it corrected, rather than retracted, the articles.
The articles — a research paper and a commentary — suggested that use of statins in people at low risk for cardiovascular disease could be doing far more harm than good. Both articles inaccurately cited a study that provided data important to their conclusions — an error pointed out vigorously by a British researcher, Rory Collins, who demanded that the journal pull the pieces.
From Larry Summers to James Watson, certain scientists have a long and questionable tradition of using “data” to make claims about intelligence and aptitude.
So it’s no surprise that, when well-known computer scientist Richard Bornat claimed his PhD student had created a test to separate people who would succeed at programming from those who wouldn’t, people happily embraced it. After all, it’s much easier to say there’s a large population that will just never get it than to re-examine your teaching methods.
The paper, called “The camel has two humps,” suggested that, rather than following a bell curve, programming success rates look more like a two-humped ungulate: the kids who get it, and the kids who never will.
The authors of a 2012 paper in the journal Interface have had the journal issue an expression of concern about it after problems with “some of the data and methods” came to light.
Benjamin Barré, a genetics researcher who recently set up his own group at the University of Angers, is retracting four papers he worked on as a graduate student and postdoc.