Did you recently log onto your favorite journal’s website and see this? (For anyone who doesn’t want to bother clicking, it’s the video from Rick Astley’s “Never Gonna Give You Up.”) If so, your favorite journal was hijacked.
We’ve stumbled upon a trio of retractions, published in August 2013 by BMJ Case Reports for “redundant publication,” involving a group of researchers based in India.
Editors found that the reports, which were published between 2012 and 2013, had considerable “overlaps” with articles that had been published in other journals. Although one author of the retracted papers was also an author on one of the overlapping articles, the rest of the authors have no obvious connection to the previous work.
The authors of the three retracted papers are based at the Modern Dental College and Research Centre in India.
The investigation, by the Istituto Nazionale per la Ricerca sul Cancro, found that there were multiple “figure anomalies.” According to the note:
An explanation of inadvertent error was given for some of the issues identified, while for two issues, a satisfactory explanation could not be provided.
First author Roberto Gherzi says none of his co-authors helped prepare the figures. The authors maintain that the conclusions are unaffected, but that assurance wasn’t enough for the journal. Here’s more from the lengthy retraction note, which provides some backstory on the “serious concerns” regarding the data:
A highly cited cancer researcher at MD Anderson has notched three major corrections, all associated with problems in figures. One note cites “human error” as the cause.
A case report that detailed the removal of a cyst from the side of a young woman’s face has been retracted for plagiarizing text from a similar case report published two years earlier.
When two papers include the same images of rat hearts, one of those papers gets retracted.
The papers share a corresponding author, Zhi-Qing Zhao of Mercer University School of Medicine in Savannah, Georgia. This marks his third retraction; we reported on two others earlier this year.
This accepted manuscript has been retracted because the journal is unable to verify reviewer identities.
Sounds like another case of faked emails to generate fake peer reviews, right? But that’s not what happened to this paper, according to the editor in chief of Antimicrobial Agents and Chemotherapy, Louis B. Rice, a professor at Brown University:
Science is fixing images in a paper published online in April that reported the discovery of an immune-boosting protein, after the authors mistakenly mixed up similar-looking Western blots.
This is exciting because we have found a completely different way to use the immune system to fight cancer.
The editor in chief of Science, Marcia McNutt, told us that the journal contacted the authors once it learned of “irregularities” in some of the figures, which did not affect the conclusions of the paper:
We’re pleased to present a guest post from Michèle B. Nuijten, a PhD student at Tilburg University who helped develop a program called “statcheck,” which automatically spots statistical mistakes in psychology papers, making it significantly easier to find flaws. Nuijten writes about how such a program came about, and its implications for other fields.
Readers of Retraction Watch know that the literature contains way too many errors – to a great extent, some research suggests, in my field of psychology. And there is evidence that the problem is only likely to get worse.
To reliably investigate these claims, we wanted to study reporting inconsistencies at a large scale. However, extracting statistical results from papers and recalculating the p-values is not only very tedious, it also takes a LOT of time.
So we created a program known as “statcheck” to do the checking for us, by automatically extracting statistics from papers and recalculating p-values. Unfortunately, we recently found that our suspicions were correct: half of the papers in psychology contain at least one statistical reporting inconsistency, and one in eight papers contains an inconsistency that might have affected the statistical conclusion.
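The core idea behind statcheck is straightforward: find reported test results in the text, recompute the p-value from the test statistic and its degrees of freedom, and flag mismatches. statcheck itself is an R package covering t, F, chi-square, r, and z tests; as a rough illustration of the approach, here is a minimal Python sketch for z-tests only, using a hypothetical results sentence (the regex format and tolerance are assumptions, not statcheck’s actual rules):

```python
import math
import re

# Matches hypothetical reports of the form "z = 2.20, p = .028".
REPORT_RE = re.compile(r"z\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*=\s*(\.\d+)")

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_report(text: str, tol: float = 0.005):
    """Extract (z, reported p) pairs and flag inconsistencies.

    Returns tuples of (z, reported_p, recomputed_p, consistent), where
    'consistent' means the reported p matches the recomputed value
    within a rounding tolerance.
    """
    results = []
    for z_str, p_str in REPORT_RE.findall(text):
        z, reported = float(z_str), float(p_str)
        recomputed = two_tailed_p(z)
        results.append((z, reported, recomputed, abs(recomputed - reported) <= tol))
    return results

# Hypothetical sentence from a results section: the first p-value is
# consistent with its z statistic, the second is not.
sentence = "We found an effect (z = 2.20, p = .028) but not (z = 1.50, p = .045)."
for z, rep, rec, ok in check_report(sentence):
    print(f"z = {z}: reported p = {rep}, recomputed p = {rec:.3f}, consistent = {ok}")
```

A real checker has to handle one-tailed tests, inequality reports like “p &lt; .05”, and the distributions for t, F, and chi-square statistics, but the extract-recompute-compare loop is the same.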
A group of computer scientists has a pair of retractions for duplicating “substantial parts” of other articles written by different authors. Both papers, published in Neural Computing and Applications, are on ways to screen for breast cancer more effectively.
According to the abstract of “An improved data mining technique for classification and detection of breast cancer from mammograms,” computers make the process of identifying cancer in lesions detected by mammograms faster and more accurate:
Although general rules for the differentiation between benign and malignant breast lesion exist, only 15–30% of masses referred for surgical biopsy are actually malignant. Physician experience of detecting breast cancer can be assisted by using some computerized feature extraction and classification algorithms. Computer-aided classification system was used to help in diagnosing abnormalities faster than traditional screening program without the drawback attribute to human factors.
The article has been cited four times, according to Thomson Scientific’s Web of Knowledge. The retraction note reveals where “substantial parts” of the article came from: