A group of researchers in China have lost their paper on liver cancer after the first author admitted to duplication, also known, inelegantly, as self-plagiarism.
The paper, “Glycyrrhetinic acid-modified chitosan nanoparticles enhanced the effect of 5-fluorouracil in murine liver cancer model via regulatory T-cells,” appeared in the July 2013 issue of Drug Design, Development and Therapy, a Dove Press title.
In what seems to be an example of researchers swiftly and transparently acknowledging errors and correcting the literature, a pair of scientists have retracted a 2013 paper from Nature.
This one seems like an honest mistake: a paper on dietary supplements during pregnancy has been retracted because of an error in data recording.
In the BMC Pregnancy and Childbirth paper, “Folic acid supplementation, dietary folate intake during pregnancy and risk for spontaneous preterm delivery: a prospective observational cohort study,” women for whom the researchers had no data on folic acid supplementation were classified as taking no supplements. Despite the error, the authors claim the overall conclusion remains the same: taking folic acid supplements didn’t protect women from preterm deliveries.
A paper in Applied Physics Letters that earned an erratum shortly after its 2009 publication has now been retracted for the “regrettable mistake” of duplicating an earlier paper by the same researchers.
After typing up 96 citations, researchers from the National Institute for Digestive Diseases, I.R.C.C.S. “S. de Bellis,” in Bari, Italy, apparently ran out of steam for the last five, earning themselves a retraction for plagiarism in a literature review of the effects of probiotics on intestinal cancer.
In the meantime, the plasma scientists withdrew their paper from consideration and submitted it to IEEE Transactions on Plasma Science, where it was published in February 2013. Unfortunately, in the four-year delay between the conference and the Institute of Physics publication, the withdrawal request got lost.
From Larry Summers to James Watson, certain scientists have a long and questionable tradition of using “data” to make claims about intelligence and aptitude.
So it’s no surprise that, when well-known computer scientist Richard Bornat claimed his PhD student had created a test to separate people who would succeed at programming from those who wouldn’t, people happily embraced it. After all, it’s much easier to say there’s a large population that will just never get it than to re-examine your teaching methods.
The paper, called “The camel has two humps,” suggested that, instead of a bell curve, programming success rates look more like the back of a two-humped ungulate: the kids who get it, and the kids who never will.
The authors of a 2012 paper in the journal Interface have had the journal issue an expression of concern after problems with “some of the data and methods” came to light.
PLOS ONE has retracted a 2012 article by a group of breast cancer researchers after another scientist — a leading U.S. oncologist — objected that the data came from his lab.