We’ve always liked to highlight cases in which scientists do the right thing and retract problematic papers themselves, rather than being forced to by editors and publishers. According to a new paper by economists and management scholars, other scientists reward that sort of behavior, too.
The study by Benjamin Jones of the Kellogg School of Management at Northwestern University and the National Bureau of Economic Research and colleagues, “The Retraction Penalty: Evidence from the Web of Science,” was published yesterday in Scientific Reports, a Nature Publishing Group title.
The Journal of the American Chemical Society has retracted a 2009 paper on ethylene polymerization after the authors said they were unable to replicate their findings.
The article, “Bimetallic Effects for Enhanced Polar Comonomer Enchainment Selectivity in Catalytic Ethylene Polymerization,” came from the lab of Tobin Marks, a highly decorated — and grant-and-royalty-generating — chemist at Northwestern University.
Here’s a retraction that leaves us itching to know more:
The authors of a recent paper in the European Journal of Preventive Cardiology on nut intake and the risk of high blood pressure and diabetes have pulled their article from publication for an undisclosed conflict of interest.
Now, you wouldn’t know this unless you were willing to pony up the $32 to read the notice, which is behind a pay wall — something that drives us, well, nuts. But here it is:
Two American College of Cardiology conference abstracts published earlier this year in the Journal of the American College of Cardiology (JACC) have been retracted, one because the authors were actually measuring something other than what they reported, and the other because newer software invalidated the results.
Here’s the notice for “Worsening of Pre-Existing Valvulopathy With A New Obesity Drug Lorcaserin, A Selective 5-Hydroxytryptamine 2C Receptor Agonist: A Meta-Analysis of Randomized Controlled Trials” by Hemang B. Panchal, Parthav Patel, Brijal Patel, Rakeshkumar Patel, and Henry Philip of East Tennessee State University:
Last week, we reported that some of the authors of a 2010 paper in the BMJ claiming to have identified Henry IV’s head thought the study should be retracted based on new evidence. Some of the other authors have now responded to that call for retraction, which appeared on the BMJ’s site alongside the paper.
With apologies to Dana Carvey, Bioorganic & Medicinal Chemistry Letters has chopped a 2012 paper on the molecular constituents of broccoli florets after readers evidently were forced to do the job of reviewers and point out fatal flaws in the study.
The article, “Two novel bioactive glucosinolates from Broccoli (Brassica oleracea L. var. italica) florets,” came from a group in South Korea and has yet to be cited, according to Thomson Scientific’s Web of Knowledge. But according to the retraction notice, after publication, critics pointed out serious problems with the work. To wit:
The Journal of Biological Chemistry has a fairly gory correction — we’d call it a mega-correction — for a 2010 paper by Levon Khachigian, an Australian researcher whose studies of a new drug for skin cancer recently were halted over concerns about possible misconduct, including image manipulation. As we reported earlier this year, Khachigian has already lost four papers, including one in the JBC — which the journal simply noted had “been withdrawn by the authors.”
Recently I heard that a graduate student was told by their advisor, ‘Don’t do a t-test, it’s not publishable.’ This seems ridiculous to me, as the t-test is a robust way to test a hypothesis. So my question is: is the t-test no longer publishable? And if so, is this true only for higher-tier journals, or for all peer-reviewed journals?
I would very much appreciate hearing the opinions of your readers on this issue – do they feel they need to run more ‘elaborate’ statistics (e.g., multivariate, modeling, etc.) in order for their research to be publishable? And if so, do researchers knowingly violate the assumptions of these more elaborate statistical tests so they can be ‘publishable’?
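For readers who haven’t run one in a while, here is a minimal sketch of just how little machinery a t-test requires, in Python with SciPy. The data are simulated purely for illustration; nothing here comes from any study discussed above.

```python
# A minimal sketch (not from the original post): an independent two-sample
# t-test in Python with SciPy. The data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

# Welch's t-test (equal_var=False) drops the equal-variance assumption,
# one of the assumptions the reader worries researchers knowingly violate.
result = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```

The Welch variant is worth noting in light of the second question: rather than quietly violating the equal-variance assumption, it simply relaxes it.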