The secretary general of the Nobel Assembly, the body that selects the Nobel Prize in Physiology or Medicine, has resigned from his post because "he may be involved" in the Karolinska Institutet investigation of trachea surgeon Paolo Macchiarini.
An investigation at the University of New South Wales in Australia has led to a fifth retraction for a cancer researcher long accused of misconduct, due to “unresolvable concerns” with some images.
As we reported in December, UNSW cleared Levon Khachigian of misconduct, concluding that his previous issues stemmed from "genuine error or honest oversight." Now, Circulation Research is retracting one of his papers after an investigation commissioned by UNSW was unable to find electronic records for two similar images from a 2009 paper, or any record of the images in the original lab books.
Again, the retraction note affirms that this is not a sign of misconduct:
UNSW has not attributed any instance of research misconduct or responsibility for the unavailability of the original data to Professor Khachigian or to any of the authors of the publication.
For all our talk about finding new ways to measure scientists’ achievements, we’ve yet to veer away from a focus on publishing high-impact papers. This sets up a system of perverse incentives, fueling ongoing problems with reproducibility and misconduct. Is there another way? Sarah Greene, founder of Rapid Science, believes there is – and we’re pleased to present her guest post describing a new way to rate researchers’ collaborative projects.
In science, we still live – and die – by the published paper. Discontent with the Impact Factor, the H-Index, and other measures of article citation in scholarly journals has led to altmetrics' quantification of an article's impact in social media outlets – e.g., Twitter, blogs, newspapers, magazines, and bookmarking systems. But discussions of alternative reward systems rarely swerve from this genuflection before the scientific paper. Consequently, publishing big, "positive" findings in high-impact journals (and now in social media) remains the researcher's Holy Grail.
The Karolinska Institutet University Board announced today it was issuing a new external investigation of trachea surgeon Paolo Macchiarini, looking into questions about his recruitment and the handling of previous allegations of misconduct.
The University Board deems such an inquiry to be an important part of restoring the confidence of the public, the scientific community, staff and students in the university.
The board hopes to appoint the investigative team, which will not consider “matters of a medical-scientific nature,” next week. The goal is to conclude the investigation by the summer.
An engineer has retracted three papers on a method for making nanoscale materials that are useful in solar cells.
The papers, all published in ACS Applied Materials & Interfaces, contain irregularities in data, and one includes images “which have been published elsewhere and identified with different samples,” according to the note.
The first author on all three papers is Khalid Mahmood, who — according to the bio from a talk he gave last year on efficient solar cells — is currently a postdoc at King Abdullah University of Science and Technology in Saudi Arabia. He did the work in the retracted papers while a student at the Korea Advanced Institute of Science and Technology, where, according to the bio, he completed his PhD in two years.
Karolinska Institutet announced today it would not extend the contract of star surgeon Paolo Macchiarini. He has been instructed to “phase out” his research from now until November 30.
If audits work for the Internal Revenue Service, could they also work for science? We’re pleased to present a guest post from Viraj Mane, a life sciences commercialization manager in Toronto, and Amy Lossie at the National Institutes of Health, who have a unique proposal for how to improve the quality of papers: Random audits of manuscripts.
Skim articles, books, documentaries, or movies about Steve Jobs and you’ll see that ruthlessness is the sine qua non of some of our greatest business leaders. It would be naïve to assume that scientists somehow resist these universal impulses toward leadership, competition, and recognition. In the white-hot field of stem cell therapy, where promising discoveries attract millions of dollars, egregious lapses in judgment and honesty have been uncovered in Japan, Germany, and South Korea. The nature of the offenses ranged from fraudulent (plagiarism and duplication of figures) to horrifying (female subordinates coerced into donating their eggs).
When a researcher embraces deception, the consequences extend well beyond the involved parties. Former physician Andrew Wakefield published a study claiming a link between the MMR vaccine and autism using overtly substandard statistical and experimental methods, while hiding how his financial compensation was tied to the very hysteria he helped unleash.
If you notice an obvious problem with a paper in your field, it should be relatively easy to alert the journal's readers to the issue, right? Unfortunately, for a group of nutrition researchers led by David B. Allison at the University of Alabama at Birmingham, that has not been the experience. Allison and his co-author Andrew Brown talked to us about a commentary they've published in today's Nature, which describes the barriers they encountered to correcting the record.
Retraction Watch: You were focusing on your field (nutrition), and after finding dozens of "substantial or invalidating errors," you had to stop writing letters to the authors or journals, simply because you didn't have time to keep up with it all. Do you expect the same number of significant errors to be present in papers from other fields?