Anders Hamsten announced he would be resigning as vice-chancellor of Karolinska Institutet (KI) in the early hours of Saturday, February 13.
In a press release we received at 12:16 a.m. local time in Stockholm, Hamsten issued the following statement:
Following the criticism on the so called Macchiarini affair at KI I conclude it will be hard for me to serve as Vice-Chancellor with the strength and credibility this university needs. I will therefore leave office.
The media has been abuzz in the last few weeks with developments in the ongoing story about “super surgeon” Paolo Macchiarini. We’ve been covering the allegations against him for years (and invited him to publish a guest post on our site). Below, we present a timeline of recent events, to keep you abreast of what we know so far.
Macchiarini was famous long before accusations of misconduct arose, once heralded for creating tracheas from cadavers and patients’ own stem cells. However, the glow of his success was diminished somewhat after some Karolinska Institutet (KI) surgeons filed a complaint in 2014 — alleging, for instance, that Macchiarini had downplayed the risks of the procedure and not obtained proper consent. In response, KI commissioned an external review by Bengt Gerdin of Uppsala University.
Why do so many PhD students publishing their medical theses in German resort to brazen plagiarism, even copying from people in their own research groups? We’re pleased to present a Q&A with Debora Weber-Wulff, based at the University of Applied Sciences HTW Berlin in Germany. She recently published a case study for the Council of Europe that shows a surprisingly high number of cases of plagiarism in medical PhD theses submitted to German universities, as well as a few in other European countries. Weber-Wulff is a member of the VroniPlag Wiki, a group of German-language scientists who have been scanning for — and publicly tracking — cases of plagiarism. They’ve published documentation on more than 155 cases so far, and begun investigations on over 200 more, including some very high-profile cases. We talked to Weber-Wulff about why plagiarism is such a problem in German medical PhD programs.
Stefan Franzen doesn’t give up. Ten years ago, he began to suspect the data behind his colleagues’ research about using RNA to make palladium nanoparticles, a potentially valuable tool that ended up as a Science paper. Recently, the National Science Foundation (NSF) decided to cut off funding for Bruce Eaton and Dan Feldheim — currently at the University of Colorado at Boulder — and last week, Science retracted the paper. We talked to Franzen, based at North Carolina State University (NCSU), about his decade-long efforts, and how it feels to be finally vindicated.
Retraction Watch: How did you first begin to suspect the findings by Eaton and Feldheim?
Stefan Franzen: Starting in early 2005, I was collaborating with Drs. Eaton and Feldheim at NCSU, thanks to two joint grants from the W.M. Keck Foundation and NSF. During a group meeting in December of 2005, a graduate student showed electron microscopy data that were inconsistent with the assignment of the particles as palladium. Over time, we kept producing more data that called their findings into question; in April 2006, a postdoc showed that the hexagonal particles could be obtained without RNA. By then, I could see that there was a significant discrepancy between what was written in the articles and what was done and observed in the laboratory.
Would designating a set of authors as responsible for data production – separate from those who conduct the analysis – help boost the reliability of papers? That’s a question raised by the editor of the New England Journal of Medicine, Jeffrey Drazen. Along with many other editors of top medical journals, Drazen recently signed a proposal by the International Committee of Medical Journal Editors to require authors of clinical trials to share anonymized patient data within six months of publication. He talked to us about another way to make trials more robust: Create “data authors.”
For all our talk about finding new ways to measure scientists’ achievements, we’ve yet to veer away from a focus on publishing high-impact papers. This sets up a system of perverse incentives, fueling ongoing problems with reproducibility and misconduct. Is there another way? Sarah Greene, founder of Rapid Science, believes there is – and we’re pleased to present her guest post describing a new way to rate researchers’ collaborative projects.
In science, we still live – and die – by the published paper. Discontent with the Impact Factor, the H-Index, and other measures of article citation in scholarly journals has led to altmetrics’ quantification of an article’s impact in social media outlets—e.g., Twitter, blogs, newspapers, magazines, and bookmarking systems. But discussions regarding alternative reward systems do not generally swerve from this genuflection toward the scientific paper. Consequently, publishing big, “positive” findings in high-impact journals (and now in social media) remains the researcher’s Holy Grail.
The Karolinska Institutet University Board announced today it was commissioning a new external investigation of trachea surgeon Paolo Macchiarini, looking into questions about his recruitment and the handling of previous allegations of misconduct.
The University Board deems such an inquiry to be an important part of restoring the confidence of the public, the scientific community, staff and students in the university.
The board hopes to appoint the investigative team, which will not consider “matters of a medical-scientific nature,” next week. The goal is to conclude the investigation by the summer.
Karolinska Institutet announced today it would not extend the contract of star surgeon Paolo Macchiarini. He has been instructed to “phase out” his research from now until November 30.
If audits work for the Internal Revenue Service, could they also work for science? We’re pleased to present a guest post from Viraj Mane, a life sciences commercialization manager in Toronto, and Amy Lossie at the National Institutes of Health, who have a unique proposal for how to improve the quality of papers: Random audits of manuscripts.
Skim articles, books, documentaries, or movies about Steve Jobs and you’ll see that ruthlessness is the sine qua non of some of our greatest business leaders. It would be naïve to assume that scientists somehow resist these universal impulses toward leadership, competition, and recognition. In the white-hot field of stem cell therapy, where promising discoveries attract millions of dollars, egregious lapses in judgment and honesty have been uncovered in Japan, Germany, and South Korea. The nature of the offenses ranged from fraudulent (plagiarism and duplication of figures) to horrifying (female subordinates coerced into donating their eggs).
When a researcher embraces deception, the consequences extend well beyond the involved parties. Former physician Andrew Wakefield published a claimed link between MMR vaccines and autism using overtly substandard statistical and experimental methods, while hiding how his financial compensation was tied to the very hysteria he helped unleash.
If you notice an obvious problem with a paper in your field, it should be relatively easy to alert the journal’s readers to the issue, right? Unfortunately, for a group of nutrition researchers led by David B. Allison at the University of Alabama at Birmingham, that has not been their experience. Allison and his co-author Andrew Brown talked to us about a commentary they’ve published in today’s Nature, which describes the barriers they encountered to correcting the record.
Retraction Watch: You were focusing on your field (nutrition), and after finding dozens of “substantial or invalidating errors,” you had to stop writing letters to the authors or journals, simply because you didn’t have time to keep up with it all. Do you expect the same number of significant errors are present in papers from other fields?