At a time when you can set up a Google alert to find out when your name appears anywhere on the Web — not that we’d know, of course — it puzzles us that some researchers are trying to get away with using others’ names on papers without their knowledge.
But they’re not just trying. Our recent experience suggests they’re actually getting away with it and seeing those papers in print. We’ve found at least six cases of that in the past few months. Of course, some eventually get caught.
That’s the world’s most harvested crab species and a particular favorite in Asia. But don’t confuse it with the Chesapeake Bay blue crab, Callinectes sapidus, of William Warner’s Pulitzer Prize-winning book, Beautiful Swimmers.
The problem with such elemental tracers, it seems, is that crabs moult repeatedly, shedding their shells, along with the elements that build up inside them. According to researchers from the Research Institute for Advanced Science and Technology and the Osaka Prefectural Fisheries Experimental Station, however, a volatile form of cobalt, previously undetected, could be a suitable element for tracing the growth of both crabs and prawns — another important aquaculture species in Asia — over time.
In her CSE presentation, she discusses what editors can and can’t do to ferret out fraud. Make sure to read through to the end, where she discusses a study finding that journal editors are much more likely to think fraudulent results are appearing in other journals than in their own. (Hint: If you’re right that it’s happening in someone else’s journal, and the editor of that journal thinks it’s happening in yours, well…)
have recently discovered that the cell lines used in their paper were inadvertently misidentified. The cell lines utilized in the paper have now been found to contain the bcr/abl translocation and most likely represent the K562 CML cell line, instead of MMS1 and RPM1 myeloma cell lines. Due to this issue, the relevance of the findings to myeloma and thus, the conclusions of the paper, are not supported by the data. The authors apologize to the readers, reviewers, and editors of Blood for publishing these erroneous data.
That seems straightforward enough, and we couldn’t find any evidence that this problem affected other publications.
For those Retraction Watch readers who have been following the case of Anil Potti — who has now retracted four papers — Keith Baggerly’s name will likely be familiar. Baggerly is the bioinformatician at M.D. Anderson in Houston who has been publicly questioning, in letters, papers, and The Cancer Letter, work by Potti et al.
Yesterday, Baggerly gave a keynote at the Council of Science Editors meeting in Baltimore. It was a fascinating — and riveting — walk through how, after a group at M.D. Anderson asked him and his team to evaluate the Potti group’s tools for predicting whether given patients would respond to different chemotherapies, Baggerly’s group unraveled the Potti research.
In his talk, Baggerly demonstrated all of the mislabeling and other easily recognized errors his team found when they sifted through the raw data. And yet there were a number of wince-inducing moments in which Baggerly described the cool reception he had from several journals.
There have been many calls recently for journals to require that authors deposit their data. None is more powerful than one that comes at the end of a talk showing how such a requirement could have stopped a faulty clinical trial from ever starting.
Baggerly told Retraction Watch he just wants this story to get the widest attention possible, so he was glad to allow us to post his slides. They get appropriately technical, given the crowd, but it’s worth it. You can follow a very unofficial and rough transcript at this Twitter search, since Ivan live-tweeted the talk. Or just click over to the slideshow here.
The British Journal of Ophthalmology has retracted a 2006 paper which purported to show a link between drugs for erectile dysfunction and a rare form of sudden vision loss called non-arteritic anterior ischaemic optic neuropathy, more commonly known as “Viagra blindness.”
That wouldn’t be terribly interesting, except for this: One of the authors of the paper, a researcher at the University of Alabama named Gerald McGwin Jr., told us that the journal retracted the article because it had become a tool in a lawsuit involving Pfizer, which makes Viagra, and, presumably, men who’d developed blindness after taking the drug:
The article just became too much of a pain in the rear end. It became one of those things where we couldn’t provide all the relevant documentation [to the university, which had to provide records for attorneys]
Ultimately, however, McGwin said that the BJO pulled the plug on the paper.
It was really the journal’s decision to take it out of the literature.
Misconduct happens. So what can journal editors do to find and prevent it?
While we don’t claim to be experts in working on the other side of the fence — that is, as editors — Ivan was flattered to be asked by session organizers at the Council of Science Editors to appear on a panel on the subject. He was joined on the panel by:
Their presentations were chock-full of good tips and data. Bradford, for example, said that Science had published 45 retractions since 1997. And Laine recommended copying all of a manuscript’s authors on every communication, which could help prevent author forgery that seems to be creeping into the literature.
The debate — in entrenched medical circles, anyway — over whether it’s safe to give birth at home can be fierce. Just last month, for example, Nature reported that a review of the subject in the American Journal of Obstetrics & Gynecology, which found home births more dangerous than those in the hospital, generated so much controversy that it forced an investigation. Outside reviewers found problems, but the journal didn’t think they rose to the level of a retraction. Critics disagreed.
The same fraught subject came up in a paper published in Feminism & Psychology last year by Mary Horton-Salway and Abigail Locke. The original paper had concluded:
Our analysis suggests that the normativity of medical interventions in labour and childbirth is discursively reproduced in ante-natal classes whilst parental choice is limited by a powerful ‘rhetoric of risk’.