Additional lab tests, a clinical trial patient registry, and rewards for honesty are among the recommendations offered in this week's issue of the New England Journal of Medicine to help researchers combat a major problem: participants lying to get into clinical trials.
Justus Liebig University in Germany has been investigating concerns that Joachim Boldt, number two on the Retraction Watch Leaderboard and now up to 92 retractions, may have “manipulated” more data than previously believed.
Until now, the vast majority of Boldt's retractions were thought to involve inadequate ethics approval. New retraction notices for Boldt's research, however, suggest that the researcher also engaged in significant data manipulation.
The first retraction from the university investigation emerged last year. Two of three new notices cite the investigation specifically, and an informant at the university told us that there are more retractions to come.
Here are the newly retracted papers, starting with an August retraction of a 1991 Anesthesiology paper (cited 37 times, according to Thomson Scientific's Web of Knowledge):
More than one third (35%) of the world's top-ranked science journals that responded to a survey don't have a retraction policy, according to a new study. And that's a dramatic improvement over the findings of a similar study a little more than a decade ago.
As we’ve written previously, John Carlisle, an anesthesiologist in the United Kingdom, analyzed nearly 170 papers by Yoshitaka Fujii and found aspects of the reported data to be astronomically improbable. It turns out, however, that he made a mistake which, while not fatal to his initial conclusions, required fixing in a follow-up paper, titled “Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials,” also published in Anaesthesia.
In a paper that might be filed under “careful what you wish for,” a group of psychology researchers warns that the push to replicate more research, the focus of much recent attention, won’t do enough to improve the scientific literature. In fact, it could worsen some problems, namely the bias toward positive findings.
Last month, the community was shaken when a major study on gay marriage in Science was retracted following questions about its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper, which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post on how co-authors can carry out their responsibilities to each other and to the community.
Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.
Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.
The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”