Should science put up with sloppiness?

That’s the question we pose in our newest column in LabTimes, based on some recent cases we’ve covered:

The implication seems to be that as long as researchers can pass off their mistakes as sloppiness, rather than intentional misconduct, they should be forgiven and allowed to carry on their work. We can go along with that logic, to a point; after all, we’ve argued before that due process is too important to skip, no matter how damning the evidence appears. And as long as corrections and retraction notices are detailed, telling the whole story, science and the public are served.

What happened to Joachim Boldt’s 88 papers that were supposed to be retracted?

CHICAGO — Almost two years after editors at 18 journals agreed in March 2011 to retract 88 of former retraction record holder Joachim Boldt’s papers, 10% of them hadn’t been retracted.

That’s what Nadia Elia, Liz Wager, and Martin Tramer reported here Sunday in an abstract at the Seventh International Congress on Peer Review and Biomedical Publication. Elia and Tramer are editors at the European Journal of Anaesthesiology, while Wager is the former chair of the Committee on Publication Ethics (COPE).

As of January 2013, nine of the papers hadn’t been retracted, Tramer said, while only five — all in one journal — had completely followed COPE guidelines, with adequate retraction notices made freely available and PDFs properly marked “Retracted.” From the abstract (see page 18): …

Are US behavioral science researchers more likely to exaggerate their results?

Daniele Fanelli

When Retraction Watch readers think of problematic psychology research, their minds might naturally turn to Diederik Stapel, who now has 54 retractions under his belt. Dirk Smeesters might also tickle the neurons.

But a look at our psychology category shows that psychology retractions are an international phenomenon. (Remember Marc Hauser?) And a new paper in the Proceedings of the National Academy of Sciences (PNAS) suggests that it’s behavioral science researchers in the U.S. who are more likely to exaggerate or cherry-pick their findings.

For the new paper, Daniele Fanelli — whose 2009 paper in PLoS ONE contains some of the best data on the prevalence of misconduct — teamed up with John Ioannidis, well known for his work on “why most published research findings are false.” They looked at …

“Why Has the Number of Scientific Retractions Increased?” New study tries to answer

The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered.

Their new paper tries to answer a question we’re almost always asked as a follow-up to data showing that the number of retractions grew ten-fold over the first decade of the 21st century. As the authors write: …

How well do journals publicize retractions?

A new paper in BMC Research Notes looks at the retraction class of 2008, and finds journals’ handling of those retractions less than optimal.

Evelynne Decullier and colleagues — including Hervé Maisonneuve, who was helpful to us for a recent post — found: …

Half of researchers have reported trouble reproducing published findings: MD Anderson survey

Readers of this blog — and anyone who has been following the Anil Potti saga — know that MD Anderson Cancer Center was the source of initial concerns about the reproducibility of the studies Potti and his supervisor, Joseph Nevins, were publishing in high-profile journals. So the Houston institution has a reputation for dealing with issues of data quality. (We can say that with a straight face even though one MD Anderson researcher, Bharat Aggarwal, has threatened to sue us for reporting on an institutional investigation into his work, and several corrections, withdrawals, and Expressions of Concern.)

We think, therefore, that it’s worth paying attention to a new study in PLOS ONE, “A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic,” by a group of MD Anderson researchers. They found that about half of scientists at the prominent cancer hospital report being unable to reproduce data in at least one previously published study. The number approaches 60% for faculty members: …

“Bird vocalizations” and other best-ever plagiarism excuses: A wrap-up of the 3rd World Conference on Research Integrity

What are the best excuses you’ve seen for plagiarism? James Kroll, at the National Science Foundation’s Office of Inspector General, has collected a bunch over the years: …

Come see Retraction Watch in Calgary, Fort Collins, Montreal, New York, and Seattle

The next several weeks are shaping up as busy ones for Retraction Watch, as we make appearances in five cities: …

Who deserves to be an author on a scientific paper?

Although authorship issues are not the most common reason we see for retractions, they’re among the most vexing. We’ve seen multiple cases in which papers are retracted because colleagues say the authors didn’t have the right to publish the data, for example. In other cases, authors who didn’t know a paper existed are surprised when it comes out.

So for our most recent column in LabTimes, we decided to look at these situations and try to answer some questions: …

One in twelve Belgian medical scientists admits having “made up and/or massaged data”: Survey

A recently released survey of Belgian scientists suggests that Flemish medical researchers admit to having made up or massaged data more often than their counterparts around the world.

The survey, by the Dutch science magazine Eos with the help of Joeri Tijdink, of VU University Medical Center in Amsterdam, and the Pascal Decroos Fund for Investigative Journalism, found that …