Is peer review a good way to weed out problematic papers? And if it is, which kinds of peer review? In a new paper in Scientometrics, Willem Halffman, of Radboud University, and Serge Horbach, of Radboud University and Leiden University, used our database of retractions to try to find out. We asked them several questions about the new work.
On June 25, 2015, following an investigation into the work of a then-graduate student at University College Cork in Ireland, the senior author of a 2014 paper in PLOS ONE requested its retraction. The paper, said senior author Zubair Kabir in an email to Iratxe Puebla, the journal’s managing editor, was “fundamentally flawed.”
Puebla responded on July 1, saying she would contact the graduate student — Olurotimi Bankole Ajagbe, corresponding author of the paper — and get back to Kabir. More emails followed, including one on August 26, 2015, in which Ajagbe also requested the retraction. On August 31, Puebla wrote to Ivan Perry, head of Cork’s department of public health, where Ajagbe had been working on his PhD, to say she would discuss the case with colleagues and follow up.
Are current classification systems for research misconduct adequate? Toshio Kuroki — special advisor to the Japan Society for the Promotion of Science and professor emeritus at the University of Tokyo and Gifu University — thinks the answer is no. In a new paper in Accountability in Research, Kuroki — who has published on research misconduct before — suggests a new classification system. We asked him a few questions about his proposal. The answers are lightly edited for clarity.
If you’ve been anywhere near Twitter this week, you have probably seen a paper from Scientific Reports that appears to contain a likeness of a certain U.S. president in a cartoon of baboon feces.
In 2015, Peter Yoachim became interested in how long astronomers remained active in the field or, more to the point, how long they continued publishing in astronomy.
You’d think that if an author asked a journal to correct a modest mistake, the journal would oblige. After all, many researchers have to be dragged kicking and screaming to correct the record.
Two months after Harvard and the Brigham and Women’s Hospital said they were requesting the retraction of more than 30 papers from a former cardiac stem cell lab there, two American Heart Association journals have retracted more than a dozen papers from the lab.
Recently, we wrote in STAT about the “research integrity czars” that some journals are hiring to catch misconduct and errors. But are there other ways that journals could ensure the integrity of the scientific record? Tom Jefferson, a physician, methods researcher, and campaigner for open clinical trial data, has a suggestion, which he explores in this guest post. (Jefferson’s disclosures are here.)
Readers of Retraction Watch know that the quality control mechanisms in the publication of science, chiefly editorial peer review, are not infallible. Peer review in biomedicine, in its current form and practice, is the direct descendant of the bedside consultation. In a consultation, the person or object under observation (the patient, or here the journal submission) is examined by the doctor (the editor), who decides on the best course of action. If unsure, the physician/editor may call in outside specialists (hospital consultants, or here the referees) to help reach a final decision on the therapy and fate of the patient/submission.