Archive for the ‘the netherlands’ Category
A few months ago, a researcher told Evelien Oostdijk there might be a problem with a 2014 JAMA study she had co-authored.
The study had compared two methods of preventing infection in the intensive care unit (ICU). But a separate analysis had produced different results.
Oostdijk, from the University Medical Center Utrecht in the Netherlands, immediately set to work to figure out what was going on. She soon discovered the problem: The coding for the two interventions had been reversed at one of the 16 ICUs. The switch had “a major impact on the study outcome,” last author Marc Bonten, also from the University Medical Center Utrecht, wrote in a blog post about the experience yesterday, because it occurred at “one of the largest participating ICUs.”
When Oostdijk and a researcher not involved in the study analyzed the data again, they discovered a notable difference between the revised and original findings: The new analysis revealed that one of the interventions had a small but significant survival benefit over the other.
Oostdijk and Bonten, who supervised the re-analysis, notified their colleagues of the revised study outcomes and contacted the journal requesting a retraction and replacement, which was published yesterday in JAMA.
According to the notice of retraction and replacement:
By now, most of our readers are aware that some fields of science have a reproducibility problem. Part of the problem, some argue, is the publishing community’s bias toward dramatic findings — namely, studies that show something has an effect on something else are more likely to be published than studies that don’t.
Many have argued that scientists publish such data because that’s what is rewarded — by journals and, indirectly, by funders and employers, who judge a scientist based on his or her publication record. But a new meta-analysis in PNAS suggests it’s a bit more complicated than that.
In a paper released today, researchers led by Daniele Fanelli and John Ioannidis — both at Stanford University — suggest that the so-called “pressure to publish” does not appear to bias studies toward larger effect sizes. Instead, they argue, other factors are bigger sources of bias: the use of small sample sizes (which can yield skewed samples that show stronger effects), and the relegation of studies with smaller effects to the “gray literature,” such as conference proceedings, PhD theses, and other less publicized formats.
However, Ferric Fang of the University of Washington — who did not participate in the study — approached the findings with some caution:
Journals have flagged two papers by prominent social psychologist Jens Förster — whose work has been subject to much scrutiny — over concerns regarding the validity of the data.
Förster already has three retractions, following an investigation by his former employer, the University of Amsterdam (UvA) in the Netherlands. In 2014, we reported on Förster’s first retraction, of a 2012 paper in Social Psychological and Personality Science, one of three studies with odd patterns flagged by the UvA investigation; subsequently, the Netherlands Board on Research Integrity concluded that data had been manipulated. Three statistical experts from the UvA then carried out a more in-depth analysis of 24 of Förster’s publications and found eight to have “strong evidence for low scientific veracity.”
Last year, Förster agreed to retract two more papers as part of a deal with the German Society for Psychology (DGPs); those retractions appeared earlier this year. All three papers Förster has lost so far fall into the “strong evidence for low scientific veracity” category. Recently, two more of his papers from the same category were flagged with notices, but not retracted.
Recently, we reported that social psychologist and renowned data faker Diederik Stapel had found himself a new gig supporting research at a vocational university in the Netherlands — but it appears that was short-lived.
According to multiple news reports, NHTV Breda will not be employing Stapel, after all.
Diederik Stapel, the social psychology researcher who has had 58 papers retracted after admitting that he made up the data, has a new job: helping other researchers.
The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in Hartgerink’s present data have yet to be explored, his previous research suggests that around half of psychology papers contain at least one statistical error, and one in eight contain mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted by email.
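statcheck itself is an R package, but the core consistency check it performs, recomputing a p-value from a reported test statistic and comparing it with the reported p, can be sketched in a few lines. The sketch below is an illustration only, using Python's standard library and a z test as the simplest case; the function name and tolerance are our own assumptions, not part of statcheck.

```python
from statistics import NormalDist

def check_z_report(z: float, reported_p: float, tol: float = 0.01) -> bool:
    """Return True if a reported two-tailed p-value is consistent with
    the reported z statistic, within a rounding tolerance.
    (Illustrative sketch, not statcheck's actual implementation.)"""
    recomputed_p = 2 * (1 - NormalDist().cdf(abs(z)))
    return abs(recomputed_p - reported_p) <= tol

# z = 1.96 really does correspond to p ≈ .05, so this report is consistent:
print(check_z_report(1.96, 0.05))  # True
# z = 1.2 corresponds to p ≈ .23, so a reported p = .01 would be flagged:
print(check_z_report(1.2, 0.01))   # False
```

A decision rule of this kind is what makes large-scale automated checking feasible: errors that change a statistical conclusion can be flagged without any human reading the paper.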
So far, the initiative is one of the largest post-publication peer review efforts of its kind. Some researchers, however, are concerned about its process for detecting potential mistakes, particularly the fact that potentially stigmatizing PubPeer entries are created even if no errors are found.
The retraction for “Chewing ability in an adult Chinese population” appeared in Clinical Oral Investigations in 2012, but we’re sharing it with you now because the notice contains some remarkable language:
This article has been withdrawn due to wrong content with serious consequences such as danger to people’s health.
Last author Nico H.J. Creugers, who works at Radboud University Medical Center in the Netherlands, told us:
The Leiden University Medical Center (LUMC) has asked a journal to retract two papers after revealing a former employee manipulated data.
The report names neither the individual nor the journal, but notes that the researcher works in a molecular field and is currently employed by a university outside the Netherlands.
The editors of a journal that recently retracted a paper after the peer-review process was “compromised” have published the fake reviews, along with additional details about the case.
In the editorial titled “Organised crime against the academic peer review system,” Adam Cohen and other editors at the British Journal of Clinical Pharmacology say they missed “several fairly obvious clues that should have set alarm bells ringing.” For instance, the glowing reviews from supposed high-profile researchers at Ivy League institutions were returned within a few days, were riddled with grammar problems, and the authors had no previous publications.
The case is one of many we’ve recently seen in which papers are pulled due to actions of a third party.
The paper was submitted on August 5, 2015. From the beginning, the timing was suspect, Cohen — director of the Centre for Human Drug Research in the Netherlands — and his colleagues note:
Karima Kourtit, a researcher at VU University Amsterdam, has been on the receiving end of anonymous complaints to her institution accusing her of plagiarism and accusing her professor, the high-profile economist Peter Nijkamp, of duplication (i.e., self-plagiarism). Kourtit is now seeking to prosecute the unnamed source of the complaints for defamation; the VU told us it will no longer accept fully anonymous complaints.
The case began when the VU cancelled Kourtit’s thesis defense over plagiarism concerns, and a report published by the VSNU, the Association of Universities in the Netherlands, accused Nijkamp of self-plagiarism. Two of Nijkamp’s papers have been retracted as a result of the investigation; Kourtit is an author on one of them.
A VU spokesperson told us: