Jens Förster, a high-profile social psychologist, has agreed to retract multiple papers following an institutional investigation — but has also fought to keep some papers intact. Recently, one publisher agreed with his appeal, and announced it would not retract two of his papers, despite the recommendation of his former employer.
Many voices contributed to the discussion about these two papers. In November 2016, the University of Amsterdam announced it was rejecting an appeal by another co-author on both papers, Nira Liberman, based at Tel Aviv University in Israel. The following month, Tel Aviv University announced that, based on its own internal review, it believed the articles should not be retracted.
The APA reviewed the various recommendations, according to last month’s announcement:
A psychoanalyst has retracted an award-winning 2016 paper over concerns that it contained “sensitive” patient information.
On July 15, Judith L. Mitrani, a psychoanalyst based in California, published an article that included “sensitive clinical material” about a patient. Although we do not know what prompted the concerns, on November 21, Mitrani, in agreement with the journal’s editor-in-chief and publisher, retracted the article. The author and editor told us the retraction was meant to prevent non-experts from accessing the paper and to stop other non-Wiley sites from posting it.
The article was published after it had won the journal’s essay contest in 2015.
A leading psychology research society in Germany has called for an end to PubPeer postings based on a computer program that trawls through psychology papers detecting statistical errors, saying the practice needlessly causes reputational damage to researchers.
Last month, we reported on an initiative that aimed to clean up the psychology literature by identifying statistical errors using the algorithm "statcheck." As a result of the project, PubPeer was set to be flooded with more than 50,000 entries for the papers in the study's sample — even when no errors were detected.
On October 20, the German Psychological Society (DGPs) issued a statement criticizing the effort, expressing concern that alleged statistical errors are posted on PubPeer before the authors of the original studies are contacted. The DGPs also claimed that when mistakes detected by statcheck and posted on PubPeer turn out to be false positives, the resulting damage to researchers is "no longer controllable," as entries on PubPeer cannot be easily removed.
PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in academic literature.
The detection process uses the algorithm "statcheck" — which we've covered previously in a guest post by one of its co-developers, Chris Hartgerink — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in Hartgerink's present data are yet to be explored, his previous research suggests that around half of psychology papers have at least one statistical error, and one in eight have mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted via email.
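The core idea behind statcheck is simple: extract a reported test statistic and its reported p-value from a paper, recompute the p-value from the statistic, and flag the pair when the two disagree beyond rounding error. statcheck itself is an R package that handles t, F, chi-square, r, and z tests; as a rough illustration of the idea only, here is a minimal Python sketch for the z-test case (the function names are ours, not statcheck's):

```python
import math

def two_tailed_p_from_z(z):
    """Recompute the two-tailed p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def consistent(z, reported_p):
    """Check whether a reported p-value matches the p recomputed from z.

    reported_p is passed as a string (e.g. ".011") so the number of
    reported decimal places sets the rounding tolerance, mirroring the
    way a checker must allow for authors rounding their p-values.
    """
    decimals = len(reported_p.split(".")[1])
    recomputed = two_tailed_p_from_z(z)
    return abs(recomputed - float(reported_p)) < 0.5 * 10.0 ** -decimals

print(consistent(2.53, ".011"))  # recomputed p ≈ .0114, matches at 3 decimals
print(consistent(2.53, ".05"))   # recomputed p ≈ .0114, does not match
```

A mismatch like the second case is what the project's PubPeer entries report; whether it reflects a typo, a one-tailed test, or a genuine error is left for the authors to resolve.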
A communications journal has retracted parts of a paper about a famous German political scientist after her great-nephew threatened the journal with legal action, claiming bits of the paper were defamatory.
The original paper concluded that in “families who moved out of high-poverty neighborhoods, boys experienced an increase and girls a decrease in rates of depression and conduct disorder,” according to a press release issued by the journal along with the paper (which also got some press attention from Reuters). But part of that conclusion is wrong.
Scientific fraud isn’t what keeps Andrew Gelman, a professor of statistics at Columbia University in New York, up at night. Rather, it’s the sheer number of unreliable studies — uncorrected, unretracted — that have littered the literature. He tells us more, below.
Whatever the vast majority of retractions are, they’re a tiny fraction of the number of papers that are just wrong — by which I mean they present no good empirical evidence for their claims.
The Open Science Framework (OSF) has pulled a dataset from 70,000 users of the online dating site OkCupid over copyright concerns, according to the study author.
The release of the dataset raised concerns because it made personal information, including personality traits, publicly available.
Emil Kirkegaard, a master’s student at Aarhus University in Denmark, told us that the OSF removed the data from its site after OkCupid filed a claim under the Digital Millennium Copyright Act (DMCA), which requires the host of online content to remove it under certain conditions. Kirkegaard also submitted a paper based on this dataset to the journal he edits, Open Differential Psychology. But with the dataset no longer public, the fate of the paper is subject to “internal discussions,” he told us.
The journal Evolution has retracted a 2007 paper about the roles of the different sexes in searching for mates, after the same author critiqued the work in a later paper.
The case raises important questions about when retractions are appropriate, and whether they can have a chilling effect on scientific discourse. Although Hanna Kokko of the University of Zurich, Switzerland — who co-authored both papers — agreed that the academic literature needed to be corrected, she didn’t want to retract the earlier paper; the journal imposed that course of action, said Kokko.