Archive for the ‘psychology’ Category
An unusual article that considered the concept of change from a systems perspective — including change in medicine, economics, and decision-making — has, well, changed from “published” to “retracted.”
After commenters on PubPeer called the 2014 paper “gibberish” and even suggested it might be computer-generated, Frontiers in Computational Neuroscience retracted it, noting it “does not meet the standards of editorial and scientific soundness” for the journal, according to the retraction notice. The paper’s editor and author maintain there was nothing wrong with the science in the paper.
High-profile social psychologist Jens Förster has earned two retractions following an investigation by his former workplace. He agreed to the retractions as part of a settlement with the German Society for Psychology (DGPs).
The papers are two of eight that were found to contain “strong statistical evidence for low veracity.” According to the report from an expert panel convened at the request of the board of the University of Amsterdam, following
an extensive statistical analysis, the experts conclude that many of the experiments described in the articles show an exceptionally linear link. This linearity is not only surprising, but often also too good to be true because it is at odds with the random variation within the experiments.
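The panel's intuition — that results can be "too linear to be true" — can be illustrated with a small simulation. The sketch below is a hypothetical illustration of that reasoning, not the panel's actual method (which used a more formal delta-F analysis): even when the true group means really are perfectly linear, sampling noise means the observed middle mean almost never lands nearly exactly on the line between the outer two means.

```python
import random
import statistics

# Simulate many three-group experiments whose TRUE population means are
# perfectly linear, and count how often the OBSERVED means are also
# near-perfectly linear. Under honest random sampling this is rare.
random.seed(42)

def near_perfect_linearity_rate(n_per_group=20, sd=1.0, trials=10_000):
    """Fraction of simulated experiments where the middle sample mean
    deviates from perfect linearity by less than 0.01 standard deviations."""
    true_means = [0.0, 0.5, 1.0]  # perfectly linear population means
    near_perfect = 0
    for _ in range(trials):
        sample_means = [
            statistics.fmean(random.gauss(mu, sd) for _ in range(n_per_group))
            for mu in true_means
        ]
        # Gap between the middle mean and the midpoint of the outer means.
        gap = abs(sample_means[1] - (sample_means[0] + sample_means[2]) / 2)
        if gap < 0.01 * sd:
            near_perfect += 1
    return near_perfect / trials

print(near_perfect_linearity_rate())
```

With these (assumed) sample sizes, only a few percent of honest experiments come out that linear — so a long series of papers in which nearly every experiment does is, as the panel put it, at odds with the random variation within the experiments.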
One of those eight papers was retracted in 2014. In November, the American Psychological Association received an appeal to keep two of the papers, and Förster agreed to the retractions of two more:
A psychiatric journal has pulled a 2014 paper that found electroconvulsive therapy and exercise helped people with depression, after the authors determined they had mistakenly analyzed the wrong data.
According to the retraction notice from the Journal of Psychiatric Research, the researchers had “erroneously analyzed” data from a previous study they had published the year before.
Here’s more from the note for “Electroconvulsive therapy and aerobic exercise training increased BDNF and ameliorated depressive symptoms in patients suffering from treatment-resistant major depressive disorder:”
A psychology journal is retracting a 2015 paper that attracted press coverage by suggesting women’s hormone levels drive their desire to be attractive, after a colleague alerted the last author to flaws in the statistical analysis.
The paper, published online in November, found women prefer to wear makeup when there is more testosterone present in their saliva. The findings were picked up by various media including Psychology Today (“Feeling hormonal? Slap on the makeup”), and even made it onto reddit.com.
However, upon discovering a problem in the analysis of the data, the authors realized that central finding didn’t hold up, according to Psychological Science‘s interim editor, Stephen Lindsay:
After a group of researchers noticed an error that affected the analysis of a survey of psychologists working with medical teams to help pediatric patients, they didn’t just issue a retraction — they published a commentary explaining what exactly went wrong.
The error was discovered by a research assistant who was assembling a scientific poster and noticed the data didn’t align with what was reported in the journal. The error, the authors note, was:
an honest one, a mistake of not reverse coding a portion of the data that none of the authors caught over several months of editing and conference calls. Unfortunately, this error led to misrepresentation and misinterpretation of a subset of the data, impacting the results and discussion.
Needless to say, these authors — who use their “lessons learned” to help other researchers avoid similar missteps — earn a spot in our “doing the right thing” category. The retraction and commentary both appear in Clinical Practice in Pediatric Psychology.
Their first piece of advice in “Retraction experience, lessons learned, and recommendations for clinician researchers” — assume errors will happen, and not vice versa:
Researchers are correcting a widely covered study that suggested chronic use of pot might not put users at risk of problems later in life.
It turns out that initial, unexpected finding — covered by Newsweek, The Washington Post, Quartz, and (of course) The Stoner’s Cookbook (now known as HERB) — wasn’t quite right, and a reanalysis found users had a small uptick in risk for psychosis. The authors have issued a lengthy correction in Psychology of Addictive Behaviors that includes some supplemental analysis, too.
Not surprisingly, the study’s findings engendered some controversy, which prompted the authors to reanalyze their data, collected from 408 males with varying levels of marijuana use, who were followed from their teens into their 30s.
Now, an American Psychological Association press release that accompanied the initial findings in August contains an editor’s note explaining why those aren’t quite correct:
The editor’s note — which reads like an Expression of Concern — reiterates the journal’s policy that authors make data and materials available upon request, and notes that staff are following up on “concerns” raised about the study.
There have been numerous requests for data from the “PACE” trial, as the clinical trial is known, which the authors say they have refused in order to protect patient confidentiality. On November 13, James Coyne, a psychologist at the University Medical Center, Groningen, submitted a request for the data from the PLOS ONE paper to King’s College London, where some of the authors were based. According to Coyne’s WordPress blog (he also has a blog hosted by PLOS), the journal asked him to let them know if he “had any difficulties obtaining the data.” He did — KCL denied the request last Friday (the whole letter is worth reading):
The university considers that there is a lack of value or serious purpose to your request. The university also considers that there is improper motive behind the request. The university considers that this request has caused and could further cause harassment and distress to staff.
Last author Peter White at Queen Mary University of London, UK, told us the journal had not asked them to release the data, but he would work with PLOS to address any questions:
We understand PLOS One are following up concerns expressed about the article, according to their internal processes. We will be happy to work with them to address any queries they might have regarding the research.
Here’s the editor’s note for “Adaptive Pacing, Cognitive Behaviour Therapy, Graded Exercise, and Specialist Medical Care for Chronic Fatigue Syndrome: A Cost-Effectiveness Analysis,” in full:
Obesity has retracted a study that suggested overweight people may be less depressed than their slimmer counterparts in cultures where fat isn’t stigmatized, after realizing the authors lied about having ethical approval to conduct the research.
The authors claimed their research protocol had been approved by Norwegian and Bangladeshi ethical committees, but, according to the retraction note, part of the study “was conducted without the required approval of the university ethics board.” The journal’s managing editor told us that there is no evidence that there was harm to the study subjects.
Social psychologist Diederik Stapel has notched his 58th retraction, after admitting he fabricated data in yet another article.
He’s holding onto his 4th place spot on our leaderboard.
This latest retraction is for “Correction or comparison? The effects of prime awareness on social judgments,” published in the European Journal of Social Psychology. As usual for Stapel, this paper has been retracted because he fabricated data.
Here’s the note:
Making error detection easier – and more automated: A guest post from the co-developer of “statcheck”
We’re pleased to present a guest post from Michèle B. Nuijten, a PhD student at Tilburg University who helped develop a program called “statcheck,” which automatically spots statistical mistakes in psychology papers, making it significantly easier to find flaws. Nuijten writes about how such a program came about, and its implications for other fields.
Readers of Retraction Watch know that the literature contains way too many errors – not least, as some research suggests, in my own field of psychology. And there is evidence that the problem is only likely to get worse.
To reliably investigate these claims, we wanted to study reporting inconsistencies at a large scale. However, extracting statistical results from papers and recalculating the p-values is not only very tedious, it also takes a LOT of time.
So we created a program known as “statcheck” to do the checking for us, by automatically extracting statistics from papers and recalculating p-values. Unfortunately, we recently found that our suspicions were correct: Half of the papers in psychology contain at least one statistical reporting inconsistency, and one in eight papers contains an inconsistency that might have affected the statistical conclusion.
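The core idea — extract reported statistics, recompute the p-value, and flag mismatches — can be sketched in a few lines. The following is a hypothetical, heavily simplified illustration restricted to z-tests (statcheck itself parses APA-style t, F, chi-square, and correlation results as well); the regex, function name, and tolerance are my own assumptions, not statcheck's actual implementation.

```python
import re
from statistics import NormalDist

# Match reported results of the form "z = 2.20, p = .028" or "z = 1.50, p < .05".
PATTERN = re.compile(r"z\s*=\s*([\d.]+)\s*,\s*p\s*([<=>])\s*([\d.]+)")

def check_reported_stats(text, tolerance=0.005):
    """Return (z, reported_p, recomputed_p, consistent) for each result found."""
    results = []
    for match in PATTERN.finditer(text):
        z = float(match.group(1))
        relation, reported_p = match.group(2), float(match.group(3))
        # Recompute the two-tailed p-value under the standard normal distribution.
        recomputed_p = 2 * (1 - NormalDist().cdf(z))
        if relation == "=":
            consistent = abs(recomputed_p - reported_p) < tolerance
        elif relation == "<":
            consistent = recomputed_p < reported_p
        else:  # ">"
            consistent = recomputed_p > reported_p
        results.append((z, reported_p, round(recomputed_p, 3), consistent))
    return results

sample = "The effect was significant, z = 2.20, p = .028, but z = 1.50, p < .05 held too."
for row in check_reported_stats(sample):
    print(row)
```

In this made-up example the first result checks out, while the second is flagged: a z of 1.50 corresponds to a two-tailed p of about .13, not one below .05 — exactly the kind of reporting inconsistency an automated pass can surface at scale.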
Statcheck’s origins date back to 2011.