Archive for the ‘psychology’ Category
Oh, well — “love hormone” doesn’t reduce psychiatric symptoms, say researchers in request to retract
It turns out that snorting the so-called “love hormone” may not reduce psychiatric symptoms such as depression and anxiety after all.
At least, that’s the conclusion now reached by the authors of a 2015 meta-analysis, which initially found that intranasal doses of oxytocin could reduce psychiatric symptoms. After a pair of graduate students pointed out flaws in the paper, the authors realized they’d made some significant errors, and that oxytocin shows no more benefit than a placebo.
Last week, a study called into question years of research conducted with the neuroimaging technique functional magnetic resonance imaging (fMRI). The new paper, published in PNAS, raised eyebrows in particular for suggesting that false-positive rates in fMRI studies could run as high as 70%, which may affect many of the approximately 40,000 studies in the academic literature that have used the technique. We spoke to Anders Eklund of Linköping University in Sweden, the first author of the study. Read the rest of this entry »
JAMA authors have retracted — and replaced — a 2014 paper about the mental health effects of household moves on kids, after they found errors while completing an additional analysis.
The original paper concluded that in “families who moved out of high-poverty neighborhoods, boys experienced an increase and girls a decrease in rates of depression and conduct disorder,” according to a press release issued by the journal along with the paper (which also got some press attention from Reuters). But part of that conclusion is wrong.
Researchers have fixed a number of papers after discovering they had mistakenly reported that people who hold conservative political beliefs are more likely to exhibit traits associated with psychoticism, such as authoritarianism and tough-mindedness.
As one of the notices specifies, it now appears that liberal political beliefs are linked with psychoticism. That paper also swapped ideologies when reporting on people higher in neuroticism and social desirability (the tendency to falsely claim socially desirable qualities); the original paper said those traits are linked with liberal beliefs, but they are actually more common among people with conservative values.
We’re not clear how much the corrections should inform our thinking about politics and personality traits, however, because the paper doesn’t say how strongly the two are linked. The authors argue that the strength of the links is not important, as it does not affect the papers’ main conclusions: although some personality traits appear to correlate with political beliefs, neither causes the other.
In total, the authors have corrected three papers, and a correction has been submitted for one more.
We’ll start with an erratum that explains the backstory of the error in detail. It appears on “Correlation not Causation: The Relationship between Personality Traits and Political Ideologies,” published by the American Journal of Political Science: Read the rest of this entry »
PLOS ONE has republished data that were abruptly removed two weeks ago after the authors expressed concerns they did not have permission to release them.
The dataset — de-identified information from people with chronic fatigue syndrome — was removed May 18 with a note saying it had been “published in error.” But this week, the journal republished the dataset, saying the authors’ university had been consulted and the data could be released.
This paper has drawn scrutiny for its similarities to a controversial “PACE” trial of chronic fatigue syndrome.
Here’s the second correction notice for “Therapist Effects and the Impact of Early Therapeutic Alliance on Symptomatic Outcome in Chronic Fatigue Syndrome,” released June 1:
According to the retraction notice released by the journal last week, the paper contains “extensive verbatim use of text from other sources.”
How did this make it past the editors? The journal published the paper in 2012 — before it began screening papers for plagiarism, according to a spokesperson.
A psychology journal is correcting a paper for reusing data. The editor told us the paper is a “piecemeal publication,” not a duplicate, and is distinct enough from the previous article that it is not “grounds for retraction.”
The authors tracked the health and mood of 65 patients over nine weeks. In one paper, they concluded that measures of physical and psychosocial well-being positively predict one another; in the other (the now-corrected paper), they concluded that health and mood (along with positive emotions) influence each other in a self-sustaining dynamic.
The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 of 100 experiments lived up to their original claims. The authors conclude that more “contextually sensitive” papers (those whose results are more likely to depend on background factors) are slightly less likely to be reproduced successfully.
They summarize their results in the paper:
After PLOS ONE allowed authors to remove a dataset from a paper on chronic fatigue syndrome, the editors are now “discussing the matter” with the researchers, given the journal’s requirements about data availability.
As Leonid Schneider reported earlier today, the 2015 paper was corrected May 18 to remove an entire dataset; the authors note that they were not allowed to publish anonymized patient data, but can release it to researchers upon request. The journal, however, requires that authors make their data fully available.
Scientific fraud isn’t what keeps Andrew Gelman, a professor of statistics at Columbia University in New York, up at night. Rather, it’s the sheer number of unreliable studies — uncorrected, unretracted — that have littered the literature. He tells us more, below.
However many retractions there are, they’re a tiny fraction of the number of papers that are just wrong — by which I mean papers that present no good empirical evidence for their claims.
I’ve personally had to correct two of my published articles. Read the rest of this entry »