Authors have retracted two papers about visual perception and working memory from the Journal of Cognitive Neuroscience, after the first author admitted to falsifying or fabricating data in four other papers.
The authors have also requested two more retractions, bringing the total for Edward Awh and his former graduate student David Anderson to nine. (Earlier in 2015, they lost a paper due to an error in the analytic code, which Awh told us was unrelated to the misconduct.)
The retraction notice attached to both articles cites a 2015 settlement agreement between the Office of Research Integrity and first author Anderson (the “respondent”), who admitted to misconduct while working as a graduate student in Awh’s lab at the University of Oregon in Eugene. Since then, “additional problems” have been discovered in the newly retracted articles, such as data points that were removed without justification.
Awh, who has since moved to the University of Chicago, sent us a lengthy statement, explaining the concerns about each article:
Given the prior ORI findings and our discovery of problems with other papers first authored by the Respondent, all of us who have co-authored papers with the Respondent as first author have agreed that we do not have confidence in the integrity of the findings in those papers.
Awh told us he has also requested retractions for two other articles: “Attending multiple items decreases the selectivity of population responses in human primary visual cortex,” published in 2013 by the Journal of Neuroscience, and a 2015 book chapter in Mechanisms of Sensory Working Memory, “Statistical regularities allow multiple feature values to be stored as discrete units.”
Awh told us he expects those additional retractions in the near future:
I’ve heard from the Journal of Neuroscience, and a retraction notice is now approved and in the pipeline. They let me know that it would be 2-3 weeks before it is posted…our attempt to retract a chapter is still in process and I don’t have a clear timeline on when that will be completed.
Here’s the retraction note for “Electrophysiological evidence for failures of item individuation in crowded visual displays” (article 1, cited four times, according to Thomson Reuters Web of Science) and “Polymorphisms in the 5-HTTLPR gene mediate storage capacity of visual working memory” (article 2, cited 14 times):
On August 1, 2015, the Office of Research Integrity (ORI) announced a settlement agreement with David E. Anderson, the Respondent (http://ori.hhs.gov/content/case-summary-anderson-david). On the basis of the Respondent’s admission and an analysis by the University of Oregon, ORI concluded that the Respondent had engaged in research misconduct by falsifying and/or fabricating data in four publications. Those publications were retracted immediately after the release of the ORI findings.
Since that time, additional problems have been discovered with Article 1 above. Data points shown in Figure 8 were removed without justification and in contradiction to the analytic approach described in the methods and results. In light of this discovery and of the previous ORI findings, authors Bell and Awh no longer have confidence in the integrity of the data in Article 2. For these reasons, all authors on both articles (including the Respondent) have agreed to the retraction of Articles 1 and 2 above.
Here’s what went wrong with “Electrophysiological evidence for failures of item individuation in crowded visual displays,” according to Awh:
…we found problems with the results that were reported in Figure 8. In contradiction to the analytic approach reported in the methods as well as the specific statistics that were reported for Experiment 2, data points were omitted from the correlational analysis reported in Figure 8. Thus, our conclusion is that the findings reported in Figure 8 are not trustworthy because of the unjustified exclusion of data.
For the upcoming Journal of Neuroscience retraction, Awh told us:
… we found that the core empirical patterns observed in the BOLD data alone were replicated based on a completely new analysis of the raw data. However, there was a problem with the correlational analysis that was reported in Figure 7. Two of 14 subjects were omitted from this correlational analysis, in contradiction to the analyses that were described in the methods section as well as the statistics that were reported for other aspects of the data from that study. When all data points are included in this analysis, the reported correlation between neural activity and behavioral performance was not observed.
The Journal of Neuroscience paper has been cited 10 times.
Awh added that every author — including Anderson — agreed to the four retractions.
You can read Awh’s entire statement here.
Awh noted that one paper he co-authored with Anderson contains original data, and he has not requested its retraction: “A neural measure of precision in visual working memory,” published by the Journal of Cognitive Neuroscience in 2013.
For this paper, [first author] Ester was the author primarily in charge of data analysis, while Anderson assisted in data collection and in some of the pre-processing stages of MRI analysis. We are confident in the integrity of that analysis, so we are not retracting that paper.
We contacted Anderson to see if he had any reaction to these additional changes to the literature. He spoke to us in August, when the ORI’s decision was announced:
I made an error in judgment during the processing and dissemination of research materials collected as a graduate student in Dr. Edward Awh’s lab at the University of Oregon. I take full responsibility for my actions, as they do not reflect the integrity of research conducted in the lab of Dr. Edward Awh or by the Department of Psychology at the University of Oregon. I offer my sincerest apologies to Dr. Edward Awh, the University of Oregon, and to the field and its constituents for undermining the integrity of the scientific method and eroding the trust upon which it is built.
Although we have been following this case, we learned about the newest retractions from Awh himself, who sent us the notices along with a detailed explanation of what happened. We appreciate his choice to be responsible and transparent in fixing the literature — a model case of doing the right thing.
As someone dealing with Complex Regional Pain Syndrome, I wait and hope for a study, new research, a new pain management medicine, a cure. And the more I read about retractions, the more I wonder: how can I trust anyone in this area anymore?
I don’t see this as a case of doing the right thing. NINE retractions? Clearly the mentor was negligent at best.
Professor Awh appears to have worked diligently to clean up this problem from the time that Anderson revealed issues with his data, as described in the July 27, 2015 Retraction Watch discussion linked above.
All of the transgressions were committed by Anderson, by his own admission. No other students or colleagues of Awh are implicated here. So I would not lay a charge of negligence on Awh at this point. Awh unfortunately took in a student who cherry-picked data on numerous occasions. How would Awh know about that? Eventually Anderson admitted to problems, and Awh has apparently worked earnestly ever since to clean up multiple papers. Awh is making statements such as “Thus, our conclusion is that the findings reported in Figure 8 are not trustworthy because of the unjustified exclusion of data,” instead of the shameless line other researchers trot out: “This does not change any of our findings.”
If Awh had a series of students and collaborators with projects and publications showing problems, one could start assessing his competence. But at this point, with only one student showing problems and Awh working to clean them up, I agree with the Retraction Watch folks that this is an example of doing the right thing.
Yes, I agree with Steven on this. Although too often we see a student being blamed for “errors” in a paper, or scapegoated, I think we should remember that there is a “train of trust” in any scientific endeavour. You need to trust that the person who made a buffer for you made it correctly, or that someone who compiled a figure did not deliberately exclude data. This is especially true in large research labs.
Obviously this does not remove the need for a group leader to check papers for errors before publication, but if a primary researcher sets out with a deliberate intention to deceive, it will be hard to discern this, especially if they seem trustworthy.