A graduate student at the University of Oregon in Eugene has admitted to faking data that appeared in four published papers in the field of visual working memory, according to the Office of Research Integrity.
David Anderson’s supervisor at the time was Edward Awh, who has since moved to the University of Chicago.
Anderson told Retraction Watch that the misconduct stemmed from “an error in judgment”:
I made an error in judgment during the processing and dissemination of research materials collected as a graduate student in Dr. Edward Awh’s lab at the University of Oregon. I take full responsibility for my actions, as they do not reflect the integrity of research conducted in the lab of Dr. Edward Awh or by the Department of Psychology at the University of Oregon. I offer my sincerest apologies to Dr. Edward Awh, the University of Oregon, and to the field and its constituents for undermining the integrity of the scientific method and eroding the trust upon which it is built.
Although Anderson appears to still be a graduate student at the university, he could not confirm that to us:
At this time, I am unable to confirm my status as a graduate student at UOE.
Awh told Retraction Watch that he alerted the university to potential problems, and has already requested retractions of the affected papers:
Although you can obtain additional information about the process and outcome from Mr. Anderson and the University of Oregon, I can confirm that I alerted the University of Oregon about the need to conduct an inquiry within hours of learning about potential problems with the research from Mr. Anderson himself. I also cooperated fully in the institutional investigation and honored confidentiality requests during the internal process and pending the ORI announcement. Earlier today, I requested retractions of four papers identified in today’s announcement. None of the other authors of the papers at issue were in any way implicated in the research misconduct, and all agreed to the retractions (including Mr. Anderson).
Here are the papers that contain faked data, according to the ORI:
- “Precision in visual working memory reaches a stable plateau when individual item limits are exceeded,” Journal of Neuroscience (paper 1, cited 88 times, according to Thomson Scientific’s Web of Knowledge)
- “Selection and storage of perceptual groups is constrained by a discrete resource in working memory,” Journal of Experimental Psychology: Human Perception and Performance (paper 2, cited 12 times)
- “The plateau in mnemonic resolution across large set sizes indicates discrete resource limits in visual working memory,” Attention, Perception, & Psychophysics (paper 3, cited 20 times)
- “A common discrete resource for visual working memory and visual search,” Psychological Science (paper 4, cited 27 times)
Here are more details about what’s affected, according to the ORI:
ORI found that Respondent knowingly falsified data by removing outlier values or replacing outliers with mean values to produce results that conform to predictions. Specifically, these falsifications appear in:
- Figures 4 and 8 in Paper 1
- Figures 3C, 3D, and 3E in Paper 2
- Figures 3B, 7C, 7D, and 8B in Paper 3
- Figures 3E and 3F in Paper 4
According to PubMed, Anderson and Awh published five additional papers together.
Anderson has agreed to have his research supervised for three years, starting June 23, and to help retract or correct the four papers affected by the misconduct.
Awh forwarded to us a statement from Susan Levine, chair of the department of psychology at the University of Chicago:
The U.S. Department of Health and Human Services Office of Research Integrity today announced that David E. Anderson, a graduate student of the University of Oregon, engaged in research misconduct by falsifying and/or fabricating data in four papers. As noted in the announcement (https://ori.hhs.gov/content/case-summary-anderson-david-0), Mr. Anderson entered into a Voluntary Settlement Agreement with HHS and The University of Oregon. Mr. Anderson was a graduate student who worked with University of Chicago Psychology Professor Ed Awh while Dr. Awh was at the University of Oregon. I understand from Dr. Awh that he had alerted the University of Oregon about the need to conduct an inquiry within hours of learning about potential problems with the research from Mr. Anderson himself. I also understand that Dr. Awh cooperated fully in the institutional investigation and honored confidentiality requests during the internal process and pending the ORI announcement. Dr. Awh has requested retractions of four papers identified in today’s announcement. None of the other authors of the papers at issue were in any way implicated in the research misconduct. Ed is not only a leading researcher in cognitive neuroscience, but also a champion of scientific integrity. We are excited to have him join us at the University of Chicago.
We reached out to the University of Oregon, as well as to the journals that published the faked data, to find out their plans to retract or correct the literature.
A spokesperson for the Association for Psychological Science, which publishes Psychological Science, told us:
APS doesn’t have a comment at this time – as per the fourth condition stated in the settlement, the journal editors will work with the author and the University of Oregon, Eugene to make any decisions about retracting or correcting the paper as needed.
We heard from Michael Dodd, editor in chief of Attention, Perception, & Psychophysics:
I have forwarded on the retraction request to the folks at Springer to determine the policy/procedure (I’ve been the Editor since January and this is the first such request I have received so I don’t know what the standard procedure is for this relative to an erratum). I can tell you that we will honor the request for retraction as soon as possible. We take these matters very seriously and will do our due diligence in addressing this as quickly as we can.
We also heard from a spokesperson at the American Psychological Association, which publishes the Journal of Experimental Psychology: Human Perception and Performance:
We have just learned of this notice from ORI. APA Journals will read the report and take appropriate action in due course.
Update 7/27/15 5:34 p.m. eastern: A spokesperson for the University of Oregon emailed us a statement from Brad Shelton, interim vice president for research and innovation:
The U.S. Department of Health & Human Services Office of Research Integrity today announced that David E. Anderson, a graduate student of the University of Oregon, engaged in research misconduct by falsifying and/or fabricating data in four papers. As noted in the announcement (https://ori.hhs.gov/content/case-summary-anderson-david-0), Mr. Anderson entered into a Voluntary Settlement Agreement with HHS and The University of Oregon.
The University of Oregon is committed to supporting a research community that operates with the highest level of integrity. Professional misconduct is unacceptable in all forms. As part of our commitment to integrity and academic honesty, we take all accusations of research misconduct seriously and, when appropriate, investigate accordingly. In the case of this isolated incident, our office of Research Compliance Services responded appropriately and in accordance with our policy on research misconduct (http://policies.uoregon.edu/policy/by/1/09-research/research-misconduct-allegations). Our actions have been validated by the decision of the Office of Research Integrity (ORI) and we accept the resolution of the voluntary settlement agreement. This incident highlights the effectiveness of our internal review and resolution procedures. Our faculty and administrators are resolved in their goal of maintaining our high standards of research integrity through vigorous review and continued enhancements to our policies, procedures and training.
The spokesperson also could not confirm whether Anderson is still a student at the university:
I am not able to confirm whether David E. Anderson is an active student due to [Family Educational Rights and Privacy Act] protections.
Update 7/27/15 6:02 p.m. eastern: Anderson and Awh retracted another paper earlier this year, also in the Journal of Neuroscience, due to “an error in the analytic code.” When asked about it, Awh confirmed that the earlier retraction was unrelated to the misconduct announced today:
Although Anderson was also first author of the paper retracted earlier this year, the reason for the earlier retraction was as stated (a coding error) rather than because of research misconduct.
Hat tip for update: Elizabeth Clark Polner
I find this case to be quite disturbing. As a faculty member and someone who has supervised quite a number of students, I am always careful about “data massaging.” There is a standard way of removing outliers known as Chauvenet’s criterion. I think a proper argument can be made for removing outliers, but there is definitely no excuse for faking data.
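For readers unfamiliar with the criterion mentioned above, here is a minimal sketch of Chauvenet’s criterion in Python. It assumes roughly normal data, and the function name and example values are purely illustrative; it is not meant to represent anything done in the papers at issue.

```python
import numpy as np
from scipy.stats import norm

def chauvenet_mask(x):
    """Return True for points to keep, False for points Chauvenet's criterion flags."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    # Two-sided probability of a deviation at least this large under a normal model.
    tail_prob = 2.0 * norm.sf(z)
    # Flag a point if its expected count in a sample of size n is below one half.
    return n * tail_prob >= 0.5

# Hypothetical example: response times (seconds) with one implausible value.
rts = np.array([0.52, 0.61, 0.55, 0.58, 0.60, 0.57, 2.40])
print(chauvenet_mask(rts))  # only the 2.40 entry is flagged
```

The criterion flags a point when the expected number of equally extreme observations in a sample of that size falls below one half, which is exactly the kind of automatic cutoff the reply below objects to.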
I’m a statistician and I do *not* think that a proper argument can be made for removing “outliers,” especially via a “statistical” criterion like Chauvenet’s. To me, a “statistical” criterion that “flags” “outliers” suggests instead that there may be a more appropriate model to consider (e.g., t versus normal) than the one currently being used.
If, however, a data value was obviously keyed in incorrectly, say because it violates a physical bound, then it should be investigated and, ideally, remedied in as reasonable a way as possible. There may be more appropriate ways of dealing with it than deletion.
But ultimately, all of these decisions regarding the data must be carefully documented in the article, perhaps in an appendix.
JD
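To make JD’s point about modeling rather than deleting concrete, here is a small, hypothetical sketch: instead of flagging points under a normal assumption, fit both a normal and a heavier-tailed Student-t model and let a criterion such as AIC decide. The data and numbers below are simulated, not from any of the papers discussed here.

```python
import numpy as np
from scipy import stats

# Simulated heavy-tailed sample; purely illustrative.
rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=200)

# Fit a normal model and a Student-t model, then compare by AIC
# instead of discarding the points a normal model would call outliers.
mu, sigma = stats.norm.fit(data)
df, loc, scale = stats.t.fit(data)

aic_norm = 2 * 2 - 2 * stats.norm.logpdf(data, mu, sigma).sum()  # 2 parameters
aic_t = 2 * 3 - 2 * stats.t.logpdf(data, df, loc, scale).sum()   # 3 parameters
print(f"AIC normal: {aic_norm:.1f}   AIC t: {aic_t:.1f}")        # lower is better
```

If the heavier-tailed model wins, the “outliers” are better treated as part of the data-generating process than as points to delete, and the whole decision can be documented transparently.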
Needless to say, it would be informative to know what prompted Anderson’s admission.
In 2013, Ronald van den Berg and I analyzed their data in detail because their conclusion seemed implausible to us. We wrote up a critique in http://www.cns.nyu.edu/malab/papers/Van%20den%20Berg%20Ma%202014.pdf. There, we wrote (p. 2126):
“Surprisingly, the value found in the empirical data (R^2 = .59) is highly unlikely under both models. One possible explanation is that neither model is a good description of the data. However, given that the estimates of w_UVM at set size 8 and of the singularity tend to be noisy in small data sets, it seems unlikely under any model to find a strong correlation between these two summary statistics. Therefore, a more plausible explanation may be that the empirical value is a statistical outlier.”
And in footnote 6, we remark “Our correlation plot in Fig. 1 is not identical to that in Fig. 4b in paper 2, and the R^2 we find is lower. The reason is that the singularity estimates reported in paper 2 were inaccurate, due to a mistake in their analysis: Due to poor initialization of the optimization method used for fitting the mixture model, it often returned SD_UVM estimates corresponding to a local maximum, instead of the global maximum of the likelihood function. After correcting this mistake, Anderson and colleagues find a different plot and an R^2 of about .55 instead of .65 (personal communication with the authors).”
In other words, we attributed their implausible results to a combination of sloppiness and chance. We never thought of fraud.
Much of the academic enterprise relies on trust, and unless you are in a hot field, people are unlikely to scrutinize your data and results. (And if you are a trainee, it is common that even your PI will not.) I hope this episode will contribute to more replication, data sharing, and re-analysis.
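As an aside on the analysis issue described in the footnote quoted above: an optimizer that returns a local rather than the global maximum of the likelihood is commonly guarded against by restarting the fit from several random initializations and keeping the best result. Below is a minimal sketch for the standard uniform-plus-von-Mises mixture for recall errors; the function names, bounds, and starting ranges are assumptions for illustration, not the authors’ actual code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_lik(params, errors):
    # errors: recall errors in radians; params: guess rate g, concentration kappa.
    g, kappa = params
    dens = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(dens))

def fit_mixture(errors, n_starts=20, seed=0):
    """Multi-start maximum-likelihood fit; keep the best of n_starts random initializations."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = [rng.uniform(0.01, 0.9), rng.uniform(0.5, 30.0)]
        res = minimize(neg_log_lik, x0, args=(errors,),
                       method="L-BFGS-B",
                       bounds=[(1e-4, 1 - 1e-4), (1e-2, 200.0)])
        if best is None or res.fun < best.fun:
            best = res
    return best  # best.x holds the fitted (g, kappa)
```

With enough restarts, the chance of reporting a local maximum drops sharply, though multi-start optimization never formally guarantees the global optimum.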
I think data sharing, replication, and re-analysis will certainly help in many cases. But your own account suggests that in this incident re-analysis didn’t really help. And it sounds like it wasn’t replication or data sharing that did Anderson in either. Do we know what it was?
Maybe re-analysis helped in the sense that we raised doubts about whether good practices were followed and whether results from David were believable. But then again, people remember fancy headlines more than technical corrections. And we certainly would not have been able to detect subtle, willful manipulation of the data; there are statistical methods for such detection, but we suspected nothing, and I am not advocating for paranoia.
A deeper discussion to have is to what extent competition for postdoc and faculty positions has become seriously fucked up. Although it is easy to vilify individuals, the uncertainty of depending on a few data points to get a bright career in science is maddening. I like labs and search committees that hire based on methodological rigor almost independent of the results, but I don’t think most places are like that.
Ironically, the same data set would most likely have been very interesting without the “plateau” claim. It is possible that David was too attached to that erroneous concept.