The Office of Research Integrity says Adam Savine, a former graduate student in psychology at Washington University in St. Louis, committed misconduct in work that tainted three papers and six abstracts he submitted to conferences.
One of Savine’s studies that drew some media attention involved Diederik Stapel-esque research showing which brain region lights up when people see money. He was quoted in this 2010 article on Medical News Today saying:
“We wanted to see what motivates us to pursue one goal in the world above all others,” Savine says. “You might think that these mechanisms would have been addressed a long time ago in psychology and neuroscience, but it’s not been until the advent of fMRI about 15-20 years ago that we’ve had the tools to address this question in humans, and any progress in this area has been very, very recent.”
Apparently, now we know. According to the notice, Savine engaged in misconduct in research funded by four grants:
- National Institute of Mental Health (NIMH), National Institutes of Health (NIH), grant R56 MH066078
- National Institute on Drug Abuse (NIDA), NIH, grants F31 DA032152 and R21 DA027821
- National Institute on Aging (NIA), NIH, grant T32 AG00030
ORI found that Savine had falsified data in the following three papers:
- Savine, A.C., & Braver, T.S. “Local and global effects of motivation on cognitive control.” Cogn Affect Behav Neurosci. 12(4):692-718, 2012 Dec. (not yet cited, according to Thomson Scientific)
- Savine, A.C., McDaniel, M.A., Shelton, J.T., Scullin, M.K. “A characterization of individual differences in prospective memory monitoring using the Complex Ongoing Serial Task.” J Exp Psychol Gen. 141(2):337-62, 2012 May (cited three times)
- Savine, A.C., & Braver, T.S. “Motivated cognitive control: Reward incentives modulate preparatory neural activity during task-switching.” J Neurosci. 30(31):10294-305, 2010 Aug 4 (~~not yet cited~~ cited 31 times)
He also falsified data in these six conference abstracts:
- Savine, A.C., & Braver, T.S. (November 2010) “The contextual and local effects of motivation on cognitive control.” Psychonomics Society, St. Louis, MO
- Savine, A.C., & Braver, T.S. (November 2010) “A model-based characterization of the individual differences in prospective memory monitoring.” Psychonomics Society, St. Louis, MO
- Savine, A.C., & Braver, T.S. (November 2010) “Motivated cognitive control: Reward incentives modulate preparatory neural activity during task-switching.” Society for Neuroscience, San Diego, CA
- Savine, A.C., & Braver, T.S. (June 2010) “Motivated cognitive control: Reward incentives modulate preparatory neural activity during task-switching.” Motivation and Cognitive Control Conference, Oxford, England
- Savine, A.C., & Braver, T.S. (January 2010) “Neural correlates of the motivation/cognitive control interaction: Activation dynamics and performance prediction during task-switching.” Genetic and Experiential Influences on Executive Function, Boulder, CO
- Savine, A.C., & Braver, T.S. (June 2009) “Incentive Induced Changes in Neural Patterns During Task-Switching.” Organization for Human Brain Mapping, San Francisco, CA
Here’s what Savine acknowledged doing:
- falsified data in Cogn Affect Behav Neurosci. 2012 to show an unambiguous dissociation between local and global motivational effects. Specifically, Respondent exaggerated (1) the effect of incentive context on response times and error rates in Table 1 and Figures 1 and 3 for experiment 1 and (2) the effect of incentive cue timing on response times and error rates in Table 2 and in Figures 6, 9, and S2 for experiment 2.
- falsified data in J Exp Psychol Gen. 2012 to show that prospective memory is influenced by three dissociable underlying monitoring patterns (attentional focus, secondary memory retrieval, information thresholding), which are stable within individuals over time and are influenced by personality and cognitive differences. Specifically, Respondent modified the data to support the three category model and to show (1) that individuals fitting into each of the three categories exhibited differential patterns of prospective memory performance and ongoing task performance in Tables 1-3; Figures 5-8, and (2) that certain cognitive and personality differences were predictive of distinct monitoring approaches within the three categories in Figure 9.
- falsified data in J Neurosci. 2010 and mislabeled brain images to show that motivational incentives enhance task-switching performance and are associated with activation of reward-related brain regions, behavioral performance, and trial outcomes. Specifically, Respondent modified the data so that he could show a stronger relationship between brain activity and behavior in Table 2 and Figure 4 and used brain images that fit the data rather than the images that corresponded to the actual Talairach coordinates in Figure 3.
Unfortunately for one of Savine’s Wash U. colleagues and co-authors, Todd Braver, a 2013 paper of his in Frontiers in Cognition titled “Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry,” might suffer collateral damage. It cites two of the soon-to-be-retracted articles — which might necessitate a correction, or even a retraction.
Savine — whose name appears on the Neurotree website — has agreed to submit to a three-year supervisory period for any work involving funding from the Public Health Service. He spent some time after Wash U at the University of Michigan, in the lab of John Jonides, but left in the fall, according to the lab.
Savine won a 2011 travel award from the Society for Neuroscience, and gives tips on how to apply for that grant here. That might be advice best unheeded, much like grant consulting services from Michael Miller, another neuroscientist found by the ORI to have committed misconduct.
Hat tip: Rolf Zwaan
Whoo-boy. I know some of these folks. Never met this particular guy Savine, but the co-authors are known to me.
Clarification here: Adam Savine was a PhD candidate at Washington University, not a postdoctoral researcher.
Fixed — thanks.
“Savine won a 2011 travel award from the Society for Neuroscience, and gives tips on how to apply for that grant here.” This video now “is private” after people including me left comments.
On PubMed, he has six papers. I wonder if the other ones have been checked as well. Also, it is unclear how he was caught.
“Based on the report from Washington University in St. Louis (WUSTL) and Respondent’s admission…”
Sounds like someone dropped a dime on him at his home institution. Mentor? Colleague? Points to WashU for not sweeping it under the rug, I guess.
Two things bug me about this:
1. For two of these papers, Braver T. was second (and senior) author. Does this mean the second author did nothing?
2. The 2010 JoN had zero citations. Does that mean that people in the field were suspicious of the findings? An average 2010 JoN paper should have 10-15 citations.
Apols, I was looking at the wrong entry in Web of Science. That study was actually cited 31 times, as now noted in the post using a strikethrough. Thanks for flagging it.
How was it possible for this graduate student to publish three papers and six conference abstracts and submit false data in four NIH grants while supposedly being trained and supervised by senior faculty? Should the mentor(s) not have been made to share responsibility for the misconduct? If that were done more often, and publicized, I believe it would likely improve the quality of mentoring.
Don Kornfeld
Hard to say. Having been present during a case of data fabrication, I can tell you that it is not always obvious. One of my fellow students worked long hours in the lab, and we all thought he was a dynamo. Turned out that all of his hard work was involved in fabricating and falsifying data, and his advisor was extremely lucky (and smart) to have asked the right questions and pushed hard for the answers despite the student’s adept evasions and outright lies. I think we are all prepared to deal with inexperience and incompetence, but complex deceptions are hard to anticipate.
I’m sure it’s not easy, yet David Wright and Susan Titus at ORI reviewed 154 closed ORI cases and, based on their review of the investigations by the home institutions and ORI, reported that 75% of mentors had not reviewed source data and two thirds “had not set standards”. They also found that over 50% of the trainees reported some kind of stress. Hopefully, not caused by the mentors, but perhaps amenable to their psychological support, which was apparently not provided.
Don Kornfeld
Wright, D.E., Titus, S.L., Cornelison, J.B. “Mentoring and Research Misconduct: An Analysis of Research Mentoring in Closed ORI Cases.” Sci Eng Ethics (2008) 14:323-336.
“They also found that over 50% of the trainees reported some kind of stress”
I’m not sure why this is surprising. With the current job market in academia, of course trainees are stressed. Probably, the motive behind this fabrication case was career advancement.
I agree, career advancement in one form or another is indeed the stressor. That stress is universally experienced by trainees and to a lesser degree by others. Two questions arise: Who among the trainees will succumb to the pressure? What can be done to prevent it?
Identifying the potential perpetrators is difficult; however, the lifelong perfectionist is a very likely candidate. Good mentors, in their supportive role, should be able to identify that trait and begin the process of introducing realistic goals for such a trainee. The other, less compulsive trainees can be reassured that one failed experiment need not destroy one’s career.
Mentors in their purely supervisory role should, in most cases, be able to detect the false data before it becomes the basis for a publication.
Don
“The lifelong perfectionist is a very likely candidate.” I think the extroverted sociopath is an even more likely candidate. In the absence of real data, we’re all just speculating.
“ORI reported that 75% of mentors had not reviewed source data and two thirds “had not set standards”
Success has many fathers; failure is an orphan. I daresay some mentors may have provided candid responses, and some mentors would willingly report fraud by their trainees to the ORI – I think there was a Case Western example recently – but in general you would get a more reliable data set by writing all possible explanations on pieces of paper, putting them in a hat, and drawing them out at random for each case.
I have also reviewed 146 ORI reports from the same period reviewed by Wright and Titus. Those reports do contain the information which would allow the identification of instances of deficient mentoring.
As they point out, 75% of mentors had not reviewed source data and two thirds “had not set standards”. That is not difficult to identify as these stories unfold.
They also had the impression that over 50% of the trainees reported some kind of stress.
I agree with them that a good mentor might be able to provide a supportive relationship which would minimize the likelihood that a trainee might panic and commit an act of misconduct.
Don K
Interestingly, I reviewed a version of the CABN paper, but for a different journal (where it was rejected). In my review I raised questions about the unusually high error rates in both experiments, although it did not cross my mind that the data may have been falsified.
If it turns out the mentor was the whistle-blower, we need to be very careful about potentially tainting him. Reporting one’s student is an emotionally wrenching thing to do, I’m sure, though the ethical need to do so is clear. Very few mentors check raw data, etc., from seemingly capable grad students. This mentor should be congratulated for his courage, oversight, and ethical conduct. And ORI and retractionwatch.com should make appropriate modifications to the announcement; the last thing we want to do is intimidate watchdogs and whistleblowers by innuendo and insinuation.
I know the senior author, and he is a very reputable guy, who is probably in great distress over this situation. If you work with someone who you mentor, that person is like your son, and seeing that person in the true light as a cheater and faker can only be very painful.
He may well be, but, you see, how do we know for sure? Many people who were caught cheating had colleagues who thought very highly of them before they were found out. Think of Marc Hauser, for example. Suspicion due to guilt by association ends up infiltrating everything when you start looking at science through this lens.
I looked at the linked ORI report, which says the investigation was “Based on the report from Washington University in St. Louis (WUSTL) and Respondent’s admission”. Since no one else was implicated by the investigation, this pretty much assures it was his mentor – or someone else in his lab – who reported the research misconduct. As I said, we need to be very careful with our words in such situations. Comments like yours, Average PI, or yours, Dr. Kornfeld, might create a McCarthy-like environment for future whistle-blowers, who will be dissuaded from reporting misconduct by this kind of innuendo, the last thing we’d want.
retiredPI, I’m not that influential, I believe. It is not my words that create a McCarthy-like environment. It is the reality of dealing with these issues in public: if one is associated with a fraudster, then one is a suspect. That’s not just me saying it, it’s the way it is, unfortunately.
Yes, much remains to be done to protect whistleblowers. They are a major source of information leading to the detection of misconduct. However, this mentor who identified the misconduct of his graduate student should not be considered a whistleblower. He was merely doing his job, unfortunately, rather late in the game. While it is commendable that he finally identified fraudulent data, the question that must be asked is why the fraudulent data had not been identified in those earlier publications, which I assume he co-authored.
I am not a bench scientist. I can urge closer trainee/mentor relationships and supervision which, obviously, in many ways, should reduce misconduct in trainees. Is it unreasonable for agencies which fund training grants to expect a realistic ratio of trainees to mentor? To establish guidelines for mentorship? To hold mentors who co-authored a trainee’s paper to share responsibility if it contains fraudulent data?
Don Kornfeld
How about everybody put down that handful of stones you were about to cast and take a minute to read this interview with Savine’s mentor:
http://www.stltoday.com/lifestyles/health-med-fit/washington-u-student-s-mentor-talks-about-discredited-research/article_e2275d60-1ead-5906-851a-59c7a4daf6e5.html
What a nightmare. I was particularly bothered by this:
“Specifically, I was not informed or forewarned by either WUSTL or ORI (the Office of Research Integrity) about the facts discovered in Adam’s case, nor that these would be reported by ORI on a publicly accessible website. You learned of the outcome of Adam’s case before I did, which I am pretty upset about — given that I was the one to report him in the first place.”
Prior to this interview, my reading of this case was that the most likely person to have raised the alarm was Scullin, M.K., also at the same university.
What Braver claims may be true; it is impossible for me to judge. The best way to avoid these situations is complete transparency: in this case, the report Washington University supplied to the ORI being made available.
As far as it goes, my understanding is that the whole fMRI field is rife with bodginess. Savine must have done something very overt indeed to fall afoul in a field notoriously lacking in rigor.
This sounds like unwarranted overgeneralization to me. I don’t know the fMRI field well, but my general impression is that the rigor is probably not lower than what you have in most systems neuroscience subfields.
The real problem is people using tools and techniques they do not understand well, I think. For example, if you have an artist with no neuroscience background use fMRI to do an uncontrolled study about cubist paintings, then you will have a fluffy fMRI study that is difficult to interpret. That is true of every technique, I think.
The mentor has posted a response on his lab web site: http://ccpweb.wustl.edu//ORIresponse.html
Dr. Braver’s interview, as reported in the post above (mri.techie), raises fundamental questions regarding the responsibility of the mentor of a trainee in science. He was asked why data fraud in three published papers, which he co-authored, and four training grant applications was not detected. His defense was the need to provide a trainee with “autonomy”.
How should autonomy be defined? How much and what kind should be allowed, and when?
Does a trainee’s autonomy ever absolve a mentor of responsibility for a trainee’s paper published with flawed data? In my opinion, joint authorship requires joint responsibility.
Should specific guidelines for mentorship be established? Would it be helpful for funding agencies to establish realistic ratios of trainees to mentor? Should funding agencies not consider the “quality of the training” as well as the quality of the science as a criterion when awarding funds?
In his Post-Dispatch interview, Dr. Braver reports that their laboratory will, “in the future make all aspects of data analysis and processing more transparent and reproducible by others.” Why should that not be a universal policy?
Don Kornfeld
Shortcuts are taken because in our society science is a competitive endeavor and there is only a small and finite pool of funds available that is given out to the most “productive” labs. It would take much longer to check things multiple times and to include more safeguards in the process.