Last month, we reported on the upcoming publication of a new book by Marc Hauser, the former Harvard psychologist found guilty of misconduct by the Office of Research Integrity. The main thrust of our post was questioning why two leading science writers would blurb the new book, Evilicious, but we also pointed out that Hauser hadn’t even bothered to note in his list of publications that one of his papers had been retracted. That seemed consistent with his neither admitting nor denying misconduct, as reported in the Office of Research Integrity’s findings.
A few days after our post ran, Hauser tweeted:
some have rightly pointed out that i didn’t flag the one paper i have retracted. that has been corrected at http://mdhauser.blog.com
When we saw that tweet last week and went to Hauser’s site to check, it seemed that the paper was still listed, without any notation. So we asked on Twitter whether he was sure he’d made a change. He said he was:
@ivanoransky it is flagged as such (in red and with the word RETRACTED). refresh your browser
When we checked again, it certainly was marked as Hauser said:
33. Hauser, M.D., Weiss, D. & Marcus, G. (2002). Rule learning by cotton-top tamarins. Cognition, 86: B15-B22. RETRACTED
So, a step forward. But Hauser couldn’t leave it there. He had to add the “but I was right!” protestation we see too often with those forced to retract papers for misconduct:
@md_hauser might also see replications by murphy et al 2010, Science; Neiworth, 2013, Behav Proc
Perhaps it was appropriate that, given the way he wrote his tweet, he was talking to himself.
Any work product or commentary from this man should be totally disregarded—period!
Ed is right. How can you trust anything Hauser does?
Whether or not his work has been replicated is irrelevant. It doesn’t mean he didn’t cheat. He could have made it up (and did) and happened to guess right.
Science depends on honesty and trust. Untrustworthy people like Hauser should pick a new profession.
Is this book partly based on some of the morality work he’s done that two independent labs can’t replicate? http://psychfiledrawer.org/replication.php?attempt=MTYy
@failuretoreplicant: Nice website! Some studies I’d assumed were crap turn out to replicate well. Others that I’d thought must have substance turn out to be irreproducible. Perhaps I’ve been too hard on psychology. At least, your site suggests that there is real science to be done and that someone is doing it.
for those who care, these supposed failures to replicate were not actual replications. i have posted responses on psychfiledrawer to both, but reproduce them here: papers by Greene et al. and by Waldmann & Dieterich claim to be failures to replicate a paper by Hauser et al. on moral judgments. And yet both papers use importantly different methodologies and materials and thus don’t constitute replications:
1. Greene et al.
Important methodological details ignored
The study by Greene et al. used importantly different methods from Hauser et al. that are not reported in this submission:
1. As noted, Greene et al. had an overall sample size of 366 vs. approximately 5000 in Hauser et al.
2. Greene’s subjects responded to the dilemmas in a public space with the experimenter present; subjects in Hauser et al. responded anonymously on the web, at the Moral Sense Test.
3. Greene’s subjects were tested in Boston and New York City, whereas Hauser et al. tested subjects from 120 countries, and from a much wider age range.
4. Greene et al. accompanied their dilemmas with diagrams, whereas Hauser et al. did not.
All of these methodological differences could make substantial differences to the outcomes observed, and must be tested. Further, putting these issues to the side, Greene et al. provide a different explanation for the contrasting results, one that is not about a failure to replicate but about important aspects of the wording of the dilemmas: “Our finding of equivalence between the loop (intentional harm) and loop weight (harmful side-effect) dilemmas directly contradicts some earlier findings (Hauser et al., 2007; Mikhail, 2000), but is consistent with other earlier findings (Waldmann & Dieterich, 2007). Following Waldmann & Dieterich, we attribute the effects observed by Hauser et al. (2007) and Mikhail (2000) to a confound whereby the loop dilemma, but not the loop weight dilemma, refers to the victim as a ‘heavy object’ (‘There is a heavy object on the side-track… The heavy object is 1 man…’ vs. ‘There is a heavy object on the side-track… There is 1 man standing on the side-track in front of the heavy object…’).”
2. W&D
Important methodological details ignored
The comparison of W&D to Hauser et al. is interesting, but the authors fail to note extremely important methodological differences between the studies:
1. Sample size: W&D, study 1 = 56 and study 2 = 123; Hauser et al., a single study of 5000.
2. Subject population: W&D, college students in Germany; Hauser et al., subjects from 120 countries, with a wide range of ages and educational backgrounds.
3. Scenarios: W&D, one variant of the trolley problem in study 1 and 3 variants in study 2; Hauser et al., 4 trolley problems in a single study.
4. Scenario presentation: W&D, in class with paper and pencil; Hauser et al., on the web, at the Moral Sense Test site.
In addition to the above, even the trolley problems were not worded in the same way at all, including W&D’s use of a bus as opposed to a trolley, the composition of the victims, and so on. Given these factors, stating that there were “No” methodological differences is incorrect.
In all seriousness, it’s too bad that, back in 2002, Hauser didn’t break up his manuscript into two smaller papers: (1) a paper with scientific speculation, giving his theory, how to test it, and what data he would expect to see in a properly conducted experiment, and (2) the experimental results. Then when it turned out that he did not openly report and interpret his experimental results, he could retract paper 2 and keep paper 1, and he could say that paper 1 was replicated. Cos that’s what he’s saying, right? That his data didn’t say what his data said, but his speculation was correct?
Also, as the commenter above notes, for consistency it would make sense for him to also tweet something to himself about all his so-far-unretracted papers that failed to replicate. Or does replication only go one way for him? My guess is that if his results failed to replicate, he still thinks his theories are correct and is just waiting for future researchers to eventually replicate them.
I would think that, as a psychologist, Hauser would be interested in the psychological phenomenon on display here: he acts as if his theories are so bulletproof that he can never make a mistake, as if publication somehow bestows upon a speculation some permanent air of authority.
Investigating Research Integrity? Better start by investigating the Office of Research Integrity!
It is ironic that the Office of Research Integrity (ORI) is regarded as a stronghold of ethical standards in academic research. For those who have closely examined trial proceedings involving the ORI, nothing could be further from the truth. ORI surely catches some of the bad science that appears to be rampant, but its incompetence and abusive behavior often go unnoticed by the general public. If you examine the notorious Baltimore-Imanishi-Kari case closely, you will see exactly what I am talking about.
The ORI has very little oversight and operates largely at the margins of democratic transparency. Unfortunately, its incompetence becomes apparent only in cases where its proceedings are brought to light, as in the Baltimore-Imanishi-Kari case. The fact that ORI lacked the expertise to properly assess that case did not deter it from performing a “statistical analysis” of the data under scrutiny and concluding (incorrectly) that Dr. Imanishi-Kari had committed fraud. Nothing is more dangerous than drawing conclusions from statistics when you don’t know what the data mean! But ORI did not treat its sloppy findings with caution (after all, who cares about destroying a human being?). To justify its existence as the ethics rottweiler, the ORI invested heavily in Imanishi-Kari’s downfall, bullied the institution where she worked (after all, nobody wants to lose NIH financial support), and trashed a good 5-6 years of her life. When she brought the right experts to the trial, she won her case with flying colors, revealing the venality and incompetence of the ORI. She could have gotten tens of millions from NIH but, as far as I know, chose not to sue.
Ethical standards? Beware of people who talk too much about ethics! Case in point: Alan Price, the former ORI director. Price, the ORI insider, now offers his consultancy services to institutions that investigate misconduct and must report to ORI, so that institutions can be more effective at neutralizing witnesses and destroying reputations to justify ORI’s role in society. Furthermore, ORI even recommends Alan Price as an advisor to institutions. Any conflict of interest here?
It is true that ORI has a job few would enjoy. It is hard to imagine a successful scientist working at ORI. Yet its role is viewed as important to the taxpayer. But this perception will quickly change as ORI’s actions are brought to light and Congress becomes more and more aware of its tactics. Bring the ORI proceedings to light, and the agency disintegrates into thin air.
Well, Xi, you are simply wrong in your conclusions. Now that I have retired from ORI, some institutional officials have indeed retained me to provide advice on how to conduct their research misconduct investigations, including how best to comply with the ORI regulations and respond to ORI’s questions. You are wrong about this being done so that institutions can somehow “neutralize witnesses” and “destroy reputations.” My advice focuses on how institutions need to follow the ORI regulations while conducting investigations and do their best to protect the position and reputation of the parties, who are assumed to be innocent and honest until the evidence or their admissions prove otherwise. See my website for services:
http://www.researchmisconductconsultant.com/Services.htm
During my term of service at ORI, from 1989 to 2006, ORI had many successful scientists who had been federal grant-holding faculty members at universities (like Hallum, Price, Fields, Krueger, Mosimann, Abbrecht) or researchers at government laboratories (like Williams, Davidian, Dahlberg, Narva, Fields) and/or small business labs (like Dahlberg), and they chose to devote the last phases of their rich careers to promoting and upholding ethical standards in research on behalf of the scientific community and the taxpayers at ORI. See my paper on ORI history: Accountability in Research 20, 291-319, 2013, at http://www.tandfonline.com/doi/pdf/10.1080/08989621.2013.822238
I imagine that “Xi Han Wang” is a pseudonym used by somebody whose research has been questioned.