The social psychology community, already rocked last year by the Diederik Stapel scandal, now has another set of allegations to dissect. Dirk Smeesters, a professor of consumer behavior and society at the Rotterdam School of Management, part of Erasmus University, has resigned amid serious questions about his work.
According to an Erasmus press release, a scientific integrity committee found that the results in two of Smeesters’ papers were statistically highly unlikely. Smeesters could not produce the raw data behind the findings, and told the committee that he cherry-picked the data to produce a statistically significant result. Those two papers are being retracted, and the university accepted Smeesters’ resignation on June 21.
The release also takes pains to say that the university has no reason to doubt the work of his co-authors. You can read the complete report in Dutch, with Smeesters’ co-authors’ names blacked out, in an NRC Handelsblad story.
Erasmus tells Retraction Watch that these are the two papers being retracted:
- Johnson, C.S., Smeesters, D.H.R.V. & Wheeler, S.C. (2012). Visual perspective influences the use of metacognitive information in temporal comparisons. Journal of Personality and Social Psychology.
- Smeesters, D.H.R.V. & Liu, J. (2011). The effect of color (red versus blue) on assimilation versus contrast in prime-to-behavior effects. Journal of Experimental Social Psychology, 47(3), 653-656.
Smeesters has a total of six papers listed in PubMed, and 23 listed in Thomson Scientific’s Web of Knowledge. According to the latter, his most-cited paper is 2003’s “Do not prime hawks with doves: The interplay of construct activation and consistency of social value orientation on cooperative behavior,” published in the Journal of Personality and Social Psychology and cited 57 times. One of his recent papers was on the effectiveness of cosmetic advertising.
Smeesters does not appear to have worked with Stapel, but one of the papers Smeesters is retracting shares a co-author, Camille Johnson, with a Stapel paper subject to an Expression of Concern. Again, the Erasmus report said there is no reason to doubt the work of Smeesters’ co-authors, and last month, Leonard Newman told us that Basic and Applied Social Psychology is not likely to retract the Stapel-Johnson paper:
The Johnson article has not been retracted yet because the committees have not found this article to be fraudulent. If they do, we will retract the article. But based on (1) the fact that a number of papers Stapel published with Johnson have already been investigated, and (2) informal communication with the co-author involved, I do not anticipate another retraction.
Here’s Smeesters’ bio from his Erasmus homepage, which has been taken down:
Dirk Smeesters is a Professor of Marketing at the Rotterdam School of Management, Erasmus University, the Netherlands, where he teaches courses on Marketing Management and Experimental Methods. He received his BA, MA, and PhD in Psychology from the Katholieke Universiteit Leuven, Belgium. His research on unconscious influences on human perception and behavior, the psychology of money, social comparison, and mortality salience has been published in the leading academic marketing and psychology journals, such as the Journal of Consumer Research, Journal of Marketing Research, Journal of Personality and Social Psychology, and Psychological Science. He serves as an Associate Editor for the Journal of Consumer Research and the International Journal of Research in Marketing. His research has been covered by media including the Wall Street Journal, New York Times, Time Magazine, Business Week, ABCNews.com, national radio channels (in the Netherlands, Belgium, USA), and various local newspapers.
Thanks to a number of Retraction Watch readers who flagged this item for us.
A comment in the report should make us anticipate more retractions. Smeesters claims the culture in his field is such that he is certain that many do the same as he has done: remove inconvenient data to obtain statistical significance.
There should also be a third paper to be retracted: the report states that Smeesters admitted to this data manipulation in three papers.
Sorry for duplicating much of what Marco has said, I think we were writing our comments simultaneously.
yep, Smeesters http://pear.ly/bsav1 now part of University Inc. http://pear.ly/2qZ1
There are a couple of interesting things about this case. First, the university website mentions two articles that have been retracted, but when you read the report (it can be requested at their press office but is in Dutch) you will notice that it discusses at least three articles where fraud is obvious. Second, the public version of the report has been redacted very heavily, so much so that the methodology used to investigate the case has not been reported. However, apparently somebody has been developing a statistical test that can attach a “truth score” to experimental papers. I would be interested to know if any other Retraction Watch readers know about such a tool. Finally, while Retraction Watch seems to have been able to convince the university to release the titles of the papers, my inquiry was rebuffed with “we do not disclose that in order to protect the co-authors”.
On a different note, this case highlights once again a key shortcoming of experiments in the social sciences: nobody seems to keep lab diaries. Otherwise it would be much trickier to get away with these things. Naturally, this case also suggests that Stapel was just the tip of the iceberg; there is much more to come.
“On a different note, this case highlights once again a key shortcoming of experiments in the social sciences: nobody seems to keep lab diaries. Otherwise it would be much trickier to get away with these things.”
I disagree. How would lab diaries have prevented this? I suppose one could look into the lab diary and notice that X more participants were tested than were reported in the paper. But it would be easy to get around that: you could go through the lab book and add a note “written at the time of data collection” explaining why that data point was crap and should be ignored.
Thanks; what I meant to say is that with a lab diary it would be harder, though not impossible. Instead of just recording the X participants, one could record all the outcomes of the experiment and require the analysis scripts to start from the raw experimental outcomes. Any data “cleaning” is then part of the script and can be reviewed as part of the submission process (see the sketch below).
On the issue of the “truth score” though, this is very interesting…
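To make the scripted-cleaning idea concrete, here is a minimal sketch of what such an analysis script might look like (Python; the file name, column names, and exclusion rules are all hypothetical). The point is that every excluded participant is an explicit, reviewable line of code applied to the raw data file, rather than an undocumented decision made after seeing the results.

```python
# Minimal sketch of analysis that starts from raw data; all names hypothetical.
import pandas as pd

raw = pd.read_csv("raw_experiment_data.csv")  # untouched lab output
print(f"raw N = {len(raw)}")

# Every exclusion is an explicit, documented step in the script,
# not a silent decision made after looking at the results.
cleaned = raw[raw["attention_check"] == "passed"]
cleaned = cleaned[cleaned["response_ms"].between(200, 10_000)]  # implausible RTs

print(f"analyzed N = {len(cleaned)}, excluded = {len(raw) - len(cleaned)}")
```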
Unfortunately a lot of the key bits seem to be blacked out, but on page 7, second paragraph, there’s a bit where they (I think) say that out of 22 variables, you’d expect 1.1 to give a positive result at p = 0.05 by chance, but they actually found 6 of 22 significant.
However I don’t read Dutch so I can’t work out what those 22 variables are, could someone translate?
They are 22 dependent variables from 22 experiments. These are supposedly found in appendix 3 (not included). The 6 out of 22 suggests a pattern of spreads that are too small, but it is impossible to point at individual variables. Not sure how the FDR correction works, but it can be used to identify individual dependent variables that are problematic. Crucially three out of the 22 then score below 5% and these are the three articles that this committee advises the university to retract.
Thanks very much. That’s fascinating, looking forward to finding out more about this technique!
An English version of the report was made available (please see http://www.eur.nl/english/news/detail_news/article/38621-university-withdraws-articles/). The method used seems to build on the idea of false discovery rate (FDR) control.
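For readers who want to follow the arithmetic, here is a rough sketch of both steps, assuming 22 independent tests at alpha = 0.05. The committee’s exact procedure is not public; Benjamini-Hochberg is simply one standard way of controlling the FDR, and the p-values at the end are invented for illustration.

```python
# Sketch of the report's arithmetic: 22 tests, 5% false-positive rate each.
from scipy.stats import binom

n, alpha = 22, 0.05
print(n * alpha)              # 1.1 significant results expected by chance
print(binom.sf(5, n, alpha))  # P(6 or more by chance) ~ 3e-4

# Benjamini-Hochberg step-up procedure, one standard form of FDR control,
# used here to flag individual tests; the p-values below are invented.
def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])       # indices, ascending p
    cutoff = max((r + 1 for r in range(m)
                  if pvals[order[r]] <= (r + 1) / m * q), default=0)
    return [order[r] for r in range(cutoff)]               # flagged test indices

print(benjamini_hochberg([0.001, 0.004, 0.03, 0.2, 0.6]))  # flags tests 0, 1, 2
```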
“Smeesters claims the culture in his field is such that he is certain that many do the same as he has done: remove inconvenient data to obtain statistical significance.”
Do you think he means in the field of Science generally or just social science?
Well, whatever he means, it’s true for science generally.
He specifically refers to marketing and to an extent social psychology when making this remark.
John et al. (2012, Psychological Science) studied the culture that Smeesters was referring to:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1996631.
The major difference is that Smeesters excluded participants WITHOUT mentioning it in the paper. In the retracted JESP paper, the degrees of freedom from his F tests do not align with the stated sample size (DF stated as 157 should have been DF = 163), so a good reviewer could have spotted the exclusion of data. Incidentally, he reports a lot of incorrect p-values too; it looks like he does one-tailed testing without mentioning that either.
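As an illustration of the kind of consistency check described above, here is a minimal sketch; the sample size and cell count are invented, and only the 157-versus-163 gap comes from the comment. As the reply below notes, missing data or covariates can also shift the degrees of freedom, so a mismatch is a question to ask, not proof of misconduct.

```python
# Hypothetical reviewer's check: does the reported error df match the sample?
# Assumes a between-subjects ANOVA, where error df = N - number of cells.
# (Missing data or covariates can also shift the df, so a mismatch is a
# question to ask, not proof of misconduct.)
def expected_error_df(n_participants, n_cells):
    return n_participants - n_cells

n_reported_sample = 167   # hypothetical: sample size stated in the method section
n_cells = 4               # hypothetical: a 2 x 2 between-subjects design
df_stated = 157           # df actually printed alongside the F test

gap = expected_error_df(n_reported_sample, n_cells) - df_stated
print(f"expected df = 163, stated df = {df_stated}: {gap} participants unaccounted for")
```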
Thanks for the link. That’s a fairly scary paper.
@Jelte – differences in DF can also stem from missing data on (dependent) variables (in the case of ANCOVAs, for example), so this is not necessarily an indication of fraud/malpractice.
Yes Frank342, you’re absolutely right. Inconsistencies like these are quite common in the literature, and they do not increase my trust in the findings, regardless of whether they are the result of honest error or dishonest conduct.
And how did it get approved and published then? What exactly were the peer reviewers doing?
Social psychology is a rotten discipline overrun by leftist ideologues. Having taken a few “social science” courses at my undergraduate institution I say they can be done away with entirely without any detriment to the advancement of science.
It’s hard for me to observe political slant of any type in the retracted and other works of Stapel and Smeesters that have been described here. Jargon-ridden and impenetrable, yes, although perhaps not much more so than my papers in biophysics might appear to a nonspecialist; “leftist,” I see no evidence for within the discussion at RW. Not sure where you’re going with that comment.
Nice demonstration of why social psychology is necessary. How in the world can you overgeneralize from your tiny, biased sample and still believe your own conclusion? I took college chemistry, too; do you think I can make sweeping generalizations about your field on the basis of my sample? You’re ignorant, and you should know it, and stick to what you actually know.
This is difficult to encapsulate in a short comment, but NorthwesternChemist has a point — as do you, I hasten to add. One basic assumption of Old Liberalism (today’s right wing) is that people are relatively rational and autonomous. That’s inconsistent with the research direction of social psychology, which seeks the irrational and social determinants of behavior. Any psychologist who concentrates on rational and autonomous processes is likely to call himself a cognitive, rather than social, psychologist. That doesn’t make social psychology leftist, but it does put the field at odds with rightists (and, of course, vice-versa for the cognitive folks). This has nothing to do with statistics. It’s a matter of focus and philosophy.
I am not convinced by this comment. What I know, however, is that the publication pressure in social psychology is immense. The worst thing is that journal editors favour “sexy results” rather than sound methodology. The quality of scientific work must be judged irrespective of the results. If the work is done properly, EVERY result should be informative and advance science.
I recommend Mary Midgley’s ‘The Myths We Live By’ to all who start taking their own field of scientific investigation more seriously than those of others. Most of us are playing in our own comfortable playgrounds. Only a few, very few, of us are actually capable of advancing science.
Regarding Camille Johnson, indeed, the ‘Harnessing’ paper published in BASP presumably is not fraudulent. However, 6 out of 7 papers she co-authored with Stapel have been identified as fraudulent…
Guilt by association is not a proper way to go, but I think it’s quite peculiar to have worked with both Stapel and Smeesters. Moreover, the same holds for Debra Trampe from the University of Groningen. Her co-authored paper with Smeesters, however, has not been analyzed yet.
By the way: the third fraudulent paper was a working paper that hadn’t been submitted yet, so it couldn’t be retracted.
I am a collaborator of Camille Johnson’s. Part of the reason she is so collaborative is that her own institution doesn’t afford her many data collection opportunities. She has many collaborators, only two of whom (from the same ultra-competitive Dutch academic culture) were suspect.
I am a social psychologist who has also published with Camille Johnson, and I wanted to clarify that she is NOT responsible for any of this. We have run several studies together, and she not only produces all of the raw data when I ask, but also exercises the highest level of academic integrity and thoroughness in writing up the results. This is an unfortunate byproduct of her having collaborated with two different people who happened to do research in her area (social comparison). Guilt by association is a convenient heuristic, but it doesn’t apply here. Please don’t raise such questions – which could jeopardize the career of an innocent young scholar – without actual evidence.
As you said, “guilt by association is not a proper way to go,” and while I see the urge to make such judgments, I would encourage everyone to refrain from doing so. I have known Camille Johnson for 12 years and can attest to her integrity and conscientiousness. I remember that it was during her graduate student years that she went to the Netherlands, a hotbed of social psychology, of her own accord to forge new intellectual collaborations – which is highly commendable, I must add. She began collaborating with Stapel then, and later Smeesters. In both cases, there was no reason for her (or any other Stapel or Smeesters collaborators) to suspect they had been handed fraudulent data. Let’s also remember that in both cases, thorough investigations have absolved all co-authors of any guilt. Michael Olson also brings up a good point that only two of Camille Johnson’s many collaborators have committed data fraud.
Now, let’s put ourselves in her shoes – a young, pre-tenure professor and an honest scientist. In less than one year, you see seven of your publications scrubbed from your CV through no fault of your own. Let’s not add insult to injury by making unfair assumptions about her guilt or “blaming the victim.” If anything, she, as well as the other co-authors, deserves our compassion, especially at a time like this.
But what happened to the “if something is too good to be true, it probably is” principle? It would be great to know how many times CJ was given raw data by the SS duo and the said data turned out NOT to support the initial hypothesis.
Unfortunately, the publication process is biased in such a way that null results tend not to get published. As a result, such data tend to end up in the “file drawer” and there would be no reason for anyone to share those results with anyone. That is, of course, a problem in and of itself, but that’s a whole different topic of debate, and I don’t think this is “proof” of any sort of wrongdoing.
Again, let’s stick to the facts. As another commenter said, it would be irresponsible to jeopardize someone’s career over unsubstantiated allegations.
Stapel has been very prolific, so I do not think his hypotheses were proven incorrect by data very often. If that is not a red flag, I do not know what is, unless people also believed that Stapel could walk on water.
My working hypothesis is that many people had nagging suspicions about the validity of the data provided but chose to suppress them because it was in their interest to do so.
SF, I’m not sure who you are. But it seems like you make a sport out of making allegations by hiding behind anonymity. How ballsy.
I don’t know Cammie Johnson personally, so I’ll leave it to her friends to attest to her honesty. It doesn’t take more than a drop of common sense to see that she was unlucky in her choice of two coauthors. She had no way of knowing this sad fact about them, any more than the rest of the field did, so LAY OFF.
However, I do intimately know two other people who were burned by Dirk Smeesters: one is my colleague and friend Christian Wheeler (his name isn’t secret, as it’s published on this site). The other is me, Jonathan Levav (until this moment, unpublished). Christian has five papers with Dirk; I have two, both of which are unpublished manuscripts that were invited for revision at the Journal of Consumer Research. Neither one of us ran the questionable studies in these papers, and neither one of us is guilty. We’re associated with Dirk as coauthors, but we’re not guilty. You might find this “peculiar”–that we’re not guilty–but those are the facts. I’m not afraid to say this and I have nothing to be ashamed about, so I don’t have to hide behind an anonymous initial.
For all you curious types on this site, and to Mr./Ms. SF, mine is the third paper that was redacted from the report. It’s called Seeking Freedom Through Variety, and apparently it includes two studies–both of which Dirk ran–that are questionable (in fact, he ran all of the studies on our papers together, but only two are in question).
Let me explain what I mean by questionable: it looks like the means and standard deviations we report are unlikely to have come from a random sample. I didn’t know this until an analysis was shared with me some time ago, in which data were simulated using the parameters we reported, showing that our specific pattern was unlikely. To be specific, if you look at all of the studies that are reported in my paper with Smeesters (a total of five studies), the probability of the results coming from a random sample is about 1 in 270. There will be a paper coming out on the method used to find this, and I’ll leave it to that talented author’s paper to explain the technique. It’s a shockingly sensible technique, by the way, and I predict it will be a highly impactful paper when it is published.
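The method Levav refers to had not yet been published when this comment was written, so the following is only a rough sketch of the general idea he describes: simulate data from the reported parameters and ask how often the reported pattern arises. All numbers here are invented; the sketch tests whether a set of condition means is implausibly close together given the reported within-cell SD and cell size.

```python
# Rough sketch of a "too similar to be random" simulation; all numbers invented.
import numpy as np

rng = np.random.default_rng(0)

def p_means_this_similar(cond_means, sd, n_per_cell, n_sims=100_000):
    """Chance that random sampling yields condition means at least as close
    together as the reported ones (null: all true means are equal)."""
    k = len(cond_means)
    observed_spread = np.std(cond_means)
    sims = rng.normal(np.mean(cond_means), sd, size=(n_sims, k, n_per_cell))
    sim_spread = sims.mean(axis=2).std(axis=1)   # spread of cell means per sim
    return float((sim_spread <= observed_spread).mean())

# Invented reported statistics: three condition means that are suspiciously
# similar given the within-cell SD and cell size.
print(p_means_this_similar([5.21, 5.23, 5.22], sd=1.4, n_per_cell=15))
```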
Some of you might wonder, how did we not know that something was up? The answer is that it’s not that easy to spot a coauthor who is doctoring data. The variety seeking paper, for instance, started in a delightful conversation that I had with Dirk when I visited Erasmus. Dirk mentioned a finding on social exclusion that he had; I had an interest in why people seek variety. We came up with what we thought was an interesting hypothesis to test that related to previous work on variety seeking, some of which is my own. Dirk is a nice, intelligent guy, and was an enthusiastic coauthor. He was a good critic of research. He was respected in the field. He also was at Erasmus, which has perhaps the best behavioral lab I had ever seen. So when the data streamed in every few months, it was hardly suspicious. Unlike Stapel, Dirk actually ran studies. What he did with the data afterward is what’s in question.
So there you have it, SF, and all the rest of you voyeurs, haters of social psychology, great scholars, crappy scholars, cool people, losers, innocent bystanders, or whoever the fuck is still reading. You have your missing suspect paper, with another researcher to add to the mix of affected (infected?) names.
To all the readers on this site–those who have ranted and those who haven’t–you might want to consider a few things. Dirk Smeesters was a friend to many of us, a very nice guy. Maybe he’s not a friend any longer, but he was for some time. He has a family, and he’s paying a heavy price. Although this is probably deserved, it’s sad for many of us to watch. And although for many of you this whole incident provides much needed entertainment, for those of us caught up in it, the situation has been extremely distressing. I shudder to think how Cammie feels right now; most people would just quit the field in her shoes. Personally I’m over this whole thing now–it’s been a few months since I’ve known–so I feel free to write about it and have my writing cached in cyberspace for posterity (probably a terrible idea). I don’t know what motivated Dirk to do what he did, but I do know that he didn’t have to do it because, in reality, he was smart enough to be a respected scholar without doctoring data. This whole situation plain sucks.
SF, you coward in hiding, before you publicly speculate about people’s careers and judgment, take a deep breath and ask yourself who the hell you think you are to so freely besmirch people’s reputation in a public forum. If you have something to say, stand behind your name. Your NAME. You know Camille Johnson’s name. You know mine.
Jonathan Levav
Stanford University
“I don’t know what motivated Dirk to do what he did, but I do know that he didn’t have to do it because, in reality, he was smart enough to be a respected scholar without doctoring data. This whole situation plain sucks.”
Well, many of us can only guess. You seem to have known Mr. Smeesters professionally and personally; if you don’t know why he falsified data, it’s left to the rest of us to guess why he did.
I think it was a need to score in a politicised environment. What’s your guess?
Thank you, Dr. Levav, for so eloquently expressing a viewpoint that I’m sure resonates with many of us social psychologists. This is a tragic time indeed. I not only work with Cami on several projects, but I also co-authored two (separate) papers with Dirk. Neither paper is central to my current research, and the last was published in 2009… however, I feel for my colleagues and others who are more directly affected. Despite the disturbing and potentially demoralizing aspects of the whole ordeal, it is reassuring that people like you, Clara, and Mike are so quick to defend the field as a whole, as well as specific individuals in it.
Kim Rios (in the spirit of eschewing anonymity)
Dear Jonathan: THANK YOU. Your impassioned response brought tears to my eyes. I wish you, Christian (also a friend of mine), Cami, and everyone else caught up in this nothing but the best.
-Clara Cheng
Thank you very much for your elaboration, Jonathan. Unfortunately, I’m not your typical social psychology basher, so please reread my comment. Then you may conclude that, while my comment is indeed short, it’s not oversimplified or wrong and I do think I’m asking a relevant question.
Also, I do wonder how you got so severely negatively primed that you claim I ‘make a sport out of making allegations’, when in fact this is my first post on this forum…
In the Stapel case (and to a lesser degree also the Smeesters case) the statistical errors were not overly complicated to assess. Especially as a first author, you should have your statistics straight. A first author who hasn’t collected their own data should be even more cautious. And shouldn’t co-authors bear any responsibility at all? Do we really expect reviewers to dig into all the statistical issues? I think that’s turning the world upside down. Science is based on trust, but that doesn’t release collaborators from their responsibility to produce decent, methodologically correct research papers.
And sorry to say, but I don’t think a research field in which it is apparently OK to say that ‘only two collaborators have committed data fraud’ is in a particularly healthy situation.
While coining ‘guilt by association’ I should have immediately pointed out that my observations obviously come easily with the wisdom of hindsight. Nevertheless, while this situation is a full tragedy on the personal level, I do think it is legitimate to ask questions on the professional level. Smeesters’ declaration that ‘almost everybody is doing this in his research field’ is not really helpful for winning back confidence…
By the way, Stapel also ran lots of studies (at least at the beginning of his career).
ps1. I don’t have a stake in this case, so I don’t see the added value of revealing my identity. I’m just one of the 7 billion citizens of this world…
ps2. As you are personally involved, I do very much respect the fact that you discuss your difficult situation so openly. I think that should be the (only?) proper way to handle such cases.
“So there you have it, SF, and all the rest of you voyeurs, haters of social psychology, great scholars, crappy scholars, cool people, losers, innocent bystanders, or whoever the fuck is still reading.”
One of the most remarkable comments I have read in any scientific online debate. Still, I think it only illustrates how passionately attached authors are to their data and results. It is the mission of science to detect false information and wipe it away as soon as possible. There should be no emotion in this. Actually, retractions and corrections should happen all the time, every day. Once this becomes the rule (and this blog is doing a hell of a job making the process routine), maybe scientists will become less attached to their data and less willing to commit fraud. Keep your heads up and keep going; everyone makes mistakes, it is just that not everyone is caught or exposed.
I truly feel for those co-authors (especially Dr. Johnson) who got the shaft here. As a doctoral student, this scares the heck out of me.
1 in 270 is not that unlikely, really.
I’ve never understood the obsession in social psychology for showing that tiny, seemingly insignificant environmental manipulations can completely change someone’s beliefs and behavior. Stapel and Smeesters certainly pushed this view hard. When this view is challenged, the results can be scathing personal attacks (e.g., http://blogs.discovermagazine.com/notrocketscience/2012/03/10/failed-replication-bargh-psychology-study-doyen/). Why do people want this to be true so badly?
Because the environment isn’t very important.
Talents are not distributed evenly, and in a capitalistic meritocratic society those without will be disappointed and left behind. Yes, they will suffer more. This awful truth instills so much despair among socially engaged intellectuals that many — social psychologists on the front lines — have taken it upon themselves to prove it untrue. Those who can give the slightest signs of hope that, indeed, environment does matter are given the spotlight — fame, attention, power, prestige.
But I guess you were rhetorical…
It wasn’t always so. Many of the most important studies in psychology were done by social psychologists in the 50s and 60s – Milgram, Zimbardo… but you couldn’t do anything so interesting today because it wouldn’t get ethical approval 😉
My own belief is that Milgram probably massaged his data, his protocol, or both. I saw some of the film footage he produced, and all the participants looked to me like they were acting.
After all, a study finding “most people refuse to deliver lethal electric shocks” wouldn’t have been nearly as helpful to his career as producing the reverse.
Science is a field where virtue goes very much unrewarded.
On the same subject, these critiques have been around a long time (Orne and Holland, 1968):
“Beside the myriad technical problems, even if we were to assume that everybody played his role to perfection, the experimental procedure itself contains serious incongruities. The experiment is presented as a study of the effect of punishment on memory. The investigator presumably is interested in determining how the victim’s rate of learning is affected by punishment, yet there is nothing that he requires of the S (teacher) that he could not as easily do himself. Those Ss who have some scientific training would also be aware that experimental procedures require more care and training in administering stimuli than they have been given. The way in which the study is carried out is certainly sufficient to allow some Ss to recognize that they, rather than the victim, are the real Ss of the experiment.
The most incongruent aspect of the experiment, however, is the behavior of the E. Despite the movie image of the mad scientist, most Ss accept the fact that scientists — even behavioral scientists — are reasonable people. No effort is made to emphasize the world-shaking importance of the learning experiment; rather it is presented as a straightforward, simple study. Incongruously the E sits by passively while the victim suffers, demanding that the experiment continue despite the victim’s demands to be released and the possibility that his health may be endangered. This behavior of the E, which Milgram interprets as the demands of legitimate authority, can with equal plausibility be interpreted as a significant cue to the true state of affairs — namely that no one is actually being hurt. Indeed, if the S believes that the experiment is a legitimate study, the very fact that he is being asked to continue a relatively trivial experiment while inflicting extreme suffering upon his victim clearly implies that no such suffering or danger exists.
The incongruity between the relatively trivial experiment and the imperturbability of the E on the one hand, and the awesome shock generator able to present shocks designated as “Danger — Severe Shock” and the extremity of the victim’s suffering on the other, should be sufficient to raise serious doubts in the minds of most Ss. ”
But the fact is we love the conclusions of the Milgram experiment too much ever to want to let them go.
Nonetheless, it is a sufficient clue to possible data massaging that not one of Milgram’s Ss appears to have queried these fairly basic incongruities.
Perception bias: Why every photograph in an article on fraud looks like a mugshot.
Although I fear what is to come in the field of social psychology, I’m excited by the idea that these stories might bring forth a positive change in this field. Reporting non-significant results seems impossible and we somehow like to believe in this fable that non-significant results are the consequence of bad methodological practice. I get the feeling that topics like this are getting more attention in the field of social psychology, which I eagerly welcome.
I too have absolutely no stake in this event, and I am not always a fan of Levav’s rather brash style, but his response here is very well stated and perfectly appropriate. In fact, I’d dare say that he was understated.
Shame on you, SF, for continuing to hide while spreading offensive careless comments about junior faculty whose reputation could be badly damaged by your careless allegations.
I believe allegations can be made about any aspect of scientific investigation, and any question should be taken into consideration. The attack should, however, always be directed at the manuscript being discussed, or its foundations. This is not only impersonal but in fact done anonymously; let us remember the true nature of peer review. Science needs open criticism to keep evolving. In a healthy environment where sharp criticism is common, unfounded allegations are rapidly dismissed so as to focus attention on truly important matters. The rest is public relations and politics, which are outside the scope of science and the advancement of knowledge.
Thanks for your post. Yesterday, Dutch TV news reported that the professor has resigned. He admits to “sexing up” results, as someone here called it, but denies falsification.
Today, Dutch web news sites report the findings of at least 2 papers could not be supported. In total, there are 3 suspect papers out of nearly 30 papers. Two were retracted, one was not yet published.
What does not help: it seems all the other papers, results, findings, data – whatever was of importance to his research – can no longer be found. Said professor claims this is because his PC crashed.
What is so deplorable is that a case like this one has a backlash involving not only the field, the university, and colleagues, but also students. It is also deplorable that this is not the first Dutch professor accused of fraud or plagiarism, but the third in a recent series. After the recent fraud scandals involving Dutch diplomas, these cases involving professors are really a drama Dutch universities could well do without.
Smeesters did not just sex up his data. From my understanding his data showed flagrant and rampant fraud. He appears to have lost ALL the data he ever collected (no paper or electronic versions), which in and of itself is very suspicious. He is trying to justify his own behavior by throwing marketing and social psych under the bus.
The identity of the researcher who initiated the investigation of the Smeesters papers is Uri Simonsohn. The results are in a working paper, “Finding Fake Data: Four True Stories, Some Stats, and a Call for Journals to Post All Data”. See the updated post http://www.eur.nl/nieuws/detail/article/38616-universiteit-trekt-artikelen-terug/ The link also leads you to a new, full (Dutch) version of the investigation report.
In my opinion – having been involved in social psych research – the major problem is that social psych/marketing researchers have only a few major outlets in which to publish. And it is these journals that have shifted towards publishing surprising, high media-potential, small effects. I have seen the Journal of Consumer Research, which was once a very marketing-oriented journal, move further and further toward being an experimental psych journal. For social psych/marketing researchers, JCR is arguably the highest ranked.

Editors and (to a lesser extent) reviewers have pushed research in the direction of finding ever more counter-intuitive effects without any concern for practical significance. In many marketing departments it is a well-known joke what it takes to publish in JCR (study 1: main effect, study 2: mediator, studies 3, 4, 5: moderators). It is nearly impossible to get published there with a study that builds on previous findings and adds a small insight: for instance, study 1: replication of previous findings, study 2: replication of previous findings plus a condition with a moderator, studies 3, 4, 5: fine-tuning of the moderator, etc. What editors are looking for is only new effects that GO AGAINST previous findings (not confirmation plus fine-tuning of previous findings). As such, the field moves into looking for ever weirder, crazier, more far-fetched ideas.

While such ideas might be interesting for discovering new pathways for the development of the field, they should be a niche. The core of the field should be to further develop ideas that are already accepted and fine-tune them. This change of focus would make for a higher-quality field, make the developed ideas more generally accepted, increase honest and high-quality research, and decrease the pressure of finding highly unlikely yet astonishing effects (which fosters data manipulation to produce those highly unlikely effects). Science is not about rock stars; it is about jointly discovering general principles.
Kristoffer. One of the most insightful posts amidst the puerile din here.
I’ve been trying to reconstruct Simonsohn’s methodology from the meager information available in the (Dutch) report from the Erasmus University Scientific Integrity Committee (CWI: commissie wetenschappelijk integriteit). Here are my initial findings, including R code so that more people can do more experiments, or tell me if mine are stupid, plus a critique of the statistical methodology of the Erasmus-CWI report:
http://www.math.leidenuniv.nl/~gill/#smeesters
Simonsohn’s idea does seem to work as a rough investigative tool. But it looks to me like Smeesters was subjected to medieval torture and confessed. And it seems like he did indeed have something serious to confess to, since all of his data is missing and no-one else ever saw any of it (including his co-authors, assistants, students, …).
Dr. Gill on his website asks: “Next, we do not know how Simonsohn got onto Smeesters’ tail. Was he on a cherry-picking expedition, analysing hundreds of papers, and choosing the most significant outcome for the follow-up?” My understanding is that Dr. Simonsohn did analyze hundreds of papers. I have heard he entered all the stats, over the course of a year or two, from a number of social psych and marketing publications, looked for anomalies, and was particularly interested in researchers for whom this anomaly appeared multiple times. I also heard that when he saw a particular paper that he believed was too good to be true, he would then inspect that paper for this pattern. That appears to be the case with Smeesters’ JESP paper on color. The data are remarkable given the manipulation — the same primed words had dramatically different effects on performance on an intelligence test if the materials had simply been handed out in different-colored folders. That is the only manipulation, the color of the folder, and it moderated the influence of the prime words by 30% across many different conditions.
Dr. Gill describes the technique as catching people who reduce variance by eliminating extreme values. But I have heard that Dr. Simonsohn suspects his technique is particularly valuable for catching people like Stapel who make up their data by simply adding a constant from one condition to the next — that would produce mean differences with similar variances across conditions. Smeesters has said, “I am no Stapel” but he can’t produce ANY of his raw data. Seems a lot like Stapel to me.
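A toy illustration of why that style of fabrication leaves a statistical fingerprint (this is my own sketch, not Simonsohn’s code): shifting one condition’s scores by a constant manufactures a mean difference but leaves the standard deviations identical, whereas genuinely independent samples almost never match so closely.

```python
# Toy demo: constant-shift fabrication preserves variance exactly.
import numpy as np

rng = np.random.default_rng(1)
control = rng.normal(5.0, 1.5, size=30)

fabricated = control + 0.8              # "effect" manufactured by a shift
honest = rng.normal(5.8, 1.5, size=30)  # genuinely independent sample

print(control.std(ddof=1), fabricated.std(ddof=1))  # identical to the last digit
print(control.std(ddof=1), honest.std(ddof=1))      # typically differ noticeably
```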
I agree completely with Dr. Gill that Dr. Simonsohn’s technique is useful, but the danger of a false positive is VERY high. I believe that Dr. Simonsohn only went after people who showed this pattern multiple times. I think the burden of proof must be very, very high before making an accusation.
I find it fascinating that so many comments on this article have bashed the social sciences when numerous previous posts on this blog have addressed data fraud and research abuses in the medical sciences. For example, recent entries have discussed Fujii (anesthesia) with 172 retractions, Boldt (anesthesia) with 90 retractions, and Potti (oncology) with 10 retractions. I don’t recall seeing comments on these posts about the rampant fraud and lack of scientific merit of these fields of study. Pressure to publish and data manipulation is certainly not limited to the social sciences.
The interest in social psychology, as a field, is probably due to a combination of two things. First, Stapel, Smeesters, and the investigators of both have all placed a certain amount of weight on the culture of social psychology from the beginning. The issue has been raised with respect to anesthesiology, for example, but no one seems to have good ideas on how the culture of academic anesthesiology might have contributed to the Boldt and Fujii matters. Second, the Stapel and Smeesters matters come close together in time and space, suggesting that something may be up. That’s hardly proof of anything, but two very unusual events within one field, in one year, in one small section of Europe — it’s bound to raise eyebrows.
I need to disclose that I am one of the lunatic fringe for whom social psychology has always raised eyebrows. Not social sciences generally. Just social psychology. However, I think I’m in a very small minority here. Most commenters are simply reacting to the two factors mentioned above; and I don’t think the field is in much danger from 3-sigma outliers such as myself. We can be ignored — a la Smeesters.
Obviously a sad situation. As an ethologist and brain/behavior scientist, I took a side track and wondered about the ethology of deception, along with the ethology of outrage when we feel or are deceived. Maybe this is a direction for future social psychology research, maybe not. But the circumstances do suggest avenues that future research might look into. From my perspective, we are all driven largely by deep historical roots that can lead some to deceive and most of us to be sensitive to deception, with strong negative consequences. This dual propensity can be found in a number of animal species; the roots are deep. I am glad this is coming into public airing and conversation.
John C Fentress, PhD
Eugene, Oregon