Retraction Watch

Tracking retractions as a window into the scientific process

University of Michigan psychologist resigns following concerns by statistical sleuth Simonsohn: Nature

with 11 comments

A second psychology researcher has resigned after statistical scrutiny of his papers by another psychologist revealed data that was too good to be true.

Ed Yong, writing in Nature, reports that Lawrence Sanna, most recently of the University of Michigan, left his post at the end of May. That was several months after Uri Simonsohn, a University of Pennsylvania psychology researcher, presented Sanna, his co-authors, and Sanna’s former institution, the University of North Carolina, Chapel Hill, with evidence of “odd statistical patterns.”

Simonsohn is the researcher who also forced an investigation into the work of Dirk Smeesters, who resigned last month. Last week, Yong reported that Simonsohn had uncovered another case that hadn’t been made official yet.

According to today’s story, Sanna has asked the editor of the Journal of Experimental Social Psychology — which is also retracting one of Smeesters’ papers — to retract three papers published from 2009 to 2011, apparently the three he published in that journal during that period.

The resignations, of course, follow that of Diederik Stapel, another psychology researcher.

Read Yong’s full report here.
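Yong’s story does not spell out Simonsohn’s actual analysis, but the general logic behind “too good to be true” critiques — echoed by commenters below who note that every significant effect in the retracted papers was p < .001 — can be sketched with a toy calculation: if each study in a series has typical statistical power, an unbroken run of significant results is itself improbable. The function below is a hypothetical illustration of that excess-significance argument, not Simonsohn’s method.

```python
def prob_all_significant(power: float, n_studies: int) -> float:
    """Probability that every one of n independent studies reaches
    statistical significance, assuming each has the given power."""
    return power ** n_studies

# With a fairly generous power of 0.5 per study, a ten-for-ten run of
# significant results would be expected only about 1 time in 1000.
print(prob_all_significant(0.5, 10))  # -> 0.0009765625
```

The point of the sketch is only that long streaks of clean, highly significant results are rare under realistic conditions, which is why such streaks invite closer statistical scrutiny.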

Written by Ivan Oransky

July 12, 2012 at 4:14 pm

11 Responses

Subscribe to comments with RSS.

  1. That really is an odd pattern of significance:

    “When thoughts don’t feel like they used to: Changing feelings of subjective ease in judgments of the past”:

    Every significant effect was “p < .001”.

    “Rising up to higher virtues: Experiencing elevated physical height uplifts prosocial actions”:

    Every significant effect was “p < .001”.

    “Think and act globally, think and act locally: Cooperation depends on matching construal to action levels in social dilemmas”:

    Every significant effect was “p < .01”.

    Where are all the p = .059’s that I’m always cursed with?

    failuretoreplicant

    July 13, 2012 at 8:40 am

    • Rising stars get p<.001! That's why they are rising, while the rest of us losers usually get p=.059! :)

      Jon Beckmann

      July 13, 2012 at 9:53 am

      • p=.059 is plain rubbish.
        p=.0501 should be the number when you may sincerely be fed up with your data…

        SF

        July 13, 2012 at 11:12 am

    • exactly, think about all the published articles with p<0.00000000000000000000000001

      July 26, 2012 at 1:43 pm

  2. The first one is reminiscent of this paper: http://www.ncbi.nlm.nih.gov/pubmed/20424062 , same year.
    Another Psychological Science fluff paper of the same kind (“Elevation leads to altruistic behavior”).

    Jon Beckmann

    July 13, 2012 at 9:52 am

  3. Re: the papers, yes, it makes sense that those would be the three, and two of them are the ones that Simonsohn investigated. Cooper is currently travelling and away from his records, so he couldn’t confirm the exact papers to be retracted.

    Ed Yong

    July 13, 2012 at 1:12 pm

  4. I have to agree with your “Another Psychological Science fluff paper of the same kind,” because the titles of all these papers (Stapel included) make the studies sound like “well, duh!” It is hard to avoid the suspicion that he/they deliberately made up trivial studies for which to manufacture data.

    Conrad T Seitz MD

    July 13, 2012 at 3:01 pm

  5. Here is another real aspect of this, sadly, ongoing problem. I was an editor back when one of the above author’s brief reports crossed my desk. While I no longer recall the details (it was in the early or mid 1990s) I do remember being impressed at how clear the results were, and I accepted the paper almost “as is” (some minor quibbles, but nothing serious).

    Now, that paper may have been just as good as it appeared. But in light of current concerns, everything becomes tainted by association — leaving investigators interested in a particular topic with no idea how to proceed.

    This is all quite sad for psychology. I hope the problems are addressed and issues redressed thoroughly; and that ultimately due diligence will lead to better understanding and knowledge of what is among the most fascinating topics — how and why we behave and think (I am not a behaviorist — thus the distinction) as we do.

    s klein

    July 20, 2012 at 5:54 pm

    • The main problem in psychology is not so much data fabrication (though that may be the case for social psychology). The main problem is selective reporting. There are a ton of “classic” effects that are cited in textbooks and all that simply DO NOT REPLICATE. That is because the original articles reported the results of a very specific set of experimental parameters under which the cool results were obtained. However, the original articles misleadingly portrayed the results as much more general and robust than they really are. Why did they do that? Well, career reasons, obviously: Saying that the cool results would only be found for a very narrow range of parameters would render them much less interesting and general.

      Jon Beckmann

      July 25, 2012 at 9:31 am

      • Agreed. That certainly is one aspect of a complex set of issues. Is it unique to psychology? I doubt it. Is it more prevalent in psychology? Probably.

        The specific issue with this particular problem is that graduate students are not instructed in the nature of coherent argument and the art of drawing permissible conclusions from a set of empirical findings. Thus, they generalize lab-based outcomes beyond (often way beyond) their comfort zone. This in turn occurs because psychology instructors themselves are often lacking when it comes to the logic of argument and inference vis-à-vis experimental results (or, for that matter, reasons to examine a question).

        I do not think this regrettable situation is unique to psychology, but, combined with the imperative to publish or perish, we do tend to turn out an uncomfortably large number of “well-educated drones” who follow the edict of “find, find, find” — irrespective of the broader content and greater relevance of those findings.

        While the same is true in other fields of science, their paradigms and methods are better established and clearer, and they have a general guiding theoretical superstructure that partially masks these pervasive drawbacks.

        What to do? Well, in the case of psychology (the only area about which I can speak semi-coherently), a strong dose of the philosophy of logic and scientific argument would certainly help in understanding whether one’s findings refer to “nature at her ‘interesting’ joints” or merely to a particular task.

        s klein

        July 25, 2012 at 11:30 am


We welcome comments. Please read our comments policy at http://retractionwatch.wordpress.com/the-retraction-watch-faq/ and leave your comment below.

