Retraction Watch

Tracking retractions as a window into the scientific process

“I placed too much faith in underpowered studies:” Nobel Prize winner admits mistakes

with 8 comments

Daniel Kahneman

Although it’s the right thing to do, it’s never easy to admit error — particularly when you’re an extremely high-profile scientist whose work is being dissected publicly. So while it’s not a retraction, we thought this was worth noting: A Nobel Prize-winning researcher has admitted on a blog that he relied on weak studies in a chapter of his bestselling book.

The blog — by Ulrich Schimmack, Moritz Heene, and Kamini Kesavan — critiqued the citations included in a book by Daniel Kahneman, a psychologist whose research has illuminated our understanding of how humans form judgments and make decisions and earned him half of the 2002 Nobel Prize in Economics.

According to the Schimmack et al. blog,

…readers of his [Kahneman’s] book “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.

Remarkably, Kahneman took the time to post a detailed response to the blog, writing:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.

We contacted Kahneman, who confirmed that he indeed posted this response on the blog Replicability-Index. It’s commendable that someone of his stature would take the time to thoughtfully acknowledge and respond to criticisms of his work in such a transparent way. (It’s earned him praise from Columbia statistician Andrew Gelman, as well.)

The blog — which will make the most sense to people versed in statistics — is about Kahneman’s citations of research on “priming,” in which the memory of something can unconsciously influence a person’s behavior going forward.

As the authors note in their blog about Kahneman’s work:

In the beginning of 2012, Doyen and colleagues published a failure to replicate a prominent study by John Bargh that was featured in Daniel Kahneman’s book. A few months later, Daniel Kahneman distanced himself from Bargh’s research in an open email addressed to John Bargh.

So the researchers decided to rate the studies Kahneman cites according to their power, and a measure known as the “R-index,” described here. As Schimmack, Heene, and Kesavan write in their blog:

To correct for the inflation in power, the R-Index uses the inflation rate. For example, if all studies are significant and average power is 75%, the inflation rate is 25% points.  The R-Index subtracts the inflation rate from average power.  So, with 100% significant results and average observed power of 75%, the R-Index is 50% (75% – 25% = 50%).  The R-Index is not a direct estimate of true power. It is actually a conservative estimate of true power if the R-Index is below 50%.  Thus, an R-Index below 50% suggests that a significant result was obtained only by capitalizing on chance, although it is difficult to quantify by how much.

They concluded:

The results are eye-opening and jaw-dropping.  The chapter cites 12 articles and 11 of the 12 articles have an R-Index below 50.  The combined analysis of 31 studies reported in the 12 articles shows 100% significant results with average (median) observed power of 57% and an inflation rate of 43%.  The R-Index is 14.
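
For readers who want to see the arithmetic spelled out, here is a minimal sketch of the R-Index calculation as the quoted passages describe it. The code is ours, not the blog authors’; the function name is purely illustrative, and the inputs are simply the figures quoted above.

```python
# Minimal sketch of the R-Index arithmetic described in the quoted passages
# (illustrative only; this is not code from the Replicability-Index blog).

def r_index(observed_power, success_rate=1.0):
    # The inflation rate is the share of significant results minus the
    # average observed power; the R-Index subtracts that inflation from
    # the observed power.
    inflation = success_rate - observed_power
    return observed_power - inflation

# Worked example from the blog's explanation: 100% significant results with
# average observed power of 75% -> inflation 25% -> R-Index 50%.
print(r_index(0.75))            # 0.5

# Chapter-level figures quoted above: 100% significant results with median
# observed power of 57% -> inflation 43% -> R-Index of roughly 14%.
print(round(r_index(0.57), 2))  # 0.14
```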

Kahneman responded:

The argument is inescapable: Studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for broad hypotheses: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through.
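
Kahneman’s opening point, that underpowered studies of a real effect must sometimes come up non-significant, so a record with no such results is itself suspect, can be made concrete with a small simulation. The sketch below is ours rather than his or the blog authors’; the sample size and effect size are arbitrary choices picked only to land near the power levels the blog reports.

```python
# Illustrative simulation (not from the article or the blog): even when a
# real effect exists, modestly powered studies should miss significance a
# substantial share of the time, so a literature that is 100% significant
# hints at a file drawer.
import random
import statistics

random.seed(1)

def study_is_significant(n=20, true_effect=0.5):
    # One simulated study: n observations of a real effect (d = 0.5),
    # tested with a crude |mean / SE| > 1.96 criterion.
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    se = statistics.stdev(sample) / n ** 0.5
    return abs(statistics.mean(sample) / se) > 1.96

runs = 10_000
power = sum(study_is_significant() for _ in range(runs)) / runs
print(f"simulated power: {power:.2f}")                     # roughly 0.6 here
print(f"expected non-significant share: {1 - power:.2f}")  # roughly 0.4
```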

He concluded:

I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.


Written by Alison McCook

February 20th, 2017 at 12:35 pm

Comments
  • Terence Murphy February 21, 2017 at 2:42 am

    “It’s commendable that someone of his stature would take the time to thoughtfully acknowledge and respond to criticisms of his work in such a transparent way.” Oh, really? I thought our only master was the scientific truth?

    • Kris Rhodes February 21, 2017 at 10:43 am

      I don’t understand what you’re saying, can you expand on it? What is the claim or argument you’re implying with your rhetorical question?

  • Matt Steinglass February 21, 2017 at 10:38 am

    Layman’s query: isn’t it possible there were other studies that found no significant effect, but were not cited in his bibliography? Wouldn’t that lower the inflation?

    • Dan Riley February 26, 2017 at 5:12 pm

      The implication of the low R-index is that there likely were other studies that found no significant effect; including those studies would have lowered the significance of the effect. Kahneman’s reference to the “file-drawer” suggests that those other studies were never published, so he couldn’t have cited them, but he should have suspected they existed.

  • Louis Chartrand February 21, 2017 at 2:35 pm

    Terence Murphy
    “It’s commendable that someone of his stature would take the time to thoughtfully acknowledge and respond to criticisms of his work in such a transparent way.” Oh, really? I thought our only master was the scientific truth?

    I have a similar unease. It’s not that it’s harder for a high-profile researcher to acknowledge error, it’s that it’s easier for such a person to get away without acknowledging it. No lower-profile researcher would ever get away with the kind of ramblings Colin McGinn did to explain his sexual harassment, because s/he’d be shut down from the get-go.

  • Anonymous February 22, 2017 at 8:04 am

    “I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. ”

    Don’t worry, Professor Kahneman: since psychology researchers and journals are allowed to pick and choose what they want to publish (“publication bias”), it’s only a matter of time before a large replication effort finds a significant result and gets published. Then you can start believing again.

  • Jack Walson February 22, 2017 at 12:46 pm

    I was just about to praise Kahneman for his honest and humble approach to dealing with such a mistake, but I don’t think it’s enough. The book is horrible for the claims it makes and it’s deceiving the public by the millions now I guess.
    Words, as honest and nice as they are, are cheap.
    Advising people not to buy the book in its current state and talking to his publisher to stop possible reprints would be something praiseworthy.
    Using his influence in psychology to push for better research standards would be incredibly praiseworthy.

    To read a book by a legend like him and come to the conclusion that a book full of anecdotal evidence would likely be more informative and less erroneous. Just sad.

  • lhac February 23, 2017 at 7:16 am

    There is no such thing as a Nobel Prize in economics, and thus Kahneman is not a Nobel Prize winner.
    The prize he won is called “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” and not “The Nobel Prize in Whatever”, as the original prizes are. It’s about time people stopped calling this a Nobel Prize.
