Is the bulk of fMRI data questionable?

Anders Eklund, via Linköping University

Last week, a study brought into question years of research conducted using the neuroimaging technique functional magnetic resonance imaging (fMRI). The new paper, published in PNAS, particularly raised eyebrows for suggesting that the rate of false positives in studies using fMRI could be up to 70%, which may affect many of the approximately 40,000 studies in the academic literature that have so far used the technique. We spoke to Anders Eklund, from Linköping University in Sweden, who was the first author of the study.

Retraction Watch: What does your study show, and what are its implications?

Anders Eklund: Our study shows that the most common statistical methods used for fMRI analysis are based on assumptions that are not always correct. The implication is that researchers may, for example, have found differences in brain activity between two groups of subjects (e.g., healthy controls and subjects with some disease or condition) when there is actually no difference between the groups.

RW: Why is the rate of false positives you report so drastically different from what other studies that use fMRI have shown?

AE: The statistical methods work well in some cases, yielding close to the nominal 5% false positives, while other cases lead to a much higher false positive rate. The main reason is that the methods fail to properly model the noise from the MR scanner.
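The logic behind measuring these rates can be illustrated with a short resampling experiment. Below is a minimal sketch in Python; the toy data here are computer-generated for brevity, whereas the PNAS paper ran the corresponding analyses only on real resting-state fMRI. The idea: split data that contains no true group difference into two random groups, test for a difference, repeat many times, and count how often the test reports one.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (not the paper's actual pipeline): estimate the
# empirical false positive rate of a two-sample t-test by repeatedly
# splitting null data into two random "groups" with no true difference.
rng = np.random.default_rng(0)
n_subjects, n_analyses, alpha = 40, 1000, 0.05

# Stand-in for one summary measure per subject (e.g., mean activity in a
# region); in the paper this role is played by real resting-state scans.
data = rng.standard_normal(n_subjects)

false_positives = 0
for _ in range(n_analyses):
    shuffled = rng.permutation(data)
    group_a, group_b = shuffled[: n_subjects // 2], shuffled[n_subjects // 2 :]
    _, p_value = stats.ttest_ind(group_a, group_b)
    false_positives += p_value < alpha

print(f"Empirical false positive rate: {false_positives / n_analyses:.3f}")
# When the noise model matches the data, this should land near alpha (0.05);
# the paper found much higher rates for cluster-wise fMRI inference.
```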

RW: How many of the approximately 40,000 studies in the literature that use this technique are likely to be affected by the issue?

AE: It is very hard to know this exactly, mainly because the original data from these studies are not available for re-analysis. Co-author Thomas Nichols has estimated that 3,500 papers could be affected. We are not saying that these 3,500 papers are wrong, as it is very hard to know how the p-values would change under another statistical method.

RW: Have other researchers studied how fMRI data is analyzed before?

AE: Yes, but many researchers have used simulated data (generated by a computer), because it is rather expensive to collect fMRI data (say, 500–1,000 USD per subject). Simulated data cannot mimic all the properties of real data, which is why we used only real fMRI data in this paper.

RW: You’ve reported on this issue before (albeit with a smaller sample) — what did that study show? Is it consistent with the results of your new PNAS paper?

AE: In our previous study we looked at single subject analyses, while in the PNAS paper we look at group analyses. In our previous study we found that the SPM software can also give a high rate of false positives for single subject analysis, and the authors of the SPM software have for this reason made some changes to the software.

RW: fMRI has been around for 25 years — yet you say its statistical methods have never been validated using real data. Why do you think that is the case?

AE: The main reason is that it is expensive to collect fMRI data, while it is very cheap to simulate data with a computer. We therefore downloaded freely available fMRI data from 671 healthy controls, available through international data sharing initiatives like the 1000 Functional Connectomes Project, and thereby saved 500,000–1,000,000 USD.

Another explanation is that computers were not very fast 20 years ago. Today we have much faster computers, and can therefore run many analyses to test the statistical methods.

RW: Some have suggested that because of expense and other reasons, fMRI studies tend to be underpowered and overinterpreted. What do you have to say about this?

AE: Yes, I agree that fMRI studies are in general underpowered, but it is hard to know if the results have been overinterpreted. I think that data sharing is one way to increase the statistical power of fMRI studies (several research labs working together to collect more data). Regarding overinterpretation, one possible solution is pre-registered research protocols, where researchers clearly state what kind of analyses they will perform before the data are collected. Scientific journals must also be willing to publish null findings, as long as the study is well designed and performed.

RW: What, in your opinion, is the solution to these high false positive rates?

AE: In the paper we show that another statistical method, the non-parametric permutation test, gives the expected 5% false positives for almost all settings. The permutation test is based on fewer assumptions, but it takes a little longer to run. Meta-analyses of existing studies (i.e., averaging results over several similar studies) can be one way to investigate whether a result from one study is consistent with other studies. For future research, we hope that researchers will be better at sharing their full statistical results (not only pretty images), their processing scripts, and the original fMRI data. This will make it possible to reproduce the results, and to re-analyze the same fMRI data several years later, when the analysis methods have improved further.
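For readers unfamiliar with the technique, here is a minimal sketch of a two-sample permutation test in Python. The data and effect size below are made up for illustration; a real fMRI group analysis permutes whole subject images and corrects for multiple comparisons across voxels (as, for example, FSL's randomise tool does), which this toy version does not attempt.

```python
import numpy as np

def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    """Two-sample permutation test on the difference in group means.

    A minimal sketch of the idea: instead of assuming a parametric null
    distribution, build one by repeatedly relabelling subjects and
    recomputing the test statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = group_a.mean() - group_b.mean()

    count = 0
    for _ in range(n_permutations):
        perm = rng.permutation(pooled)
        stat = perm[:n_a].mean() - perm[n_a:].mean()
        if abs(stat) >= abs(observed):
            count += 1
    # p-value: fraction of relabellings at least as extreme as observed
    return count / n_permutations

# Hypothetical usage with made-up measurements (true shift of 0.8):
rng = np.random.default_rng(1)
p = permutation_test(rng.standard_normal(20) + 0.8, rng.standard_normal(20))
print(f"p = {p:.4f}")
```

Because the null distribution is built from the data themselves, the test does not depend on parametric assumptions about the noise, which is why it behaved as expected in the paper's evaluations.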


3 thoughts on “Is the bulk of fMRI data questionable?”

  1. One important thing that was not mentioned is that an fMRI analysis of a dead salmon showed apparent brain activity, as covered in Wired, Sept. 18, 2009: “Scanning dead salmon in fMRI machine highlights risk of red herrings.”
