“Utterly awful:” David Gorski weighs in on yet another paper linking vaccines and autism

David Gorski, via Wayne State

Retraction Watch readers may be forgiven for thinking that there has been at least a small uptick in the papers that claim to link autism and vaccines, and yet tend to raise more questions than they answer. Sometimes, they are retracted. See here, here and here, for example. We talk to David Gorski, well known for his fights against pseudoscience, about the most recent example.

Retraction Watch (RW): You describe a recent paper reporting high levels of aluminum in the brains of people with autism as “utterly awful.” What are your main criticisms of the paper?

David Gorski (DG): There are no controls. Means are used instead of medians. There was no attempt even to explain why there were huge variations in readings for their tissue replicates. I can’t comment on the details of the fluorescence microscopy images, but talking to people I know who do have expertise there, I find them unimpressive.

RW: The paper appears in Journal of Trace Elements in Medicine and Biology, an Elsevier publication with an impact factor of 3.225. Are you surprised these problems weren’t picked up by the editors and reviewers?

DG: I don’t know how that journal works, but the short time frame between submission, revision, resubmission, and publication makes me suspicious that the peer review was not what it should be. The paper was submitted on October 26, a revised version was resubmitted on November 21, and the final was accepted on November 23 — and published online November 26. That’s an awfully quick turnaround.

Also, the lead author, Christopher Exley at Keele University, sits on the editorial board of the journal, which makes me wonder if there’s a sufficient firewall between the editorial board and the review process. But I don’t know. I only see red flags.

RW: You compare the lead author of this paper — Christopher Exley — with the lead author of another paper allegedly linking autism to components of vaccines, Christopher Shaw, who has had to retract two papers for data issues. What are the similarities between them, besides the subject matter?

DG: Both started out as reasonable scientists and got sucked into the maw of bad antivaccine science. Both have been funded extensively by the rabidly antivaccine Children’s Medical Safety Research Institute. Both claim not to be antivaccine but regularly say things that show they are, and both show up in antivaccine propaganda films: Shaw was in The Greater Good, while Exley was recently in the documentary Injecting Aluminum. They’re both popular now in various antivaccine groups such as Autism One, a conference devoted to promoting the discredited idea that vaccines cause autism.

RW: We have covered a number of retractions of papers reporting findings suggesting vaccines may cause health problems. Does this surprise you?

DG: Of course not. At best these studies are virtually always incredibly bad science. At worst, they can be fraudulent.


22 thoughts on ““Utterly awful:” David Gorski weighs in on yet another paper linking vaccines and autism”

  1. Whatever the merits of the paper and the research it reports, publishing in a journal on whose Editorial Board one sits is not evidence of misbehavior. Being on the board is usually a sign of eminence, and of service to the journal. As an editor, I am generally pleased to have members of our Board submit to the journal. These articles go through the normal review (double-blind) process.

    1. I think Gorski mentions the Editorial Board issue as one of several explanations of how such a presumably bad (I haven’t read it, so can’t comment on that) paper made it into the journal.

      I did note another paper in the same forthcoming issue that links thimerosal to ADHD, https://doi.org/10.1016/j.jtemb.2017.11.001. Googling the authors and their affiliations was an interesting experience. The kindest thing to say is that the obtained information did not increase my confidence in the conclusions drawn in that paper…

    2. Unless it is an editorial, submitting to a journal where one is a board member presents an inherent conflict of interests, double-blind process or not. Besides, the journal in question does not offer double-blind peer review.

      As per section 2.1.3 Conflicts of Interest (https://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/2-1-editor-roles-and-responsibilities/) by the Council of Science Editors
      “Also, editors should submit their own manuscripts to the journal only if full masking of the process can be ensured (e.g., anonymity of the peer reviewers and lack of access to records of their own manuscript)”

      This is impossible as any handling editor who knows his reviewer pool will be tempted to submit such a manuscript to an “easier” reviewer.

      An interesting “case study” was published on the subject a few years ago here (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0083709)

    1. It is the other way around: they actually used means with rather large SDs, while it would have been more appropriate to use medians and non-parametric statistics. In general, though: N=5, no controls, no CIs, no history or environmental context, no blind quantitative image analysis…

    2. You read my mind!!! Medians are most certainly less affected by outliers than means. The median is the better choice for reporting in all cases.

    3. Medians can be less sensitive to occasional extreme values that are genuinely not part of the distribution, like a data entry or measurement error. However, medians are much more biased than means with skewed distributions when the samples are small. And when the samples are big they’re just measuring different things (the median of household income is a very different thing than the mean). Also, in a field where most people use means a paper that uses medians looks like researchers testing means, failing to find what they want, and then testing medians.
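    The outlier sensitivity discussed in this thread is easy to see with a small sketch using hypothetical numbers (not taken from the paper): a single extreme replicate drags the mean far above the typical reading, while the median stays near the two low values.

    ```python
    # Hypothetical replicate readings for illustration only; the values and
    # units (µg/g dry weight) are invented, not drawn from the paper.
    from statistics import mean, median

    # Two low readings and one extreme outlier, mimicking the within-replicate
    # variation the paper's critics describe.
    replicates = [0.8, 1.1, 22.5]

    print(mean(replicates))    # ~8.13 — pulled up toward the outlier
    print(median(replicates))  # 1.1  — stays near the typical low readings
    ```

    Whether the median is the "right" summary still depends on whether the outlier is a measurement artifact or a real local accumulation, which is exactly the point of contention here.
    
    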

  2. Medians are indeed useful, especially with highly skewed distributions, as Nahhf suggests.

    However, I see no mention of medians in the article, nor in the supplementals. So that’s an unfortunate side-tracking slip-up in commentary.

    The article is indeed a poor attempt at something resembling a scientific article, for other reasons mentioned here and in the PubPeer comments for this article: lack of controls and very small sample size being major flaws. With much “data” generated for such a small sample, something spurious is likely to appear.

  3. Dr. Gorski covered this paper in more detail here: https://sciencebasedmedicine.org/move-over-christopher-shaw-theres-a-new-antivaccine-scientist-in-town/ The authors took pains to mention that pediatric vaccines include aluminum adjuvant and that human exposure to aluminum has been implicated in the development of autism spectrum disorders (ASD). Their study involved analysis of brain tissue from 5 patients with ASD – a very small number and with no controls as Dr. Gorski points out. They found aluminum in the brain tissue – low levels in two replicates and a very high level in one replicate (in some cases), resulting in a “mean” aluminum level that registered as high. Meanwhile, they did nothing to establish whether the patients had ever been vaccinated or whether exposure to unusual amounts of aluminum had occurred before the onset of disease (aluminum is ubiquitous in the environment and there are multiple opportunities for exposure). So, the paper smells bad. There are plenty of other problems with it.

  4. Instead of a control they compared the amounts to what they called “pathologically benign” (approx. ‘normal’) from a 60 brain study. Is there a problem with that?

    Aluminum was found at varying amounts in different samples, but this could simply mean that it is not evenly distributed throughout the brain, which seems like an interesting finding. Should we expect metals that don’t belong in the brain to be evenly distributed in every situation?

    And I’m not sure why it matters what the source of exposure was. Even if, for example, a person had been vaccinated with aluminum, had eaten from aluminum cookware or dishes, lived in an area of high aluminum in the soil, and lived near an aluminum refinery, it would still be hard to pinpoint the source of the aluminum in the brain (especially as the rate of absorption via inhalation, ingestion and injection would vary dramatically). And with so few samples (normal for a study that requires extensive examination of body parts) it probably would be impossible to draw statistically significant conclusions.

    1. With regard to the 60 brain study constituting “controls,” the authors make no attempt to ascertain whether the 60 brain study subjects came from the same population as the ASD cases. Further, without appropriate statistical and background investigation, a sample of 60 brains may or may not provide sufficient accuracy and precision.

      With regard to the varying amounts of aluminum, the authors specifically used the word “replicate,” not “samples.” The purpose of replicates is to assess the quality control of the experiment, not to facilitate causal inference. The level of variation within replicates is what the paper’s critics are concerned about because it suggests the experiment did not have good quality control.

      Your question concerning the importance of the source of aluminum is best directed to the authors, since the authors attempted to identify a putative link between vaccines and ASD.

  5. On the means-versus-medians issue, I wonder if something was inadvertently misstated during the interview or transcription process.

    On Dr. Gorski’s blog post critiquing the article, he pretty emphatically criticizes the paper’s use of a mean – not a median – to summarize data sets that tend to have a small number of distant outliers that skew the reported means upward.

  6. Regarding the mean (used by the paper) versus median (recommended by Gorski), if you have a large number of samples, most of which have a very low value of something, and a small number with a high value, the median will tell you that the ‘average’ value is very low, which could be misleading if you had a toxin that tended to congregate in a small number of places. I’m not sure a mean or median for three values is useful, but the paper gives an overall mean per donor, and in total for all 5, and there the mean is clearly more useful than the median, which would just indicate that aluminum levels are in the normal range (i.e. around 1).

    1. If you have “a large number of samples, most of which have a very low value of something, and a small number with a high value” then either your high values are outliers (which should be demonstrated statistically), or your measurements represent two populations (which should also be demonstrated statistically). Considering the experimental design used in this paper, no statistical analysis is useful because the dataset is garbage.

      1. If you took population density samples from an unknown planet you might find radically different densities at different locations. I think that would be telling you something important, not that the planet was garbage.

      1. The difference between a “sample” and a “replicate” is irrelevant. They’re two different words. I agree that, in this context, “sample” would be a better word, but it doesn’t change anything. Three samples (or replicates) were taken from each area in each brain, and they had quite different values. The important question is why the values were so different. It could be a methodological problem, or it could be because aluminum is not distributed evenly throughout the brain.

        1. There actually is a big difference. From each lobe they had one sample that they cut into three replicates. Now one of these shows a massive outlier compared to the other two. You might say there was an accumulation in part of the sample, but the paper does not account for the outlier.
          If it is the case, as you state, that aluminium is not evenly distributed through the brain, then the mean values are meaningless, since they more or less assume an even distribution.

          “the paper gives an overall mean per donor, and in total for all 5, and there the mean is clearly more useful than the median, which would just indicate that Aluminum levels are in the normal range”

          If I read this correctly, you want the research to show high levels, and the analytical method should be chosen to achieve that. I would think you should choose the method that gives the most unbiased result, because you want to know what’s going on…
          Of course, in this study the choice between means and medians is moot as far as I’m concerned without an extensive explanation for the outliers… otherwise GIGO

  7. My interview with Professor Chris Exley answers a lot of the questions posed here (thanks to the people in this discussion for posing good questions). Some of the answers are surprising, such as why mean values were included.

    My biggest question for Retraction Watch is why a critic of the paper was interviewed, but not Chris Exley or another co-author.
