In an article on MMR and measles in the June issue of What Doctors Don’t Tell You (WDDTY), Bryan Hubbard reports on the DeStefano et al paper that found no association between autism and the number of antigens children receive from vaccination. That is what the paper actually found. What Hubbard reports is something quite different. I have no idea how Mr Hubbard manages to get it so wrong. I’d have thought pretty much anyone would be able to figure out what the researchers studied, but apparently not. Now, I’m no expert – far from it – but I think even an ignorant layman like me can work out what research question the authors were investigating.
Here’s what he wrote:
Experts say the vaccine most definitely does not cause autism—but are they right? The latest study attempting to disprove any link has been funded by America’s health agency, the Centers for Disease Control and Prevention (CDC), and was designed to finally put to rest parents’ continued concerns. By analyzing blood samples from 256 children with autism and 752 healthy children of similar ages, researchers concluded that the MMR did not cause autism.1 However, no such conclusion can be drawn. The CDC researchers used data that had been discredited two years earlier, and their analysis was so loose that the results could suggest anything from the possibility that the vaccine had a 69 per cent protective effect—in other words, roughly two out of every three children given the vaccine could be less likely to develop autism—to a 472 per cent causative effect, suggesting that children were nearly five times more likely to develop autism after the vaccination. The safest interpretation is that the MMR increases the risk of autism by 5 per cent—but that’s still a statistical sleight of hand because the reality is that it has zero effect in most children, but is a definite cause in a small minority.
Several researchers have looked at MMR and autism and found no association. The authors of the paper cited by Hubbard didn’t conclude that MMR does not cause autism. In fact, they didn’t even attempt to answer that question (which should come as little surprise given the number of researchers who have already done so – and the consistency of their results). What they did was to look at whether increasing exposure to antigens (i.e. antigens from all vaccines rather than a specific vaccine) was associated with autism (it wasn’t). Most of the exposure comes from vaccines other than MMR – particularly whole cell vaccines. This should all be quite clear to anyone who has read the paper. That Hubbard fundamentally fails to understand what the researchers were actually studying did not fill me with confidence regarding his other comments on this paper. Still, I thought I’d better take a look.
Hubbard claims that the data used by the researchers had been discredited two years earlier. Unfortunately, he neglects to tell readers who is supposed to have discredited this data or how. We’re left to take his word for it. Personally, I’m not inclined to do so. A friend helpfully pointed me at an article criticising the paper the data came from (whether this is what Hubbard relies upon, I don’t know). I don’t think criticising a paper is the same thing as discrediting the raw data that it is based on. In case you’re interested, the article I was pointed to was covered by Orac here. Perhaps Hubbard based this claim on a different article? I can’t say, as he doesn’t give any indication of the source for his claim.
Then we have the figures of a 69% protective effect or a 472% causative effect. What Hubbard appears to have done is to completely ignore the adjusted odds ratios and look instead at the unadjusted odds ratios (which take no account of biases or confounding*), and to cherry-pick a couple of figures that suit his purpose. The crude, unadjusted odds ratios are given (in two tables, for two different measures – cumulative exposure over time and maximum exposure in one day), along with 95% confidence intervals. The bounds of these confidence intervals run as low as 0.03 and as high as 5.72, and one happens to be 0.31 – I assume this is where Hubbard got his 69% protective figure from, since an odds ratio of 0.31 corresponds to a 69% reduction in the odds and one of 5.72 to a 472% increase. And if you look at the paper, the 0.31 and 5.72 figures aren’t even from the same table: one comes from cumulative exposure and the other from maximum exposure in one day.
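For what it’s worth, the arithmetic that turns an odds ratio into a “percent effect” figure is trivial. Here’s a minimal sketch in Python (the function name is mine; the 0.31 and 5.72 inputs are the confidence-interval bounds from the paper discussed above) – bearing in mind that describing an odds ratio as a percent change in risk is itself loose, since odds are not risks:

```python
def odds_ratio_as_percent_effect(odds_ratio):
    """Express an odds ratio as a percent change in the odds.

    Values below 1 read as a 'protective' effect; values above 1
    read as increased odds.
    """
    return (odds_ratio - 1) * 100

# The two confidence-interval bounds Hubbard appears to have picked:
print(round(odds_ratio_as_percent_effect(0.31)))  # -69: the "69 per cent protective effect"
print(round(odds_ratio_as_percent_effect(5.72)))  # 472: the "472 per cent causative effect"
```

Of course, applying this arithmetic to the extreme bounds of two different unadjusted confidence intervals is precisely the problem.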
Even if using figures taken from the confidence intervals of unadjusted odds ratios could be justified (and I’d like to see Hubbard attempt this), I don’t see how anyone could possibly justify picking the two specific figures that Hubbard has chosen. I could at least see how someone who didn’t understand what they were looking at might think it was logical to choose the lowest and highest numbers they could find in one of the tables – and perhaps even that they might think they could draw a meaningful conclusion from those figures – but Hubbard has picked the highest figure he could find and one seemingly random, lower number. He’s also picked them out from different tables, showing different things (cumulative and maximum exposure).
In his final sentence, Hubbard stops trying to torture the paper to make it tell him what he wants to hear and moves on to a different tactic – simply making things up. The claims that MMR increases the risk of autism by 5% and is a “definite cause in a small minority” do not come from the DeStefano paper. They’ve been plucked out of thin air by Hubbard.
As far as I can tell, the only thing Hubbard has got right is the number of subjects in the study. The rest of his commentary is nonsense from start to finish.
*There’s a little asterisk beside the last heading in each of the tables that Hubbard’s figures come from. It directs you to an explanatory note in which the authors explain that the odds ratios were adjusted for a number of covariates, including things like birth weight, maternal age and maternal exposures during pregnancy. Now, as I understand it, when confounding is present, it is the adjusted odds ratio that should be reported. This is what the authors did: if you look at the abstract, the results section reports only the adjusted odds ratios. I think Hubbard would find it difficult to justify using the unadjusted odds ratios at all, let alone picking the highest one he could find. That he chose to select a figure taken from the confidence interval of an unadjusted odds ratio is, well, baffling.
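For readers unfamiliar with the terminology: a crude (unadjusted) odds ratio is just the cross-product of a 2×2 table of exposure against outcome, with a confidence interval conventionally computed on the log scale. A minimal sketch in Python, with made-up counts purely for illustration (nothing here is taken from DeStefano et al.):

```python
import math

def crude_odds_ratio(exposed_cases, exposed_controls,
                     unexposed_cases, unexposed_controls):
    """Crude odds ratio from a 2x2 table, with a 95% CI on the log scale."""
    odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR) via the usual cell-count approximation:
    se_log = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                       + 1 / unexposed_cases + 1 / unexposed_controls)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log)
    return odds_ratio, lower, upper

# Hypothetical counts, not from the paper:
print(crude_odds_ratio(20, 80, 30, 120))
```

An adjusted odds ratio, by contrast, is typically estimated with something like logistic regression that includes the covariates – which is why the adjusted figures, not the crude ones, are what the authors report in the abstract.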