For me, the idea that the position of the stars can shape our personalities or our destinies doesn’t feel that bizarre – probably because it’s so familiar. (See, for example, its influence on language and literature, or the horoscopes that appear in pretty much every newspaper.) Some people have been sufficiently interested in the claims of astrologers to test how accurate those claims are when it comes to predicting people’s personalities or their futures.
In Enemies of Reason, Richard Dawkins conducted a fun little test: 20 people were each asked to read that week’s horoscope for Capricorn, but were told that it was the horoscope for their own star sign. Half of them agreed that the horoscope was accurate for them, yet all but the sole Capricorn in the group should have disagreed. (As it turned out, the Capricorn was one of those who thought the horoscope did not apply to them.)
McGrew and McFall conducted an experiment to see whether expert astrologers could match astrological birth charts to case files containing information on the subjects’ life histories, full-face and profile photographs, and personality profiles:
The results were clear-cut. Six expert astrologers failed to do significantly better than chance or than a non-astrologer control subject at matching birth information to the corresponding case materials for 23 individuals. The astrologers and control subject also did no better at the matching task than ten judges who attempted to rank order the ages of the 23 test cases solely on the basis of photographs. Astrologers’ predictive accuracy was unrelated to their level of confidence in their predictions. Furthermore, there was little or no predictive agreement among the astrologers, even though the astrologers purported to be using the same system and methods to arrive at their predictions. Overall, the astrologers probably could have done just as well if they had matched the birth information with the case materials in a random manner.
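One way to see what “no better than chance” means here: if you pair n birth charts with n case files completely at random, the expected number of correct pairings is exactly 1, whatever n is – a standard fact about random permutations. A quick simulation (mine, not from the paper) illustrates the baseline the astrologers were up against:

```python
import random

def correct_matches(n):
    """Randomly pair n birth charts with n case files; count exact matches."""
    pairing = list(range(n))
    random.shuffle(pairing)
    return sum(chart == case for chart, case in enumerate(pairing))

random.seed(0)
trials = 100_000
avg = sum(correct_matches(23) for _ in range(trials)) / trials
print(round(avg, 2))  # hovers around 1.0 – random matching gets about one of 23 right
```

So even a “successful” random matcher would typically get only one of the 23 cases right, which is the bar the six experts failed to clear.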
Note that the astrologers’ predictive accuracy was unrelated to their confidence in their predictions. We tend to put more trust in people who make confident predictions, but perhaps we shouldn’t: people who make poor predictions or reach erroneous conclusions quite often have an unreasonably high degree of confidence in their abilities. The astrologers in this study performed no better than chance, yet when faced with this result they reacted not by modifying their views but by making excuses.
. . . in many cases, the correct answer contained the attributes we had chosen, but in a different [astrological] position. . . . one big mistake was in agreeing to use young subjects. This was the Saturn/Neptune conjunction group, of course, which produced many ‘lost souls’ . . . Like medicine, the law, and theology, astrology may not always give quantifiable results – but it works, nonetheless. (Mull, 1986)
This reaction is to be expected. The final excuse quoted above will probably be familiar to anyone who has tried to discuss homeopathy with an advocate – it doesn’t work in trials, but it works for them.
When I wrote about homeopathic excuses, arguing that it was possible for there to be a placebo effect in animals, someone actually spent time typing (or maybe just copying and pasting) the following in the comments section:
there’s no way that the animal can be responding on a placebo, as it doesn’t know it’s being given something […] the answer to your question is, yes, it’s highly implausible that the adult humans can be fooled as you suggest, regardless of what Clever Hans may have done. I appreciate that that’s most disappointing, and that you really don’t want to have to believe this.
I responded to these claims – that it was impossible for an animal to respond to a placebo, and implausible that adult humans could be fooled – by looking up trials of homeopathy for cows and commenting on what I’d found. I pointed out that in one trial, while the effects of antibiotic treatment could be distinguished from placebo, the effects of homeopathy could not.
The reaction? First, the commentator claimed that the studies were irrelevant, as the subjects were human. The subjects of each of the studies I’d commented on were in fact cows. This gave some indication as to how closely the commentator had read the papers.
Once they understood that the papers in question were actually addressing homeopathy for cows, they began to form new objections. Remedies in one trial were pre-selected, another trial used prophylaxis (why this would be a problem when the commentator in question had previously claimed to use prophylaxis on his own farm, I’m not sure), the third trial was too small and (gasp!) relied on RCT methodology. Apparently, a large number of anecdotes are better evidence than a controlled trial. Here’s a brief snippet from one of the (very long) comments they left:
When I refer dismissively to the RCT methodology in terms of trialling homoeopathy, I do so because it is not applicable compared to the cases where one might be trialling an individual remedy.
The complaint about RCTs being incompatible with individualisation was particularly impressive, given that this was an excuse I’d addressed in the homeopathic excuses post that the commentator had previously responded to (prompting my post on homeopathy for cows).
Homeopaths and astrologers aren’t alone in preferring to make excuses rather than deal with disconfirming evidence by modifying their beliefs: advocates of the hypothesis that XMRV is linked to ME/CFS are currently struggling to reconcile their belief with the disconfirming evidence of Paprotka et al.
One online critic claimed that “…in Paprotka they describe two types of PCR. One is quantitative real-time PCR and one is a qPCR.” I wasn’t entirely sure how this would have supported their argument, but it turned out to be a misunderstanding in any case. The corresponding author stated (in an email) that “…our paper defined qPCR as quantitative real-time PCR, and when we say qPCR we are referring to the real-time PCR experiments”, but the critic seemed disinclined to accept that the corresponding author understood his own paper better than they did. It seemed to me that this was an example of somebody who had an unreasonably high degree of confidence in their assertions.
Authors of other papers on XMRV have engaged with commentators on the internet. Dusty Miller gave a statement to the blog CFS Central, and provided answers to those commenting on the blog. He even took the time to explain why a poster on ME/CFS Forums was wrong to claim that “only 30% of the 22Rv1 envelope has been sequenced”. Miller said that this would be an excusable mistake if the poster “didn’t trash the opinions of all experienced virologists and portray himself as knowing all the answers”.
Ah, the Arrogance of Ignorance.
Image credit: Chris Brennan