Are expert predictions more accurate than predictions generated by computer analysis of data? I’ve been reading Ian Ayres’ book Super Crunchers, which looks at this question in a chapter titled “Experts versus Equations”.
Examples of experts versus equations run throughout the book: a regression equation that can predict the future price of wine better than world-famous wine experts; some nifty number crunching that picks successful baseball players more accurately than professional scouts can; and a guy named Chris Snijders taking on experienced buyers (“purchasing professionals”) in a study published in Purchasing and Supply Management 191 (2003). Abstract; abstract of a follow-up study.
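As an illustration of the basic machinery behind “a regression equation that can predict the future price of wine” (Ayres describes a model built on growing-season weather; the variables and numbers below are made up purely for the sketch), an ordinary least-squares fit takes only a few lines:

```python
# Illustrative only: a one-variable ordinary least-squares fit -- the kind of
# "equation" being set against the experts. The data are invented and stand
# in for (growing-season temperature, price) pairs.

def ols_fit(xs, ys):
    """Return (slope, intercept) minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data constructed so that price = 2 * temperature + 1
temps = [15.0, 16.0, 17.0, 18.0, 19.0]
prices = [31.0, 33.0, 35.0, 37.0, 39.0]

slope, intercept = ols_fit(temps, prices)
print(slope, intercept)  # the fit recovers 2.0 and 1.0
```

Once fitted, the equation makes a prediction for any new vintage mechanically, with no judgement involved — which is exactly what makes the head-to-head comparison with experts possible.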
[Aside: The book also looks at testing social policy in the US (I wrote about recidivism and sentencing recently), and there is also work being done on testing social policy in the UK. There is a link at the bottom of this page to a paper about RCTs in social intervention in the UK, and in 2007 Ben Goldacre wrote about Sheila Bird, a professor and statistician who wished to run controlled trials on social policy.]
It is possible to test whether experts perform better than number-crunching machines by comparing their success in making predictions. Paul Meehl compared clinical with statistical prediction, and his early conclusion was that “the literature strongly favors statistical prediction”. William Grove considers that this conclusion “has stood up extremely well” in this article from the Journal of Clinical Psychology – Clinical Versus Statistical Prediction: The Contribution of Paul E. Meehl. [PDF]
I recall an exchange with John Briffa that seemed to be an appeal to authority by a man who considers himself an expert on the basis of his own experiences with patients – he also asked whether my interest in medicine was an essentially ‘academic’ pursuit, and he is fond of writing about what he finds to work “in practice”. It is not uncommon in alternative medicine to claim that the statistics may well show one thing but that I, the practitioner, find the situation to be quite different; homeopathy is an obvious example. I’m always interested when people claim to know things ‘from experience’ or to know that something works ‘in practice’, as I wonder what makes this experience different from, and more reliable than, simple anecdote. Presumably it means that anecdata from an expert is more valuable than anecdata from non-experts. Is it more valuable than statistical evidence, though?
Experts are not quite as expert as they believe themselves to be. In Super Crunchers, Ayres relates a conversation with Theodore Ruger in which Ruger is asked whether he could ‘beat the machine’; he begins by saying “I should be able to”, before correcting himself. He had momentarily fallen into the same trap as other experts: overestimating his ability to make predictions in comparison with a statistical model. Ruger wrote an essay in the Columbia Law Review showing (in terms of the Supreme Court’s decisions) that “The model predicted 75% of the court’s affirm/reverse decisions correctly, while the experts collectively got 59.1% right”. If a guy who knows that the model tends to perform better than the man (in terms of prediction) can fall into this trap, then it is probably an easy trap to fall into, and I shouldn’t be surprised when I see other experts do likewise.
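The figures in that comparison are just accuracy: the fraction of affirm/reverse calls that matched the court’s actual decisions. A tiny sketch makes the measure concrete (the case outcomes and calls below are invented, not from Ruger’s study):

```python
# Accuracy as used in model-versus-expert comparisons: the proportion of
# predictions that matched the actual outcome. All data here are invented.

def accuracy(predictions, outcomes):
    """Fraction of predictions equal to the corresponding outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

outcomes = ["affirm", "reverse", "reverse", "affirm", "reverse",
            "affirm", "reverse", "reverse"]
model_calls = ["affirm", "reverse", "reverse", "affirm", "affirm",
               "affirm", "reverse", "affirm"]   # 6 of 8 correct
expert_calls = ["reverse", "reverse", "affirm", "affirm", "affirm",
                "affirm", "reverse", "affirm"]  # 4 of 8 correct

print(accuracy(model_calls, outcomes))   # 0.75
print(accuracy(expert_calls, outcomes))  # 0.5
```

The appeal of the measure is its bluntness: both the model and the experts commit to a call before the outcome is known, so neither side can retrospectively claim that its judgement was subtler than the scoreboard suggests.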
Snapshot of Ruger’s Essay. [Google Scholar’s page].