May 20, 2012 at 3:35 pm (Evidence, Miscellaneous) (Anecdotal Experience, EBM, Evidence-Based Medicine, Expert Opinion, Meta Analysis, Observational Studies, Placebo, Randomised Controlled Trial, RCT, Regression To The Mean, Systematic Review)
On the unimpressiveness of personal anecdotes and the usefulness of clinical evidence.
What is Evidence-Based Medicine?
In 1996, Sackett et al. wrote this article – Evidence based medicine: what it is and what it isn’t. The authors open the article by defining evidence-based medicine as “integrating individual clinical expertise and the best external evidence” and go on to describe EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (defining ‘current best evidence’ as clinically relevant research, especially from patient-centred clinical research into the accuracy and precision of diagnostic tests, the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens).
So, while the individual clinical expertise of a doctor is important, it must be combined with the best available external evidence – and to do this one needs an idea of which types of evidence are relatively reliable, and why.
Wikipedia has a very short but still useful article on hierarchies of evidence.
Although there is no single, universally-accepted hierarchy of evidence, there is broad agreement on the relative strength of the principal types of research. Randomised controlled trials (RCTs) rank above observational studies, while expert opinion and anecdotal experience are ranked at the bottom. Some evidence hierarchies place systematic reviews and meta-analyses above RCTs, since these often combine data from multiple RCTs, and possibly from other study types as well. Evidence hierarchies are integral to evidence-based medicine.
Why are RCTs at (or near) the top of the tree and anecdotal experience at the bottom? Firstly, there is a danger with anecdotal experience that you might forget the negative anecdotes and remember only the positive ones. There’s the risk of various forms of cognitive bias affecting the gathering or presentation of evidence. There is certainly, in the case of a medical therapy, a danger that your anecdotal experience of success following treatment with therapy x or y might be misleading – perhaps the patient would have got better anyway (regression to the mean, fluctuation of symptoms, spontaneous improvement), or maybe there was a placebo effect or it was something other than x or y that was responsible for the recovery (perhaps another therapy that you were trying at the same time).
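Regression to the mean is easy to underestimate, so here is a minimal simulation of it. All the numbers (baseline score of 50, day-to-day noise, the threshold of 65 for seeking treatment) are hypothetical, chosen purely for illustration: people who seek help on an unusually bad day will, on average, score better when re-measured later, even if nothing whatsoever is done to them.

```python
import random
import statistics

random.seed(42)

# Hypothetical model: each person's symptom score on a given day is a stable
# baseline plus day-to-day fluctuation. People tend to seek therapy on a bad day.
N = 10_000
baselines = [random.gauss(50, 5) for _ in range(N)]

def measure(baseline):
    """One day's symptom score: the stable baseline plus random fluctuation."""
    return baseline + random.gauss(0, 10)

# Day 1: everyone is measured; only those with severe scores "start therapy".
day1 = [measure(b) for b in baselines]
patients = [i for i, score in enumerate(day1) if score > 65]

# Day 2: re-measure the same patients - no treatment of any kind was given.
day2 = [measure(baselines[i]) for i in patients]

before = statistics.mean(day1[i] for i in patients)
after = statistics.mean(day2)
print(f"mean score before: {before:.1f}")
print(f"mean score after:  {after:.1f}")  # lower, despite no treatment
```

Anyone in this simulated cohort who happened to start therapy x on day 1 would sincerely report that it "worked" – which is exactly why an untreated comparison group is needed.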
This is why controlled trials are conducted. A control group is needed to allow comparison with the treatment group. If the people in the treatment and control groups have similar rates of improvement then that would suggest that the treatment was not responsible for any improvement – because it cannot have been responsible for the improvement in those in the control group.
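The logic of the comparison can be sketched in a few lines. The recovery rate (60% within a month) and group size are hypothetical, and "therapy x" here is deliberately inert: both groups improve at similar rates, so the improvement seen in the treated group cannot be credited to the therapy.

```python
import random

random.seed(0)

# Hypothetical condition from which ~60% of people recover within a month
# regardless of what is done (spontaneous improvement).
def recovers():
    return random.random() < 0.60

n = 500
treatment_recovered = sum(recovers() for _ in range(n))  # given inert "therapy x"
control_recovered = sum(recovers() for _ in range(n))    # given nothing

print(f"treatment group: {treatment_recovered}/{n} recovered")
print(f"control group:   {control_recovered}/{n} recovered")
# Similar rates in both groups: no evidence the therapy itself did anything,
# even though most treated patients "got better" after taking it.
```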
The two groups – control and treatment – are randomised. Randomisation reduces the risk of there being differences between the two groups – in effect, it stops you from cherry-picking promising patients for your treatment group and biasing the study. Lack of randomisation risks imbalances in baseline characteristics between the two study groups – what if you are studying a condition that is linked to smoking and there are more smokers in the control group? If smokers with this condition have a poorer prognosis than non-smokers then this imbalance will have an effect on the results: it will bias the study in favour of the treatment.
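A quick sketch of why randomisation helps with the smoking example above. The cohort size and smoking rate are hypothetical; the point is that a random split leaves the proportion of smokers roughly equal in the two arms without anyone having to choose (in a real trial the residual chance imbalance shrinks as the sample grows, and stratified randomisation can control it further).

```python
import random

random.seed(1)

# Hypothetical cohort: 1,000 patients, of whom ~40% smoke.
patients = [{"smoker": random.random() < 0.40} for _ in range(1000)]

# Randomise: shuffle the cohort, then split it down the middle.
random.shuffle(patients)
treatment, control = patients[:500], patients[500:]

def smoker_rate(group):
    """Proportion of smokers in a group."""
    return sum(p["smoker"] for p in group) / len(group)

print(f"smokers in treatment group: {smoker_rate(treatment):.1%}")
print(f"smokers in control group:   {smoker_rate(control):.1%}")
# Both proportions sit close to 40%, so smoking (and any confounder,
# measured or not) is balanced across the groups on average.
```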
Ideally, your randomised controlled trial will be double-blind. This means that neither the subjects nor the researchers know who is in which group. If you know you are in the treatment group then you will likely have an expectation of improvement. If the researcher knows you are in the treatment group they might influence your expectations, or this knowledge might have an influence on their reporting of your progress.
The control group, the randomisation, and the blinding of participants and researchers are all attempts to minimise bias. There is no minimisation of bias in anecdotal experience, and there are many forms of cognitive bias that may make anecdotal experience unreliable. You cannot trust the evidence of your own eyes – because that evidence has been filtered by a brain prone to a huge number of cognitive biases; there is even a cognitive bias of failing to compensate for your own cognitive biases, the bias blind spot.
Now, if proficient doctors – with clinical experience to aid their judgement – consider it necessary to use the best available external evidence, what might make a layperson think that they would be able to (without clinical experience, without medical training, and without taking into account the best available evidence) judge whether or not a treatment worked for them? I’m not sure. But some most certainly do think that they are able to come to conclusions on the efficacy of a treatment – without having any regard for their lack of training or lack of experience, and ignoring reliable evidence in the form of clinical trials. I know this because they keep popping up in the comments section of my blog telling me that vitamin pills cured their histadelia or homeopathy worked for them. Perhaps the Dunning-Kruger effect is relevant?
Some links on EBM and hierarchies of evidence:
SUNY: http://library.downstate.edu/EBM2/2100.htm (an EBM tutorial from a Brooklyn medical centre)
Warwick: http://www2.warwick.ac.uk/services/library/tealea/sciences/medicine/evidence/hierachcy/ (EBM tutorial from Warwick Uni)
JAMA: http://jama.jamanetwork.com/article.aspx?volume=284&issue=10&page=1290 (“Unsystematic clinical observations are limited by small sample size and, more importantly, by limitations in human processes of making inferences […] Given the limitations of unsystematic clinical observations and physiologic rationale, EBM suggests a hierarchy of evidence […] Clinical research goes beyond unsystematic clinical observation in providing strategies that avoid or attenuate the spurious results.”)
It is always worth visiting the virtual James Lind Library.