A Rough Guide to Evidence-Based Medicine

May 20, 2012 at 3:35 pm (Evidence, Miscellaneous)

On the unimpressiveness of personal anecdotes and the usefulness of clinical evidence.

What is Evidence-Based Medicine?

In 1996, Sackett et al. wrote the article Evidence based medicine: what it is and what it isn’t. The authors open by defining evidence-based medicine as “integrating individual clinical expertise and the best external evidence” and go on to describe EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (defining ‘current best evidence’ as clinically relevant research, especially patient-centred clinical research into the accuracy and precision of diagnostic tests, the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens).

So, while the individual clinical expertise of a doctor is important, it’s necessary to incorporate the best available evidence – and to do this it is necessary to have an idea of what type of evidence might be relatively reliable and why.

Evidence hierarchies

Wikipedia has a very short but still useful article on hierarchies of evidence.

Although there is no single, universally accepted hierarchy of evidence,[1] there is broad agreement on the relative strength of the principal types of research. Randomized controlled trials (RCTs) rank above observational studies, while expert opinion and anecdotal experience are ranked at the bottom. Some evidence hierarchies place systematic reviews and meta-analyses above RCTs, since these often combine data from multiple RCTs, and possibly from other study types as well. Evidence hierarchies are integral to evidence-based medicine.

Why are RCTs at (or near) the top of the tree and anecdotal experience at the bottom? Firstly, there is a danger with anecdotal experience that you might forget the negative anecdotes and remember only the positive ones. There’s the risk of various forms of cognitive bias affecting the gathering or presentation of evidence. There is certainly, in the case of a medical therapy, a danger that your anecdotal experience of success following treatment with therapy x or y might be misleading – perhaps the patient would have got better anyway (regression to the mean, fluctuation of symptoms, spontaneous improvement), or maybe there was a placebo effect or it was something other than x or y that was responsible for the recovery (perhaps another therapy that you were trying at the same time).

This is why controlled trials are conducted. A control group is needed to allow comparison with the treatment group. If the people in the treatment and control groups have similar rates of improvement then that would suggest that the treatment was not responsible for any improvement – because it cannot have been responsible for the improvement in those in the control group.
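This is easy to see with a toy simulation. The sketch below (in Python, with invented numbers – the 60% spontaneous-improvement rate and the group sizes are assumptions for illustration, not data from any trial) shows how a completely ineffective treatment can look impressive if you only count recoveries in the treated group:

```python
import random

random.seed(42)

def simulate_group(n, improve_prob):
    """Count how many of n patients improve, given a per-patient probability."""
    return sum(random.random() < improve_prob for _ in range(n))

# Hypothetical scenario: the condition resolves on its own in roughly 60% of
# patients, and the treatment adds nothing (same probability in both arms).
n = 1000
treated_improved = simulate_group(n, 0.6)
control_improved = simulate_group(n, 0.6)

print(f"Treatment group: {treated_improved}/{n} improved")
print(f"Control group:   {control_improved}/{n} improved")

# "Hundreds of my patients got better on this treatment" sounds compelling
# in isolation – but a similar improvement rate in the untreated control
# group shows the recovery cannot be attributed to the treatment.
```

Looked at on its own, the treatment arm’s improvement rate tells you nothing; it is only the comparison against the control arm that does.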

The two groups – control and treatment – are randomised. Randomisation reduces the risk of differences between the two groups – in effect, it stops you from cherry-picking promising patients for your treatment group and biasing the study. Lack of randomisation risks imbalances in baseline characteristics between the two study groups – what if you are studying a condition that is linked to smoking and there are more smokers in the control group? If smokers with this condition have a poorer prognosis than non-smokers then this imbalance will affect the results: it will bias the study in favour of the treatment.
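A quick sketch of why random allocation helps (again in Python, with made-up numbers – the cohort size and 30% smoking rate are assumptions, not real data): shuffle the cohort, split it down the middle, and the confounder ends up roughly balanced between the arms.

```python
import random

random.seed(1)

# Hypothetical cohort: 2,000 patients, about 30% of whom smoke. Smoking
# here stands in for any confounder linked to poorer prognosis.
patients = [{"smoker": random.random() < 0.3} for _ in range(2000)]

# Randomise: shuffle the cohort, then split it into two arms of 1,000.
random.shuffle(patients)
treatment, control = patients[:1000], patients[1000:]

smokers_t = sum(p["smoker"] for p in treatment)
smokers_c = sum(p["smoker"] for p in control)
print(f"Smokers in treatment arm: {smokers_t}/1000")
print(f"Smokers in control arm:   {smokers_c}/1000")

# With random allocation the two arms carry similar numbers of smokers,
# so a smoking-related difference in prognosis cannot systematically
# favour one arm over the other.
```

The same logic applies to confounders nobody has thought to measure – that is the real power of randomisation: it balances the known and the unknown alike, on average.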

Ideally, your randomised controlled trial will be double-blind. This means that neither the subjects nor the researchers know who is in which group. If you know you are in the treatment group then you will likely have an expectation of improvement. If the researcher knows you are in the treatment group they might influence your expectations, or this knowledge might have an influence on their reporting of your progress.

The control group, the randomisation, and the blinding of participants and researchers are all attempts to minimise bias. There is no minimisation of bias in anecdotal experience, and there are many forms of cognitive bias that may make anecdotal experience unreliable. You cannot trust the evidence of your own eyes – because that evidence has been filtered by a brain prone to a huge number of cognitive biases; there is even a cognitive bias of failing to compensate for your own cognitive biases, the bias blind spot.

The Amateur

Now, if proficient doctors – with clinical experience to aid their judgement – consider it necessary to use the best available external evidence, what might make a layperson think that they could judge whether or not a treatment worked for them, without clinical experience, without medical training, and without taking the best available evidence into account? I’m not sure. But some most certainly do think they are able to come to conclusions on the efficacy of a treatment – without any regard for their lack of training or experience, and ignoring reliable evidence in the form of clinical trials. I know this because they keep popping up in the comments section of my blog telling me that vitamin pills cured their histadelia or that homeopathy worked for them. Perhaps the Dunning-Kruger effect is relevant?


Some links on EBM and hierarchies of evidence:

SUNY: http://library.downstate.edu/EBM2/2100.htm (an EBM tutorial from a Brooklyn medical centre)

Warwick: http://www2.warwick.ac.uk/services/library/tealea/sciences/medicine/evidence/hierachcy/ (EBM tutorial from Warwick Uni)

JAMA: http://jama.jamanetwork.com/article.aspx?volume=284&issue=10&page=1290 (“Unsystematic clinical observations are limited by small sample size and, more importantly, by limitations in human processes of making inferences […] Given the limitations of unsystematic clinical observations and physiologic rationale, EBM suggests a hierarchy of evidence […] Clinical research goes beyond unsystematic clinical observation in providing strategies that avoid or attenuate the spurious results.”)

It is always worth visiting the virtual James Lind Library.


  1. Zeno said,

    The problem with quacks is that they frequently want the individual clinical ‘expertise’ of their practitioner to take pride of place and usurp evidence from higher up the hierarchy – particularly when this more robust evidence is less than favourable towards their particular brand of quackery.

    As an aside, David Salsburg’s book, The Lady Tasting Tea – How Statistics Revolutionized Science in the Twentieth Century, describes how R A Fisher first realised how important randomisation was to ensure robust results. This was in 1923 at Rothamsted Agricultural Experimental Station – the same place that’s been in the news recently over GM crops.

    The book is a gentle meander through the history of stats and the people and personalities involved, including Fisher, Pearson, ‘Student’ (aka W S Gosset) and many others (including, surprisingly to me, John Maynard Keynes), and would be a great read for any quack – they might just learn why controls and randomisation are so important and why we need robust trials, analysed competently by good statisticians.

  2. alcaponejunior said,

    I thought Dunning-Kruger was the entire basis of alternative, anecdote based medicine.

  4. jdc325 said,


    There are some particularly good examples. I’d link to HolfordWatch on Dunning-Kruger and the well-known media nutritionist but the site appears to have fallen down the memory hole.

  5. jdc325 said,


    Thanks for the recommendation. I’ve put The Lady Tasting Tea on my to-read list.

  6. CPD via Favourites (up to 23rd May) « Teaching Science said,

    […] while we’re talking about science students, A Rough Guide to Evidence-Based Medicine by @jdc325 would provide an excellent reading assignment. They should get a lot out of it even if […]

  7. dingo199 said,

    thanks for this jdc- very useful. I may steal it.

  8. Anecdotal Evidence « Stuff And Nonsense said,

    […] we were able to tell what worked and what didn’t simply by trying it. Sadly, we can’t. Here, I wrote a rough guide to evidence-based medicine. EBM might not be perfect but carefully gathered […]
