I sometimes hear (or read) about the “power of placebo” – there are academic papers that examine the placebo effect, there is a book titled “The Powerful Placebo: From Ancient Priest to Modern Physician”, and there are news stories about the power of placebo (there is this recent article, for example, that describes “the magical power of placebo” in reference to $25 necklaces popular with some sportsmen).
There seems to be a tendency among some to support inert treatments on the grounds that “placebos can have real and beneficial effects” (Dr. Robert Ader, a psychologist). This story refers to:
a landmark report in 1955 called The Powerful Placebo. Viewed as groundbreaking, the analysis of dozens of studies by H.K. Beecher found that 32 percent of patients responded to a placebo
However, this abstract would seem to cast some doubt on Beecher’s analysis. The paper is paywalled, so I won’t quote from the full text (though I do have a PDF of it); here is what the abstract says:
In 1955, Henry K. Beecher published the classic work entitled “The Powerful Placebo.” Since that time, 40 years ago, the placebo effect has been considered a scientific fact. Beecher was the first scientist to quantify the placebo effect. He claimed that in 15 trials with different diseases, 35% of 1082 patients were satisfactorily relieved by a placebo alone. This publication is still the most frequently cited placebo reference. Recently Beecher’s article was reanalyzed with surprising results: In contrast to his claim, no evidence was found of any placebo effect in any of the studies cited by him. There were many other factors that could account for the reported improvements in patients in these trials, but most likely there was no placebo effect whatsoever. False impressions of placebo effects can be produced in various ways. Spontaneous improvement, fluctuation of symptoms, regression to the mean, additional treatment, conditional switching of placebo treatment, scaling bias, irrelevant response variables, answers of politeness, experimental subordination, conditioned answers, neurotic or psychotic misjudgment, psychosomatic phenomena, misquotation, etc. These factors are still prevalent in modern placebo literature. The placebo topic seems to invite sloppy methodological thinking. Therefore awareness of Beecher’s mistakes and misinterpretations is essential for an appropriate interpretation of current placebo literature.
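One of the factors the abstract lists, regression to the mean, is easy to demonstrate with a toy simulation. The sketch below (all parameters invented, not taken from Beecher’s trials) enrols simulated patients only when their symptoms happen to measure badly, then re-measures them later with no treatment at all; the group mean improves anyway:

```python
import random

random.seed(0)

def measure(true_level, noise_sd=2.0):
    """One symptom measurement: a stable true level plus day-to-day noise."""
    return true_level + random.gauss(0, noise_sd)

# A population whose underlying symptom severity never changes.
patients = [random.gauss(10, 2) for _ in range(100_000)]

# Trials tend to enrol people when their symptoms happen to be bad:
# keep only those whose baseline measurement exceeds a cutoff.
enrolled = [(p, b) for p in patients if (b := measure(p)) > 12]

baseline = sum(b for _, b in enrolled) / len(enrolled)

# Re-measure the same patients later, with no treatment whatsoever.
followup = sum(measure(p) for p, _ in enrolled) / len(enrolled)

print(f"mean at enrolment: {baseline:.2f}")
print(f"mean at follow-up: {followup:.2f}")  # noticeably lower, untreated
```

Nothing in the simulation got better; selecting on a noisy baseline guarantees that the second measurement looks like “improvement”, which is exactly the false impression the abstract warns about.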
There is also a paper from 2001 in the NEJM titled Is the Placebo Powerless?— An Analysis of Clinical Trials Comparing Placebo with No Treatment, which concluded:
In conclusion, we found little evidence that placebos in general have powerful clinical effects. Placebos had no significant pooled effect on subjective or objective binary or continuous objective outcomes. We found significant effects of placebo on continuous subjective outcomes and for the treatment of pain but also bias related to larger effects in small trials. The use of placebo outside the aegis of a controlled, properly designed clinical trial cannot be recommended. [My emphasis.]
The blogger Neuroskeptic wrote an interesting post about the placebo effect (here) and the points made in this post include: that we are medically treating people who we know don’t need actual medical treatment; that “prescribing someone any kind of treatment – whether real drugs, sugar pills, CAM, or anything else – legitimizes the notion that they’re ill”; that you can do harm by leading someone to see themselves as ill unnecessarily*; and, most interesting of all for me, that “[telling] you that it’s nothing to worry about, it’s normal, just get on with your life, and it’ll pass […] you don’t see yourself as suffering from a medical problem, so you don’t expect to need treatment” could be the most powerful placebo of all. (Read more here.)
In the Bad Science book, Ben Goldacre writes about internal mammary artery ligation, an operation that was once used to improve angina. Dr Goldacre refers to “the power of placebo” in discussing the finding that a placebo operation turned out to be “just as good as the real one.”
A placebo operation in the 1950s was found to be as effective for the treatment of angina as the real operation it was being compared with. Reading the paper 50 years later, the most striking part is the discussion section, where they quietly drop the operation and nobody stands up to point out the incredibly strange discovery that a placebo operation works for anything, let alone angina.
…in the 1950s, we used to ligate the internal mammary artery to treat angina: but when someone did a placebo-controlled trial, going to theatre, making an incision, but only pretending to ligate the internal mammary, the sham operation was as effective as the real one. Like morons, instead of applauding the power of the placebo, we just stopped doing the procedure, assuming that it was “useless”.
The funny thing is, though, that I have read elsewhere (in Dan Ariely’s Predictably Irrational) that “in both groups, the relief lasted about three months – and then complaints about chest pain returned.” If the relief was that short-lived and neither treatment (placebo or “real”) worked in the long term, is it really a good example of the power of placebo? Unfortunately, I have been unable to track down a copy of the paper in question and therefore cannot ascertain whether Dan Ariely’s characterisation of the findings is accurate.
I am as yet undecided as to the benefits of the allegedly “powerful” placebo, but it is worth noting that there may also be risks. I think I need more evidence regarding the risks and benefits of placebo in order to make up my mind.
*Note: I would point out that this is something relevant to the debate over screening women for breast cancer, as some women will “get a cancer diagnosis even though their cancer would not have led to death or sickness”, and others will have “higher, but not apparently pathologically elevated, levels of distress and anxiety” due to false positives. Telling people they are ill is not risk-free. [Edit, 15th November: there is some discussion of mammography in Gerd Gigerenzer’s Reckoning With Risk. Gigerenzer writes that: “Before age 50, mammography does not seem to have benefits, only costs. Women at 50, however, face the question of whether the potential benefits outweigh the costs.”]
Edit, 25th November 4pm:
There’s also this article on the Bad Science blog, which reports on a study where the researchers gave information sheets to one group of hotel workers and found that (as Ben Goldacre puts it) “simply being told about the value of what they were already doing caused a significant change for the better on every single one of the objective health measures recorded.” However, it is noted in a later comment that “the researchers seem not to have corrected for clustering in their data”, which “might reduce the statistical significance of the findings”. Once again, an apparent example of the power of placebo turns out not to be quite as good an example as it first seemed: comment.
Edit 27th November 10pm:
Following discussion with some lovely people on Twitter, I’ve been digging around and found some more interesting articles on the placebo effect. Here is a (rather large) PDF of a Cochrane review by Hróbjartsson and Gøtzsche:
Implications for practice
We found no evidence that placebo interventions in general have clinically important effects. A possible moderate effect on patient-reported continuous outcomes, especially pain, could not be clearly distinguished from reporting bias and other biases. We suggest that placebo interventions should not be used outside clinical trials, also for ethical reasons, as the use of placebo often involves deception.
Implications for research
The results of this review do not imply that no-treatment control groups can replace placebo control groups in randomised clinical trials without a risk of bias. Further research is needed to establish the extent to which the possible effect of placebo treatments on patient-reported continuous outcomes reflects a real effect, and to explore whether certain settings are associated with different effects of placebo.
I note that the authors point out that the “effect on patient-reported continuous outcomes, especially pain” was a “possible moderate effect” that “could not be clearly distinguished from reporting bias and other biases.” So even for pain, an area where the placebo effect might be expected to be “powerful”, the effect was at most moderate and could not be distinguished from bias.
Via HolfordWatch**, there is this article from the New York Times. From the article, it appears that the bias of compliance affected the placebo group [being compared with the clofibrate group in the Coronary Drug Project] to roughly the same extent as it affected those taking clofibrate. Here is the statistician David Freedman:
“…faithfully taking the placebo cuts the death rate by a factor of two […] people who take their placebo regularly are just different than the others. The rest is a little speculative. Maybe they take better care of themselves in general. But this compliance effect is quite a big effect.”
**In the comments after the HolfordWatch piece, dvnutrix wonders “whether or not there is a higher rate of compliance amongst placebo responders” and if “compliance and associated issues might be relevant confounders for the placebo response.”
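The “compliance effect” that Freedman describes can be made concrete with a toy simulation (all numbers invented). A latent “takes care of themselves” trait drives both pill-taking and survival, while the placebo pill itself does nothing; the compliant group still ends up with a much lower death rate:

```python
import random

random.seed(1)

deaths = {"compliant": 0, "non_compliant": 0}
counts = {"compliant": 0, "non_compliant": 0}

for _ in range(200_000):
    # A latent trait drives BOTH pill-taking and survival; the placebo
    # pill itself has no effect whatsoever in this simulation.
    conscientious = random.random() < 0.5
    compliant = random.random() < (0.9 if conscientious else 0.2)
    death_risk = 0.05 if conscientious else 0.15

    group = "compliant" if compliant else "non_compliant"
    counts[group] += 1
    deaths[group] += random.random() < death_risk

for group in ("compliant", "non_compliant"):
    print(f"{group}: death rate {deaths[group] / counts[group]:.3f}")
```

With these (made-up) parameters the non-compliant group dies at roughly twice the rate of the compliant group, despite the pill being inert, which is exactly the confounding dvnutrix raises.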
Parkinson’s: Sham Procedure versus Usual Care; Mechanisms of placebo effect; Benedetti book on placebo (the “look inside” feature is worth taking a peek at); Pubmed link to Cochrane on placebo; The Placebo Prescription – The New York Times; HolfordWatch comment on Crippen re placebo; Placebo and arthroscopic surgery; Pacemakers and placebo effect; Moerman: Placebo effect in ulcers (bias of compliance). And still more: Greg Laden; PalMD; Steve Silberman.
In response to the NEJM article I link to in the main section of this blogpost there is this from Wampold et al: link, responded to by Hróbjartsson and Gøtzsche here. In response to Wampold et al and Hróbjartsson and Gøtzsche, Hunsley and Westmacott offer this:
Meta-analytic results reported by the two sets of authors are nearly identical, yet their conclusions differ dramatically. In our comment, we discuss the findings of the respective authors and consider options for representing and interpreting the magnitude of meta-analytic effect size estimates. We conclude that although the meta-analyses described indicate that placebo effects do exist and cannot be dismissed as unimportant, given contextual information, it is consistent with existing research to describe the obtained mean effect size for placebos in medicine as small in magnitude.
Edit, 2nd October 2011: At least in this study, it appears that a placebo effect can operate when the outcome of interest is self-rated improvement, but not when an objective outcome is used. This finding is in accordance with what Hróbjartsson and Gøtzsche originally reported…