New York Times – health and science stories

October 2, 2008 at 1:28 pm (Bad Science, Media)

Here and here are two good examples of why I think the NYT is better at reporting on health and science stories than the UK mainstream press is.

The first link is to a well-written story on using the scientific method to sort Alt Med claims. The journalist seems to understand the subject and is given sufficient space to write about it properly, two things that I think tend to be missing from the mainstream media outlets in the UK. He also explains things like this: RCTs – “In such trials, scientists randomly assign patients to treatment or control groups with the aim of eliminating bias from clinician and patient decisions”; Size of study – “The smaller the sample size […] the greater the risk of error, including false positives and false negatives”. It just seems to me to be less, well, dumbed down than the UK MSM reports.
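
To make the sample-size point a bit more concrete, here’s a rough simulation of my own (nothing to do with the NYT piece itself): it assumes a modest real benefit, normally distributed outcomes and a plain two-arm t-test, and simply counts how often trials of various sizes come out “significant” at p < 0.05.

```python
# A rough illustration (my own, not from the NYT article) of why small trials
# are error-prone. Assumes normally distributed outcomes and a two-arm t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
true_effect = 0.3          # a modest real benefit, in standard-deviation units
n_simulated_trials = 2000  # simulated trials per scenario

def detection_rate(n_per_arm, effect):
    """Fraction of simulated trials that come out 'significant' (p < 0.05)."""
    hits = 0
    for _ in range(n_simulated_trials):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        _, p = ttest_ind(treated, control)
        if p < 0.05:
            hits += 1
    return hits / n_simulated_trials

for n in (20, 50, 200, 500):
    power = detection_rate(n, true_effect)  # misses here are false negatives
    false_pos = detection_rate(n, 0.0)      # 'hits' here are false positives
    print(f"n per arm = {n:>3}: real effect detected {power:.0%} of the time, "
          f"false positives {false_pos:.1%}")
```

With 20 patients per arm the (real) effect is missed most of the time, which is roughly why a handful of tiny trials tells us much less than one properly sized one.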

The second piece is entitled “Searching for Clarity: A Primer on Medical Studies” and is very interesting indeed. They use the example of Frankie Avalon defending the antioxidant Beta Carotene to illustrate the importance of weighing up the evidence and giving the most weight to the strongest evidence available (if this were a UK paper, they’d probably be uncritically reporting Cliff Richard’s views on antioxidants). They attempt to answer this question: “That, of course, is the question about medical evidence. What are you going to believe, and why? Why should a few clinical trials trump dozens of studies involving laboratory tests, animal studies and observations of human populations?”, using the examples of Beta Carotene and the Women’s Health Initiative. At the end of the first page, they refer to compliant pill-takers – I’ll come back to these compliant pill-takers a little later on.* You will note that this article has two pages. You see, the NYT (unlike the UK MSM) gives sufficient space to health and science stories. The article by Gary Taubes I link to below is nine pages long, for example. I think this provision of space is helpful, but you need more than that to produce decent reports on health and science – such as journalists who have some knowledge of the subject they are writing about.

Here’s a recent story: Vitamin C and cancer drugs. The Daily Mail had a 283-word piece that seemed fairly reasonable – but they captioned their picture of some oranges “Vitamin C supplements could reduce the effectiveness of cancer drugs”. I think it’s daft to caption a picture of oranges with a scary note about vit C reducing the effectiveness of cancer meds, when the authors of the study (as the Daily Mail wrote) said that patients should eat a healthy diet (i.e., they should eat fruit and veg containing vit C rather than take pills). The caption was the right one – but for the wrong picture. The Telegraph had a 345-word piece that quoted the researchers, but ended with a vitamin pill industry spokesperson pointing out that “It is important to note that this study was conducted in cancer cells, and in mice, in a laboratory setting. The researchers did not give vitamin C to human beings.” Which is a rather ironic quote, given the vitamin pill industry’s love of in vitro studies when they have positive results. I think the reason they gave space to both the researchers’ views and the spokesperson’s views is probably for “balance” – as if a pill industry spokesperson’s opinion and research findings should carry equal weight. The NYT goes for a 391-word piece that is free from spurious attempts at balance and daft picture captions. To be fair, though, there’s not a huge difference in the way these three newspapers reported the story.

* The bias of compliance popped up in this article by Gary Taubes on epidemiology – “Do We Really Know What Makes Us Healthy?” – and is something we should think about next time a vitamin pill salesman assures us that people who take vitamin pills live longer and are healthier, because it offers an alternative explanation to the one that is preferred by the pill salesman. See also Patrick Holford on the Gladys Block study: I blogged it here (albeit somewhat inadequately).
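
To see how the compliance bias can produce a misleading picture, here’s a deliberately artificial little simulation (the numbers are entirely invented; this isn’t taken from the Taubes article or the Block study). The pill in it does nothing at all, yet the people who take it faithfully still come out looking healthier, simply because the kind of person who remembers to take a pill every day tends to be healthier in other ways.

```python
# A made-up illustration of compliance bias: the pill below is completely
# inert, but pill takers still look healthier because compliance tracks an
# underlying trait (health-consciousness) that really does affect outcomes.
# All probabilities are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_people = 100_000

# Hidden trait: general health-consciousness (diet, exercise, check-ups...).
health_conscious = rng.random(n_people) < 0.5

# Assume health-conscious people are far more likely to take the inert pill
# every day (80% vs 20%).
takes_pill = rng.random(n_people) < np.where(health_conscious, 0.8, 0.2)

# Five-year risk of a bad outcome depends only on the hidden trait,
# not on the pill.
bad_outcome = rng.random(n_people) < np.where(health_conscious, 0.05, 0.10)

for label, group in (("pill takers", takes_pill), ("non-takers", ~takes_pill)):
    print(f"{label}: {bad_outcome[group].mean():.1%} had a bad outcome")
```

The gap between the two groups is produced entirely by who chooses to take the pill, not by anything in it, which is exactly the alternative explanation the pill salesman would rather we didn’t think about.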

Hat tips: PV posted one of the NYT stories on the bad science forum recently and holfordwatch posted both on their miniblog. I first spotted the Taubes piece on the Holford Watch blog, too.

5 Comments

  1. Claire said,

    Dr Steven Novella over at SBM has reviewed these two articles, but (perhaps predictably) is a lot less impressed by the Broad piece on science and CAM. One of his criticisms is directed towards the favourable reference to the acupuncture trial:

    “…He then relates a 2004 study of acupuncture for knee osteoarthritis, claiming it as a success for a CAM modality. We can quibble about the study itself – the effect sizes were actually quite small and the 25% drop out rate in the acupuncture and sham acupuncture groups could wipe out the statistical significance.

    More telling, however, is that even though he quotes Dr. Berman who conducted the study for the article, neither he nor Dr. Berman mention the later meta-analysis in which Dr. Berman and the other study authors concluded:

    Sham-controlled trials show clinically irrelevant short-term benefits of acupuncture for treating knee osteoarthritis. Waiting list-controlled trials suggest clinically relevant benefits, some of which may be due to placebo or expectation effects.

    So maybe acupuncture does not have any specific effects for knee osteoarthritis after all. Perhaps it is just another beta carotene episode where promising early research does not pan out. But the reader never hears about this from Broad, nor any discussion of the huge plausibility problem with acupuncture and most of CAM. …”

  2. jdc325 said,

    Gah, no – I think I may’ve fallen victim to confirmation bias. I have to admit, perhaps I didn’t read the Broad piece as carefully as I might have done, but I really thought the article was pretty reasonable on the whole. Perhaps I was overly focussed on the way they explained the purpose of RCTs and the reason why smaller studies aren’t as reliable as larger ones. The message I took from the article was that there are lots of CAM studies, but too few are RCTs and too many CAM trials have too few subjects to reliably tell us anything. I’ve re-read it, though, and I still think Broad’s writing is reasonable (certainly in comparison to the crap we get in the Daily Mail or the Independent).

    I still think the worst bits in Broad’s article are the quotes from CAM supporters – Broad writes things like “The high costs of good clinical trials, which can run to millions of dollars, means relatively few are done in the field of alternative therapies and relatively few of the extravagant claims are closely examined.” That is, he points out that the claims are extravagant and unevidenced – something Bad Science bloggers have been saying for years. He also writes about multi-centre trials reducing the odds of false positives and investigator bias – not something that a UK MSM journo or CAM proponent would ever draw attention to (they like to pretend CAM research is free of false positives and investigator bias). Dr Khalsa’s complaint about conventional medicine looking at magic pills or procedures to take away diseases applies equally to most (if not all) forms of CAM, and Berman’s claim that acupuncture was an effective complement to conventional arthritis treatment was certainly a dodgy one. I think Khalsa and Berman can be fairly criticised for these remarks, but I don’t think Broad’s work was as bad as Novella is painting it. Interesting to see how another pair of eyes views the same article, though – thanks for the SBM link, Claire.

  3. Claire said,

    What seems (inter alia) to be getting Dr Novella’s goat regarding the Broad article is the ‘prior probability’ issue. He praises Gina Kolata for including this in her piece:

    “…I was pleasantly surprised when Kolata then went a step beyond just laying out the advantages of RCTs. For the first time in a mainstream outlet that I have personally seen, she relates the importance of prior probability. She even talks about Bayes Theorem – analyzing a claim based upon prior probability and the new data. This is specifically what we advocate as science-based medicine and distinguishes SBM from evidence-based medicine (EBM), which does not consider prior probability.

    …”

    The prior probability/inherent implausibility issue regarding CAM treatments is a theme at SBM, where the view appears to be that it has an important role in deciding where scarce public research money should be spent. He also finds in the Broad article a ‘subtle presuppos[ition]’ that CAM treatments work, and that its primary focus is to claim that NCCAM is raising the standard of evidence within CAM, without really tackling problems such as prior probability or the net effect of NCCAM on the quality of science within CAM. I agree with you that Dr Novella could have been more generous in acknowledging the good points made by Broad, but I did come away from Broad’s NYT article with the feeling that I was being led to believe that CAM treatments work and all that’s needed is to fix the evidence lag problem, rather than to wonder whether the poor quality of the evidence might itself be a reason to be sceptical.

    While I understand the SBM lines of argument regarding prior plausibility, I am concerned that they can be misrepresented to (and misunderstood by) the lay public in such a way as to afford a spurious claim to the moral high ground to proponents of alternative therapies. The old chestnut: mainstream medicine & big pharma are so afraid of alternative medicine that they want to shut down the research (ignoring, of course, the tendency to then quarrel with the evidence when good-quality research shows treatments to have no more effect than placebo). Given the current state of MSM health reporting, I’m not sure notions like prior plausibility would get a proper airing.

  4. jdc325 said,

    I liked the Quackometer post that asked whether we should fund more research in CAM (specifically homeopathy). Figure 1 in his post shows the Quackometer Quackery Quadrants, which plot the evidence for a treatment against its plausibility. I don’t think anyone is saying that we shouldn’t fund promising treatments that have an unknown mechanism of action, but SBM and the Quackometer have argued that we shouldn’t fund research into treatments that would require the overturning of everything we know about chemistry and physics.

  5. Claire said,

    “I don’t think anyone is saying that we shouldn’t fund promising treatments that have an unknown mechanism of action, but SBM and the Quackometer have argued that we shouldn’t fund research into treatments that would require the overturning of everything we know about chemistry and physics.”

    Precisely. The challenge, IMO, is to communicate this to the lay public, some of whom might be inclined to sympathise with proponents of implausible treatments who protest that their inability to attract public research funds is evidence of mainstream medicine and pharma’s efforts to suppress them.
