Three common mistakes in medical journalism

I love Gary Schwitzer, a former journalism professor at the University of Minnesota and a key advocate for better health and medical reporting at HealthNewsReview.org. Schwitzer has a quick list of the most common mistakes reporters make when writing about medical science, and I think it's something everybody should take a look at.

Why does this bit of journalism inside-baseball matter to you? Simple. If you know how journalists are most likely to screw up, you'll be less likely to be led astray by those mistakes. And that matters a lot, especially when it comes to health science, where people are likely to make important decisions based partly on what they read in the media.

The three mistakes:

Absolute versus relative risk/benefit data

Many stories use relative risk reduction or benefit estimates without providing the absolute data. In other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).
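
To make that arithmetic concrete, here is a minimal Python sketch using the hip-fracture numbers above (the `risk_reduction` helper is just an illustration, not from any cited study):

```python
def risk_reduction(control_risk: float, treated_risk: float) -> dict:
    """Return absolute and relative risk reduction for two event rates."""
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    return {"absolute": arr, "relative": rrr}

# The hip-fracture example: 2 fractures per 100 untreated women,
# 1 fracture per 100 treated women.
r = risk_reduction(control_risk=2 / 100, treated_risk=1 / 100)
print(f"Relative risk reduction: {r['relative']:.0%}")  # 50% -- the headline number
print(f"Absolute risk reduction: {r['absolute']:.0%}")  # 1%  -- the full picture
```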

Association does not equal causation

A second key observation is that journalists often fail to explain the inherent limitations of observational studies – especially that they cannot establish cause and effect. They can point to a strong statistical association, but they can’t prove that A causes B, or that if you do A you’ll be protected from B. Yet over and over we see news stories suggesting causal links, using active verbs that inaccurately suggest established benefits.

How we discuss screening tests

The third recurring problem I see in health news stories involves screening tests. ... “Screening,” I believe, should only be used to refer to looking for problems in people who don’t have signs or symptoms or a family history. So it’s like going into Yankee Stadium filled with 50,000 people about whom you know very little and looking for disease in all of them. ... I have heard women with breast cancer argue, for example, that mammograms saved their lives because they were found to have cancer just as their mothers did. I think that using “screening” in this context distorts the discussion because such a woman was obviously at higher risk because of her family history. She’s not just one of the 50,000 in the general population in the stadium. There were special reasons to look more closely in her. There may not be reasons to look more closely in the 49,999 others.

Via The Knight Science Journalism Tracker



  1. The fourth mistake is accepting the tyranny of the list of a small number of atomic items (I blame PowerPoint™)   ;-)

  2. Where I live, I’m constantly gritting my teeth through newscasts which spout this kind of stuff:
    “Two years ago, it increased by 5,430 cases over the previous year, but last year it increased by 13%.”

    Gah!  Ff…f.f.f.f.f…ff.ff.fff..ff..!

    That’s when I reach for the bottle…

  3. The second “mistake” is a little more complicated.

    It is true that cause-and-effect claims with observational data deserve special scrutiny.  Yet when there is a strong source of variation in the main explanatory variable, good statistical analysis with an appropriate model and controls can sometimes generate plausible cause-and-effect claims.

    For many important questions, a random assignment study is not possible.  Insisting on random-assignment designs as a requirement for all cause-and-effect claims amounts to setting the burden of proof arbitrarily high.  A better approach is to reflect on the possible benefit or harm from a Type I error (believing a claim when it is false) and Type II error (rejecting a claim when it is true).  In some circumstances, it is good research policy to accept only random-assignment studies.  In other circumstances, it is wise to be more flexible.

  4. Sorry for being completely obtuse, and it is obviously my fault for being so, but phrases like…

    “Many stories use relative risk reduction or benefit estimates without providing the absolute data…”

    … and now I’m lost! 

    If someone can understand this “statistician” or “actuary speak”, or is even smart enough to know what was written upon first glance, then these readers are probably aware enough to know the perils of the “dark arts” of navigating a science article.

    As for one of your lower-IQ readers, I gained nothing, understood nothing, and will go on believing all the articles at Science Daily as factual and accurate due to your “over-my-head explanation.”

    Please take this as constructive blog criticism and use your synopsis to help some of us get it.

    1. You might want to read the sentence that follows the one you quoted, since it explains it pretty well…

      To reiterate: a “50% reduction in risk” sounds like a huge amount and makes a nice headline. This is ‘relative risk reduction’. It is often deceitful because you don’t know how big the risk is.

      I could also say “this reduced the risk from a 1-in-a-million chance to a 1-in-2-million chance”. This is the ‘absolute risk reduction’ and provides a more useful risk assessment.

      Note that going from 1-in-a-million to 1-in-2-million is a 50% reduction in risk.

  5. This is a really common problem with how drug studies are advertised to physicians and patients alike.  A drug is touted as reducing someone’s risk of (let’s say) a heart attack by 30%.  If you look at the probability of having an MI as being in the range of 3-5% over 5 years for that target population, the actual risk reduction is only 1-2%.  Another way to look at it (and a useful way to describe that situation to patients) is called NNT, or “number needed to treat” (see the sketch after these comments). In other words I would need to treat approximately 100 people like that patient for 5 years for one of them to avoid having a heart attack.

  6. Nick says, “In other words I would need to treat approximately 100 people like that
    patient for 5 years for one of them to avoid having a heart attack.”  The points made in the article are valid.  Nick’s comment is not. Therein lies the rub.

    Nick might treat only one patient, and that patient might fall within the class of persons who can benefit from the procedure or medication. When you tell someone that there is a 1% chance of something adverse occurring, the patient gets worried, and a lot of people want to reduce the risk to zero.

    There is an additional shortcoming in many medical articles. They rarely tell you why you should accept, as adequate for their purposes, the size of the group from which they derive their statistical conclusions. Is it any wonder that the appropriateness of eating or not eating certain foods, taking or not taking certain medication, or having or not having a certain medical procedure seems to go on an acceptability roller-coaster ride every year?
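
For anyone who wants the arithmetic behind Nick’s NNT figure, here is a minimal Python sketch; the `nnt` helper and the 30% / 3-5% inputs are just the illustrative numbers from his comment, not from any particular study:

```python
import math

def nnt(baseline_risk: float, relative_risk_reduction: float) -> int:
    """Number needed to treat: the reciprocal of the absolute risk reduction."""
    arr = baseline_risk * relative_risk_reduction  # absolute risk reduction
    return math.ceil(1 / arr)                      # patients treated per bad outcome avoided

# A 30% relative reduction on a 3-5% five-year baseline risk of MI:
for baseline in (0.03, 0.05):
    print(f"Baseline risk {baseline:.0%}: NNT ≈ {nnt(baseline, 0.30)}")
# Baseline risk 3%: NNT ≈ 112
# Baseline risk 5%: NNT ≈ 67
```

That is where the “approximately 100 people” figure comes from: divide 1 by the absolute risk reduction.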
