Three common mistakes in medical journalism


7 Responses to “Three common mistakes in medical journalism”

  1. herrnichte says:

    The fourth mistake is accepting the tyranny of the list of a small number of atomic items (I blame PowerPoint™)   ;-)

  2. Paul Renault says:

    Where I live, I’m constantly gritting my teeth through newscasts which spout this kind of stuff:
    “Two years ago, it increased by 5,430 cases over the previous year, but last year it increased by 13%.”

    Gah!  Ff…f.f.f.f.f…ff.ff.fff..ff..!

    That’s when I reach for the bottle…

  3. usfoodpolicy says:

    The second “mistake” is a little more complicated.

    It is true that cause-and-effect claims with observational data deserve special scrutiny.  Yet, when there is a strong source of variation in the main explanatory variable, good statistical analysis with appropriate model and controls can sometimes generate plausible cause and effect claims.

    For many important questions, a random assignment study is not possible.  Insisting on random-assignment designs as a requirement for all cause-and-effect claims amounts to setting the burden of proof arbitrarily high.  A better approach is to reflect on the possible benefit or harm from a Type I error (believing a claim when it is false) and Type II error (rejecting a claim when it is true).  In some circumstances, it is good research policy to accept only random-assignment studies.  In other circumstances, it is wise to be more flexible.

  4. SCAQTony says:

    Sorry for being completely obtuse, and it is obviously my fault for being so, but phrases like…

    “Many stories use relative risk reduction or benefit estimates without providing the absolute data…”

    … and now I’m lost! 

    If someone can understand this “statistician” or “actuary speak”, or is even smart enough to know what was written upon first glance, then these readers are probably aware enough to know the perils of the “dark arts” of navigating a science article.

    As for one of your lower-IQ readers, I gained nothing, understood nothing, and will go on believing all the articles at Science Daily as factual and accurate due to your “over-my-head explanation.”

    Please take this as constructive blog criticism and use your synopsis to help some of us get it.

    • Camp Freddie says:

      You might want to read the sentence that follows the one you quoted, since it explains it pretty well…

      To re-iterate: A “50% reduction in risk” sounds like a huge amount and makes a nice headline.  This is ‘relative risk reduction’.  It is often deceitful because you don’t know how big the risk is.  I could also say “this reduced the risk from a 1-in-a-million chance to a 1-in-2-million chance”.  This is the ‘absolute risk reduction’ and provides a more useful risk assessment.  Note that going from 1-in-a-million to 1-in-2-million is a 50% reduction in risk.
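
      The arithmetic above can be sketched in a few lines of Python; the 1-in-a-million figures are Camp Freddie's illustration, not data from any real study:

      ```python
      # Camp Freddie's illustrative example: a risk that halves,
      # but was tiny to begin with.
      risk_before = 1 / 1_000_000   # 1-in-a-million
      risk_after = 1 / 2_000_000    # 1-in-2-million

      # The headline number: how much the risk shrank, relative to itself.
      absolute_reduction = risk_before - risk_after
      relative_reduction = absolute_reduction / risk_before

      print(f"Relative risk reduction: {relative_reduction:.0%}")    # "50%" -- sounds huge
      print(f"Absolute risk reduction: {absolute_reduction:.7f}")    # vanishingly small
      ```

      The same 50% headline is compatible with any baseline risk, which is exactly why the absolute figure is the one worth reporting.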

  5. nickdoc47 says:

    This is a really common problem with how drug studies are advertised to physicians and patients alike.  A drug is touted as reducing someone’s risk of (let’s say) a heart attack by 30%.  If you look at the probability of having an MI as being in the range of 3-5% over 5 years for that target population, the actual risk reduction is only 1-2%.  Another way to look at it (and a useful way to describe that situation to patients) is called NNT or “number needed to treat”.  In other words, I would need to treat approximately 100 people like that patient for 5 years for one of them to avoid having a heart attack.
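
    nickdoc47's NNT arithmetic can be sketched in Python; the 4% baseline is an assumed mid-point of the 3-5% range mentioned above, not a figure from any specific trial:

    ```python
    # Assumed illustrative numbers: 4% five-year MI risk, and a drug
    # advertised as giving a "30% risk reduction" (relative).
    baseline_risk = 0.04
    relative_risk_reduction = 0.30

    treated_risk = baseline_risk * (1 - relative_risk_reduction)   # 2.8%
    absolute_risk_reduction = baseline_risk - treated_risk         # 1.2%

    # NNT: how many patients must be treated for one to benefit.
    number_needed_to_treat = 1 / absolute_risk_reduction

    print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
    print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
    ```

    With these assumed inputs the NNT comes out near 80-100, which is the order of magnitude behind the "treat approximately 100 people" framing.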

  6. Manydocs says:

    Nick says, “In other words I would need to treat approximately 100 people like that
    patient for 5 years for one of them to avoid having a heart attack.”  The points made in the article are valid.  Nick’s comment is not. Therein lies the rub.

    Nick might treat only one patient, and that patient might fall within the class of persons who can benefit from the procedure or medication. When you tell someone that there is a 1% chance of something adverse occurring, the patient gets worried, and a lot of people want to reduce the risk to zero.

    There is an additional shortcoming in many medical articles. They rarely tell you why you should accept as adequate, for statistical purposes, the size of the group from which they derive their conclusions. Is it any wonder that the appropriateness of eating or not eating certain foods, taking or not taking certain medication, or having/not having a certain medical procedure seems to go on an acceptability roller-coaster ride every year?

Leave a Reply