Striking new scientific study shows strikingly that scientific studies with striking results are often false

The tl;dr: If a medical study seems too good to be true, it probably is. Eryn Brown in the Los Angeles Times writes about a statistical analysis of nearly 230,000 trials from a variety of medical disciplines, published today in the Journal of the American Medical Association. The analysis, by Stanford's Dr. John Ioannidis and a team of fellow researchers, looked at study results claiming a "very large effect" and found that those claims seldom held up when other research teams tried to replicate them.

One such example: the cancer drug Avastin. Clinical trials suggested the drug might double the time breast cancer patients could live with their disease without getting worse. But follow-up studies found no improvements in progression-free survival, overall survival or patients' quality of life. As a result, the U.S. Food and Drug Administration in 2011 withdrew its approval to use the drug to treat breast cancer, though it is still approved to treat several other types of cancer.

With early glowing reports, Ioannidis said, "one should be cautious and wait for a better trial."

Read the full LAT article. Here's the JAMA paper, but you have to be a paid subscriber to read it.



    1. Dammit, there goes my “don’t trust your life to a medicine named on Talk Like a Pirate Day” joke.

  1. For the layman, an explanation of the ‘striking new study shows striking new studies are probably false’ result was published recently, which explains everything pretty clearly. I can’t remember the name of the journal, though; it was that journal that lists all journals that don’t contain references to themselves.

    EDIT: Found the journal, but the title’s covered by a curry stain. Sorry.

  2. Might the reverse also be the case: that if results are too bad to be true, they probably aren’t? Of course, there are probably very few follow-up studies when results are bad… Ah, the age of science, ya gotta love it.

    1. Initial results of studies are skewed to make the drug seem preferable, not the other way round. You have to wait until impartial studies reveal the drug’s true efficacy, or until a more scrupulous review of the initial data is done.

  3. Avastin is currently used by some to treat certain brain cancers (glioblastoma being one) as a second-line treatment upon progression, after first-line radio-chemo treatments inevitably stop doing much.
    The glioblastoma multiforme Wikipedia page has some more details.

  4. The headline you have for this story is wrong. The results aren’t false, and it continues the ‘big companies, therefore conspiracies’ general theme a lot of people here subscribe to. A better headline might emphasize that strong results are often the work of chance.

    Which isn’t that surprising: a p-value threshold of 0.05, as used in medicine, means a 1-in-20 chance of a false positive even when the drug does nothing. Think of how many trials there are, and you get a lot of those. Also, of all the drugs that “didn’t work,” a lot of them probably did, but got a bad draw.

    Reading the LA Times article, it gives a balanced view – the strongly positive results are often the result of a small sample size or a small number of events. So are the strongly negative ones. The larger studies find the truth is probably somewhere in the middle (but still limited by sample size; the ‘true’ effect could be larger or smaller). Meta-analysis is an attempt to find this ‘true’ effect from all of the studies.
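    A quick simulation makes the multiple-testing point concrete. This is a minimal sketch, not anything from the JAMA paper: the trial count, sample size, and the |t| > 2 cutoff are all illustrative assumptions, and the “drug” here has no real effect by construction.

    ```python
    import random
    import statistics

    random.seed(1)

    N_TRIALS = 1000   # hypothetical number of independent trials (assumption)
    N_PER_ARM = 20    # small sample per arm, as in many early trials (assumption)

    def simulate_null_trial(n_per_arm):
        """Two arms drawn from the same distribution: the drug does nothing."""
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        # Welch-style t statistic, computed by hand to stay dependency-free.
        mean_diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.variance(control) + statistics.variance(treated))
              / n_per_arm) ** 0.5
        return mean_diff / se

    t_stats = [simulate_null_trial(N_PER_ARM) for _ in range(N_TRIALS)]
    # |t| > 2 is roughly the two-sided p < 0.05 cutoff at this sample size.
    winners = [t for t in t_stats if abs(t) > 2]

    print(f"{len(winners)} of {N_TRIALS} null trials came out 'significant'")
    # Expect roughly 50: the 1-in-20 false-positive rate compounding across trials.
    print(f"mean |t| among the 'winners': "
          f"{statistics.mean(abs(t) for t in winners):.2f}")
    ```

    Note the second number: only extreme draws clear the significance bar, so the published “winners” systematically overstate the effect. That is exactly the inflation the follow-up studies and meta-analyses end up correcting.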

  5. We physicians rarely take interest in any one published study for this reason, unless it’s a study with a very large number of enrolled patients and we’ve been awaiting it specifically.
