Is coffee bad for you or good for you? Does acupuncture actually work, or does it produce a placebo effect? Do kids with autism have different microbes living in their intestines, or are their gut flora largely the same as neurotypical children? These are all good examples of topics that have produced wildly conflicting results from one study to another. (Side-note: This is why knowing what a single study says about something doesn't actually tell you much. And, frankly, when you have a lot of conflicting results on anything, it's really easy for somebody to pick the five that support a given hypothesis and not tell you about the 10 that don't.)
But why do conflicting results happen? One big factor is experimental design. Turns out, there's more than one way to study the same thing. How you set up an experiment can have a big effect on the outcome. And if lots of people are using different experimental designs, it becomes difficult to accurately compare their results. At the Wonderland blog, Emily Anthes has an excellent piece about this problem, using the aforementioned research on gut flora in kids with autism as an example.
For instance, in studies of autism and microbes, investigators must decide what kind of control group they want to use. Some scientists have chosen to compare the guts of autistic kids to those of their neurotypical siblings, while others have used unrelated children as controls. This choice of control group can influence the strength of the effect that researchers find—or whether they find one at all.
Scientists also know that antibiotics can have profound and long-lasting effects on our microbiomes, so they agree on the need to exclude children from these studies who have taken antibiotics recently. But what’s recently? Within the last week? Month? Three months? Each investigator has to make his or her own call when designing a study.
Then there’s the matter of how researchers collect their bacterial samples. Are they studying fecal samples? Or taking samples from inside the intestines themselves? The bacterial communities may differ in samples taken from different places.
Are pesticides helpful things that allow us to produce more food, and keep us safe from deadly diseases? Or are they dangerous things that kill animals and could possibly be hurting us in ways we haven't even totally figured out just yet?
Actually, they're both.
This week, scientists released a research paper looking at the health benefits (or lack thereof) of organic food—a category which largely exists as a response to pesticide use in agriculture. Meanwhile, Rachel Carson's book Silent Spring, which forced Americans to really think about the health impacts of pesticides for the first time, was published 50 years ago this month.
The juxtaposition really drove home a key point: We're still trying to figure out how to balance benefits and risks when it comes to technology. That's because there's no easy, one-size-fits-all answer. Should we use pesticides, or should we not use them? The data we look at is objective, but the answer is, frankly, subjective. It also depends on a wide variety of factors: what specific pesticide we're talking about; where it's being used and how it's being applied; what we've done to educate the people applying it about proper, safe usage. It's complicated. And it's not just about how well the tools work. It's also about how we use them.
“It is not my contention,” Carson wrote in Silent Spring, “that chemical insecticides must never be used. I do contend that we have put poisonous and biologically potent chemicals indiscriminately into the hands of persons largely or wholly ignorant of their potentials for harm. We have subjected enormous numbers of people to contact with these poisons, without their consent and often without their knowledge.”
Over at Download the Universe, Ars Technica science editor John Timmer reviews a science ebook whose science leaves something to be desired.
Science in fiction affects our ability to understand science in real life. For instance, you might already be familiar with the idea that detective shows on TV, particularly forensics shows like CSI, might be influencing what juries expect to see in a courtroom.
This is called the "CSI effect" and it's hotly debated. Some prosecutors think it has a real impact on jury decisions—if jurors don't get the fancy scientific evidence they've been conditioned to expect, then they won't convict. Meanwhile, though, empirical evidence seems to show a more complicated pattern. Surveys of more than 2,000 Michigan jurors found that, while people did strongly expect to see high-tech forensic evidence during trials, that expectation probably had more to do with the general proliferation of technology throughout society than with any particular TV show. More interestingly, that broad expectation didn't seem to definitively influence how jurors voted in a specific trial. In other words: The jury is still out. (*Puts on sunglasses*)
A FRONTLINE documentary that airs tomorrow centers on a related question: Whether or not shows like CSI lead juries to expect more technology, those shows do present a wildly inaccurate portrait of how accurate that technology is. The reality is, many of the tools and techniques used in detective work have never been scientifically verified. We don't know that they actually tell us what they purport to tell us. Other forensic technologies do work, but only if you use them right—and there's no across-the-board standard guaranteeing that happens.
Even ideas you think you can trust implicitly—like fingerprint evidence—turn out to have serious flaws that are seriously under-appreciated by cops, lawyers, judges, and juries.
Brandon Mayfield, an Oregon lawyer, was at the center of international controversy in 2004 after the FBI and an independent analyst incorrectly matched his prints to a partial print found on a bag of detonators from the Madrid terrorist bombings.
Cognitive neuroscientist Itiel Dror asked five fingerprint experts to examine what they were told were the erroneously matched prints of Mayfield. In fact, they were re-examining prints from their own past cases. Only one of the experts stuck by their previous judgments. Three reversed their previous decisions and one deemed the prints “inconclusive.”
Dror’s argument is that these competent and well-meaning experts were swayed by “cognitive bias”: what they knew (or thought they knew) about the case in front of them swayed their analysis. The Mayfield case and studies like Dror’s have changed how fingerprints are used in the criminal justice system. The FBI no longer testifies that fingerprints are 100 percent infallible.
The Real CSI episode of FRONTLINE airs tomorrow, April 17th. Check out the FRONTLINE website for more information.
Image: Fingerprint developed with black magnetic powder on a cool mint Listerine oral care strip, a Creative Commons Attribution (2.0) image from jackofspades's photostream.
In the course of preparing for a panel here at the Conference on World Affairs, I ran across a 2009 editorial by environmental journalist Fred Pearce, in which he explains why current global population trends aren't as horrific as they're often made out to be. I thought you should read it.
Global population is going up, Pearce writes, but that's not the same thing as saying that birth rates are going up. And, in the long run, that distinction matters. Around the world—not just in the West—human birthrates are decreasing. And they've been decreasing for a really long time.
Wherever most kids survive to adulthood, women stop having so many. That is the main reason why the number of children born to an average woman around the world has been in decline for half a century now. After peaking at between 5 and 6 per woman, it is now down to 2.6.
This is getting close to the “replacement fertility level” which, after allowing for a natural excess of boys born and women who don’t reach adulthood, is about 2.3. The UN expects global fertility to fall to 1.85 children per woman by mid-century. While a demographic “bulge” of women of child-bearing age keeps the world’s population rising for now, continuing declines in fertility will cause the world’s population to stabilize by mid-century and then probably to begin falling.
Far from ballooning, each generation will be smaller than the last. So the ecological footprint of future generations could diminish. That means we can have a shot at estimating the long-term impact of children from different countries down the generations.
What I really like about this essay, though, is how well Pearce articulates the real problem, which is over-consumption. Population and consumption might appear to be intrinsically linked, but they're not. As Pearce points out, global consumption is increasing far faster than global population and the average American family of four uses far more land, far more water, far more energy and produces far more emissions than an Ethiopian family of 11.
This is important. I've heard many, many Americans express their fears about population growth over the years. Pearce's essay makes it clear that, when you do that, you're pretty much being a concern troll. The population problem, while still real, is well on its way to solving itself. The consumption problem, not so much. Population growth is a problem of the poor. Consumption growth is a problem of the rich (which, from a global perspective, includes pretty much everyone in the United States). So when you ignore consumption and pin the blame for global sustainability issues on population, what you're doing is blaming the 99% for the mistakes of the 1%.
Read Fred Pearce's entire essay on Yale Environment 360
A key component of antibiotic resistance is the over-use of antibiotics. We talk about this a lot in the context of over-the-counter antibacterial cleansers, but there's a doctor's office side to this story, as well.
When sick people come into a doctor's office, part of what they are looking for is psychological wellness. They want to feel like somebody has listened to them and is doing something to treat their illness. Sometimes, that means they ask their doctor for antibiotics, even if antibiotics aren't the right thing to treat what they have.
In the past, and sometimes still today, doctors have gone ahead and prescribed antibiotics almost like a placebo. It's hard to say no to something a patient really wants, especially when it's likely to make them feel better, if only because taking anything that seems to treat the problem will make a person feel better. But that is definitely not a good thing in the long term.
At KevinMD, family physician Dike Drummond offers some really nice advice for doctors who are struggling with how to make a patient feel better, but also want to avoid contributing to the growing antibiotic resistance problem. What I like best about Drummond's advice: It starts with empathy.
If you have a major challenge working up some empathy, one of two things is happening.
You are experiencing some level of burnout. Empathy is the first thing to go when you are not getting your needs met. This is a whole different topic, and “compassion fatigue” is a well-known early sign of significant burnout.
You are not fully present with the patient and their experience. In many cases this can be addressed by taking a big relaxing, releasing breath between each patient and consciously coming back into the present before opening the door.
There's a story making the rounds right now suggesting that the use of hair relaxers—products that are used more often by African American women than women of other ethnicities—might cause uterine fibroids—a painful condition experienced more often by African American women than women of other ethnicities.
Nobody really knows why African American women seem to be more prone to uterine fibroids, and, on the surface at least, this connection seems like it might make sense. Relaxers and other products contain hormones and chemicals that act like hormones. So, maybe, those things are getting absorbed into the body and leading to the growth of fibroids.
Trouble is: That's just speculation. And the evidence used to back it up is pretty flimsy. The study this story is based on looks at nothing but broad correlations: African American women have more fibroids and African American women use more of these hair care products. That's a problem, because broad correlations can be really, really misleading.
At The Urban Scientist blog, Danielle Lee (a scientist who has experience with both fibroids and hair relaxers) talks about why the "evidence" being presented in this case isn't even close to the same thing as "proof".
Parabens and phthalates can do some funky things. (I don’t trust these chemicals.) They are problematic and should be evaluated for safety, especially by the US Food and Drug Administration. Parabens can be an estrogen mimic – but only slightly it seems. But it’s everywhere – not just in Black hair care products like shampoos and perms. Parabens are preservatives in cosmetics and pharmaceuticals, so it’s in lotions, shaving gel, KY Jelly, makeup, even in food. Phthalates are what make plastics flexible, transparent, durable and strong. Exposure to these chemicals is coming from who-knows-how-many-sources: those dissolving plastic pill caplets, adhesives in bandages, toys, food packaging, even textiles, and paint.
My problem with this study is that it doesn’t eliminate all of these confounding and possibly conflicting variables. Again, what was the reason for hypothesizing hair care products for the disparity? It’s a leap – a huge leap and the data just doesn’t convince me. In my opinion, finding strong correlations of relaxer use among African-American women who happen to have fibroids is an artifact. Culturally, getting relaxers is a very typical hair-care regime among adult black women today. It’s a cultural phenomenon. So is being ashy and using lotion — which potentially has the same possible EDC risks as hair care products. These studies fall far short in making a connection between high occurrence of uterine fibroids and hair care rituals of Black women.
I see Lee pointing out a couple of problems here. One: A correlational study like this doesn't account for many, many other ways women might come into contact with the hypothetical causal agent. So it doesn't make much sense to tie causality to one, single source. Two: In a correlational study with this many variables, the connection you see might not mean what you think it means. The authors of this paper wonder whether hair care products might be causing fibroids. Okay. But it could be just that women who are more likely to have fibroids also, coincidentally, are part of a culture that uses a lot of these hair care products.
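That second problem is easy to demonstrate with a toy simulation. The numbers below are entirely made up (they're not from the study or from Lee's post): a hidden "culture" variable drives both relaxer use and fibroid risk, there is no causal link between the two, and yet a naive comparison of users versus non-users still shows a big gap.

```python
# Toy simulation: a hidden confounder can make two causally unrelated
# variables look strongly associated. All probabilities are invented.
import random

random.seed(42)

n = 10_000
relaxer_use = []
fibroids = []
for _ in range(n):
    confounder = random.random() < 0.5  # a shared cultural/demographic factor
    # Both outcomes depend on the confounder, but NOT on each other:
    relaxer_use.append(random.random() < (0.7 if confounder else 0.2))
    fibroids.append(random.random() < (0.6 if confounder else 0.2))

# Naive comparison: fibroid rates among users vs. non-users of the product
users = [f for r, f in zip(relaxer_use, fibroids) if r]
non_users = [f for r, f in zip(relaxer_use, fibroids) if not r]
rate_users = sum(users) / len(users)
rate_non = sum(non_users) / len(non_users)
print(f"fibroid rate among users:     {rate_users:.2f}")
print(f"fibroid rate among non-users: {rate_non:.2f}")
```

Users end up with a fibroid rate nearly twice that of non-users, even though, by construction, the product causes nothing. That's exactly the trap a correlational study can fall into.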
Correlational studies are important. They're a good first step in noticing patterns that can tell us something important about human health. But correlational studies can't be thought of as "proof" of anything. They're a way to notice a problem exists. But they aren't a good way of solving problems.
Unfortunately, correlational studies, like this one, make the news a lot. And, when they do, they're treated as though they represent scientific proof. That's a problem. Even if you don't have fibroids or use hair relaxers, you should read Danielle Lee's full post on this study. It'll give you a better idea about the sorts of questions you should be asking every time you see a scary science headline in the newspaper.
The Body Mass Index is a popular way to measure and assess whether someone is overweight or underweight. Basically, it's just your weight (in kilograms) divided by the square of your height (in meters). BMI is a simple system, but it does have some flaws. Over at the Obesity Panacea blog, Peter Janiszewski (who has a Ph.D. in exercise physiology) has a nice post explaining why BMI is sometimes useful, and also why it's not a great measurement of individual health.
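To make the arithmetic concrete, here's a minimal sketch of the calculation. The weight and height are illustrative figures I picked, not numbers from Janiszewski's post; the cut-offs are the standard WHO/CDC adult categories.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    # Standard WHO/CDC cut-offs for adults
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

# A muscular 113 kg (250 lb), 1.96 m (6'5") athlete:
b = bmi(113, 1.96)
print(round(b, 1), category(b))  # prints: 29.4 overweight
```

Notice that the formula takes no input at all for body composition, which is exactly the fat-versus-muscle complaint: a heavily muscled athlete lands a hair below "obese."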
Right now, I bet you're chomping at the bit with, "Yeah! The BMI doesn't distinguish between fat and muscle!" And you're right. It doesn't. A simple BMI measurement would tell you that The Rock is no different than an out-of-shape couch potato of the same height and weight. Obviously, that's a flaw in the system. But it's not the most important flaw, Janiszewski argues. In fact, he says, this well-known problem with BMI is the easiest one to work around. A simple visual check can tell your doctor whether your high BMI is due to excess muscle or excess fat.
The real problems with BMI, he says, are a lot more complicated. For one thing, the system doesn't distinguish between very real differences in health outcomes for different body types. "Pears" and "Apples" might have the same BMI, but with opposite health results. Even more troubling: You can change your lifestyle, and become objectively healthier, and your BMI might not budge.
This is particularly so if you adopt a physically active lifestyle, along with a balanced diet, but are not necessarily cutting a whole lot of calories. This lack of change in BMI or body weight is all too often interpreted as a failure, resulting in the disappointed individual resuming their inactive lifestyle and unhealthy eating patterns.
However, as we have argued most recently in a paper in the Canadian Journal of Cardiology, several lines of evidence suggest that weight loss or changes in BMI are not absolutely necessary to observe substantial health benefit from a healthy lifestyle. Thus, an apparent resistance to weight-loss should never be a reason for stopping your healthy behaviours.
The Centers for Disease Control and Prevention and the World Health Organization say that H5N1 bird flu kills some 60% of the human beings it manages to infect. Basically, it hasn't infected many people—because it can't spread easily from person to person—but most of the people it does infect die.
But this might not be the full story.
After I posted a summary of the current controversies surrounding H5N1 research, I got an interesting email from Vincent Racaniello, a professor of microbiology at Columbia University Medical Center. Racaniello points out that the 60% death rate statistics are based on people who show up at hospitals with serious symptoms of infection. So far, there've only been about 600 cases. And, yes, about 60% of them have died.
However, they don't necessarily represent everybody who has contracted H5N1.
A death rate is only as good as your count of infections. If you've got an inaccurate count of the number of people infected, your death rate is going to be wrong. And there's some evidence that might be the case with H5N1.
In a recent study of rural Thai villagers, sera from 800 individuals were collected and analyzed for antibodies against several avian influenza viruses, including H5N1, by hemagglutination-inhibition and neutralization assays. The results indicate that 73 participants (9.1%) had antibody titers against one of two different H5N1 strains. The authors conclude that ‘people in rural central Thailand may have experienced subclinical avian influenza virus infections’. A subclinical infection is one without apparent signs of illness.
If 9% of the rural Asian population has been subclinically infected with avian H5N1 influenza virus strains, it would dramatically change our view of the pathogenicity of the virus. Extensive serological studies must be done to determine the extent of human infection with avian H5N1 influenza viruses. Until we know how many individuals are infected with avian influenza H5N1, we must refrain from making dire conclusions about the pathogenicity of the virus.
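Racaniello's point is, at bottom, about the denominator of a fraction. Here's a quick sketch with made-up round numbers, in the ballpark of the figures above but purely illustrative:

```python
# Case-fatality rate = deaths / known infections, so an undercounted
# denominator inflates the rate. Illustrative numbers only.
deaths = 360             # ~60% of ~600 hospital-confirmed cases
confirmed_cases = 600

cfr_confirmed = deaths / confirmed_cases
print(f"{cfr_confirmed:.0%}")   # prints: 60%

# If serology showed, say, ten times as many subclinical infections,
# the same number of deaths would imply a much lower rate:
total_infections = confirmed_cases * 10
cfr_all = deaths / total_infections
print(f"{cfr_all:.0%}")         # prints: 6%
```

The deaths don't change at all; only our knowledge of how many people were infected does. That's why the serological surveys Racaniello calls for matter so much.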
We recently ran a guest post by Seth Roberts, detailing the case against tonsillectomies as a regularly prescribed treatment for sore throats. I was able to read Roberts' post before it went up here and comment on it, and Roberts made several changes I suggested before the post ran.
In general, I'm a big fan of basing medical treatment on evidence, rather than conjecture, tradition, or mythology. And, while I have a big problem with woo-woo (shorthand for attempts at medical treatment that are based primarily on conjecture, tradition, and mythology), I do think it's important to point out that woo-woo doesn't always come from hippies. Local doctors don't always follow the evidence, either. In fact, that's one big reason why "evidence-based medicine" has to be called that, rather than just medicine. So while I didn't agree with everything Roberts had to say, I thought his key point—tonsillectomy as a treatment for sore throats isn't actually strongly supported by evidence—was a valuable one.
That said, I R NOT A MEDICAL EXPERT. And neither is Roberts. Steven Novella, however, is a medical doctor and a clinical researcher. He has a very good blog post up that points out some important flaws in Roberts' post. Here's the gist of what he has to say: Roberts seems to have misunderstood some of the studies he linked to, and assigned too much importance to others. "Evidence" can mean a range of different things. Some evidence is better than others. Just because a study was published doesn't mean it's evidence worth paying attention to. And it is very easy for people to get confused by this distinction when they start trying to treat themselves with the help of Google.
For instance, Roberts provided a laundry list of potential complications of tonsillectomy and asked why the evidence-based Cochrane Review didn't talk about any of them. The problem:
He seems to take the approach of listing any possible hypothesized risk as if it is established.
The New York Times has a fascinating story about the current state of the science on weight loss, including the results of one recent (albeit small) study that suggests that the human body responds to weight loss by actively trying to regain weight—a finding that could help explain why it's so difficult to maintain significant weight loss, even when you are able to shed pounds.
Over the years, I've been really impressed with the stuff I've heard about microfinancing charities like Kiva. The idea of helping people in developing countries launch and support small businesses, changing their lives and the lives of their children, makes a lot of sense. And the personal stories that go with microfinancing are pretty appealing.
I'm starting to re-think my opinions on microfinancing, however, after reading some of the research done by GiveWell.org, an organization that casts an evidence-based eye on what different charities do and whether they actually get the results they claim.
It's not that microfinancing is bad, per se, GiveWell says. It's just that the system doesn't measure up to the hype. And if you've got a limited amount of money to spend on helping other people, there might be more effective ways to do it that produce more bang for your buck.
GiveWell has written a ton on this, but I'd recommend starting with a blog post of theirs from a couple of years ago called 6 Myths About Microfinance Charities that Donors Can Live Without. This piece provides a succinct breakdown of what questions you should be asking about microfinance charities, and provides lots and lots of links for deeper digging. The myth that surprised me the most:
Myth #6: microfinance works because of (a) the innovative “group lending” method; (b) targeting of women, who use loans more productively than men; (c) targeting of the poorest of the poor, who benefit most from loans.
Reality: all three of these claims are often repeated but (as far as we can tell) never backed up. The strongest available evidence is limited, but undermines all three claims.
The casual and close-range use of pepper spray on nonviolent protesters: It's not just morally bankrupt, it's also not evidence based!
Judy Stone is a doctor, infectious disease specialist, and the author of a book on how to properly conduct clinical research. She's got a guest post on Scientific American blog network looking at the scientific research that's been done to document the effects and safety of pepper spray, and how to treat exposure to pepper spray.
Shorter version: The evidence base behind the use of pepper spray, especially in the sort of contexts one is actually likely to encounter in the real world, is woefully limited. (It's a lot like tasers that way. In both cases, the research that does exist has mostly been done using physically fit, healthy, adult subjects who are not emotionally or physically distressed in any way at the moment they are hit. They're also being hit using manufacturer-recommended dosages and distances of application. Real-world data suggests there's a MASSIVE difference between the effects of that sort of scenario and, say, a terrified teenager being shot in the face at point-blank range. Or an old woman who has been walking quickly, trying to get away from police. Just to throw some hypotheticals out there.) Meanwhile, the evidence that does exist strongly suggests that police forces are currently using pepper spray in ways that are inappropriate and unsafe. Evidence. Not opinion.
Also: Liquid antacids seem to be the best way to alleviate the effects of pepper spray. Perhaps it's time to stock up on Maalox.
There are reports of the efficacy of capsaicin in crowd control, but little regarding trials of exposures. Perhaps this is because pepper spray is regulated by the Environmental Protection Agency as a pesticide, and not by the FDA.
The concentration of capsaicin in bear spray is 1-2%; it is 10-30% in “personal defense sprays.”
While the police might feel reassured by the study, “The effect of oleoresin capsicum ‘pepper’ spray inhalation on respiratory function,” I was not. This study met the “gold standard” of clinical trials, in that it was a “randomized, cross-over controlled trial to assess the effect of Oleoresin capsicum (OC) spray inhalation on respiratory function by itself and combined with restraint.” However, while the OC exposure showed no ill effect, only 34 volunteers were exposed to only 1 sec of Cap-Stun 5.5% OC spray by inhalation “from 5 ft away as they might in the field setting (as recommended by both manufacturer and local police policies).”
By contrast, an ACLU report, “Pepper Spray Update: More Fatalities, More Questions” found, in just two years, 26 deaths after OC spraying, noting that death was more likely if the victim was also restrained. This translated to 1 death per 600 times police used spray. (The cause of death was not firmly linked to the OC). According to the ACLU, “an internal memorandum produced by the largest supplier of pepper spray to the California police and civilian markets” concludes that there may be serious risks with more than a 1 sec spray. A subsequent Department of Justice study examined another 63 deaths after pepper spray during arrests; the spray was felt to be a “contributing factor” in several.
A review in 1996 by the Division of Epidemiology of the NC DHHS and OSHA concluded that exposure to OC spray during police training constituted an unacceptable health risk.
Remember kids: When you think "pepper-spraying cop" and "unacceptable health risk," you should also think, "Lt. John Pike."