Is coffee bad for you or good for you? Does acupuncture actually work, or does it produce a placebo effect? Do kids with autism have different microbes living in their intestines, or are their gut flora largely the same as neurotypical children? These are all good examples of topics that have produced wildly conflicting results from one study to another. (Side-note: This is why knowing what a single study says about something doesn't actually tell you much. And, frankly, when you have a lot of conflicting results on anything, it's really easy for somebody to pick the five that support a given hypothesis and not tell you about the 10 that don't.)
But why do conflicting results happen? One big factor is experimental design. Turns out, there's more than one way to study the same thing. How you set up an experiment can have a big effect on the outcome. And if lots of people are using different experimental designs, it becomes difficult to accurately compare their results. At the Wonderland blog, Emily Anthes has an excellent piece about this problem, using the aforementioned research on gut flora in kids with autism as an example.
For instance, in studies of autism and microbes, investigators must decide what kind of control group they want to use. Some scientists have chosen to compare the guts of autistic kids to those of their neurotypical siblings, while others have used unrelated children as controls. This choice of control group can influence the strength of the effect that researchers find, or whether they find one at all.
Scientists also know that antibiotics can have profound and long-lasting effects on our microbiomes, so they agree on the need to exclude children who have recently taken antibiotics from these studies. But what counts as "recently"? Within the last week? Month? Three months? Each investigator has to make his or her own call when designing a study.
Then there’s the matter of how researchers collect their bacterial samples. Are they studying fecal samples? Or taking samples from inside the intestines themselves? The bacterial communities may differ in samples taken from different places.
Are pesticides helpful things that allow us to produce more food, and keep us safe from deadly diseases? Or are they dangerous things that kill animals and could possibly be hurting us in ways we haven't even totally figured out just yet?
Actually, they're both.
This week, scientists released a research paper looking at the health benefits (or lack thereof) of organic food—a category which largely exists as a response to pesticide use in agriculture. Meanwhile, Rachel Carson's book Silent Spring, which forced Americans to really think about the health impacts of pesticides for the first time, was published 50 years ago this month.
The juxtaposition really drove home a key point: We're still trying to figure out how to balance benefits and risks when it comes to technology. That's because there's no easy, one-size-fits-all answer. Should we use pesticides, or should we not? The data we look at is objective, but the answer is, frankly, subjective. It also depends on a wide variety of factors: which specific pesticide we're talking about, where it's being used and how it's being applied, and how well we've trained the people applying it in safe, proper use. It's complicated. And it's not just about how well the tools work. It's also about how we use them.
“It is not my contention,” Carson wrote in Silent Spring, “that chemical insecticides must never be used. I do contend that we have put poisonous and biologically potent chemicals indiscriminately into the hands of persons largely or wholly ignorant of their potentials for harm. We have subjected enormous numbers of people to contact with these poisons, without their consent and often without their knowledge.”
Over at Download the Universe, Ars Technica science editor John Timmer reviews a science ebook whose science leaves something to be desired. Written by J. Marvin Herndon, a physicist, Indivisible Earth presents an alternate theory that ostensibly competes with plate tectonics. Instead of Earth having a molten core and a moveable crust, Herndon proposes that this planet began its existence as the core of a gas giant, like Jupiter or Saturn. Somehow, Earth lost its thick layer of gas and the small, dense core expanded, cracking as it grew into the continents we know today. What most people think are continental plate boundaries are, to Herndon, simply seams where bits of planet ripped apart from one another.
The problem is that Herndon doesn't offer a lot of evidence to support this idea.
Back when the Earth was at the center of a gas giant, Herndon thinks, the intense pressure of the massive atmosphere compressed the gas giant's rocky core so that it shrank to the point where its surface was completely covered by what we now call continental plates. In other words, the entire surface of our present planet was once much smaller, and all land mass.
I did a back-of-the-envelope calculation of this, figuring out the radius of a sphere that would have the same surface area as our current land mass. It was only half the planet's present size. Using that radius to calculate the sphere's volume, it's possible to figure out the density (assuming a roughly current mass). That produced a figure six times higher than the Earth's current density — and about three times that of pure lead. I realize that a lot of the material in the Earth can be compressed under pressure, but I'm pretty skeptical that it can compress that much. And, more importantly, if Herndon wants to convince anyone that it did, this density difference is probably the sort of thing he should be addressing. He's not bothered; the idea that the continents once covered the surface of the Earth was put forward in 1933, and that's good enough for him.
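For anyone who wants to check the arithmetic, here's a minimal sketch of the same back-of-the-envelope calculation in Python. It assumes rounded textbook values (about 149 million square kilometers of land, Earth's mass of 5.97 × 10^24 kg, a mean density of 5,514 kg/m³, and lead at 11,340 kg/m³) rather than whatever figures Timmer actually used, but it lands in the same place.

```python
import math

# Rounded textbook values (assumptions, not Timmer's exact inputs)
LAND_AREA = 1.49e14      # m^2, Earth's present continental surface area
EARTH_MASS = 5.97e24     # kg
EARTH_RADIUS = 6.371e6   # m
EARTH_DENSITY = 5514.0   # kg/m^3, Earth's mean density
LEAD_DENSITY = 11340.0   # kg/m^3

# Radius of a sphere whose entire surface equals today's land area:
# solve 4 * pi * r^2 = LAND_AREA for r
r = math.sqrt(LAND_AREA / (4 * math.pi))
print(f"Radius: {r / 1000:.0f} km "
      f"({r / EARTH_RADIUS:.2f} of Earth's present radius)")

# Density that smaller sphere would need in order to hold Earth's mass
volume = (4 / 3) * math.pi * r ** 3
density = EARTH_MASS / volume
print(f"Density: {density:,.0f} kg/m^3")
print(f"  {density / EARTH_DENSITY:.1f}x Earth's mean density")
print(f"  {density / LEAD_DENSITY:.1f}x the density of pure lead")
```

Run as written, this prints a radius of roughly 3,400 km (about 0.54 of Earth's) and a density near 35,000 kg/m³, which is where the "six times Earth's density, three times pure lead" figures come from.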
Science in fiction affects our ability to understand science in real life. For instance, you might already be familiar with the idea that detective shows on TV, particularly forensics shows like CSI, might be influencing what juries expect to see in a courtroom.
This is called the "CSI effect," and it's hotly debated. Some prosecutors think it has a real impact on jury decisions: if jurors don't get the fancy, scientific evidence they've been conditioned to expect, they won't convict. Empirical evidence, though, seems to show a more complicated pattern. Surveys of more than 2,000 Michigan jurors found that, while people strongly expected to see high-tech forensic evidence during trials, that expectation probably had more to do with the general proliferation of technology throughout society than with any particular TV show. More interestingly, that broad expectation didn't seem to definitively influence how jurors voted in any specific trial. In other words: The jury is still out. (*Puts on sunglasses*)
A FRONTLINE documentary that airs tomorrow centers on an interesting corollary to this issue: whether or not shows like CSI lead juries to expect more technology, they do present a wildly misleading portrait of how accurate that technology is. The reality is that many of the tools and techniques used in detective work have never been scientifically validated. We don't know that they actually tell us what they purport to tell us. Other forensic technologies do work, but only if you use them right, and there's no across-the-board standard guaranteeing that happens.
Even ideas you think you can trust implicitly—like fingerprint evidence—turn out to have serious flaws that are seriously under-appreciated by cops, lawyers, judges, and juries.
Brandon Mayfield, an Oregon lawyer, was at the center of international controversy in 2004 after the FBI and an independent analyst incorrectly matched his prints to a partial print found on a bag of detonators from the Madrid terrorist bombings.
Cognitive neuroscientist Itiel Dror asked five fingerprint experts to examine what they were told were the erroneously matched prints of Mayfield. In fact, they were re-examining prints from their own past cases. Only one of the experts stuck by their previous judgment. Three reversed their previous decisions, and one deemed the prints “inconclusive.”
Dror’s argument is that these competent and well-meaning experts were swayed by “cognitive bias”: what they knew (or thought they knew) about the case in front of them colored their analysis. The Mayfield case and studies like Dror’s have changed how fingerprints are used in the criminal justice system. The FBI no longer testifies that fingerprints are infallible.
The Real CSI episode of FRONTLINE airs tomorrow, April 17th. Check out the FRONTLINE website for more information.
Image: Fingerprint developed with black magnetic powder on a cool mint Listerine oral care strip, a Creative Commons Attribution (2.0) image from jackofspades's photostream.