Except where indicated, Boing Boing is licensed under a Creative Commons License permitting non-commercial sharing with attribution
Just a few minutes ago, researchers with NASA's MESSENGER mission announced the publication of data that strongly suggests the poles of Mercury contain significant quantities of frozen water.
On the one hand, this is not exactly new news. The possibility of water on Mercury has been a topic of research for something like 20 years. And scientific discoveries tend to move in little mincing steps, not giant leaps, so there have been lots of previous announcements about evidence supporting the hypothesis of water on Mercury — including very similar announcements from the MESSENGER team in December 2011 and March 2012. Your life will not change in any significant way because there is frozen water on Mercury. You probably won't even make a note to tell your children where you were the day NASA announced that ice most likely existed there.
But that doesn't mean this news isn't damned exciting. And it doesn't mean that the scientists involved shouldn't be giddy about it. We are, after all, talking about a mission that sent a spacecraft into orbit around another planet and has quite likely found frozen water sitting on a landscape that is hot enough to melt lead. What's more, they think that ice is covered in places by a thin layer of some coal or tar-like organic material. That is huge news. It's going to change textbooks. And because the scientists think both the ice and the organic material got to Mercury via collisions with asteroids and comets, it's going to be an important part of our ongoing efforts to understand how life begins on planets like Earth.
All of this makes for a really nice, topical lead-in to an essay Robert Gonzalez published on io9 today. It's totally reasonable to be frustrated by the recent whiplash of hearing that Curiosity discovered something "Earth-shattering" on Mars, only to have that announcement quickly revised to something "interesting" and/or "not insignificant". But, Gonzalez argues, it's also reasonable for scientists to look at something that is merely not insignificant from the public perspective and see it, from their own perspective, as groundbreaking. In fact, he says, we want more scientists who get excited about their work, not fewer.
Even if you don't immediately recognize the words "prion" or "Kuru", the history has seeped into popular culture, like a horrifying fairy tale that just happens to be true. Once, there was a tribe in New Guinea that ate the dead. It wasn't the kind of fakey cannibalism you see in the movies, with hunters rushing out to spear people for sustenance. Instead, it was about respecting your elders. When a member of your family died, you ate them—you took a part of them into yourself. And that included the brain.
But over time, these people found themselves plagued with a terrible illness. Children and perfectly healthy adults, usually women, would suddenly begin to lose control of their limbs. They would jerk and shudder. Within weeks, they wouldn't be able to stand up at all. And then they died. Everybody who had those symptoms died.
Eventually, Western scientists would learn the awful truth. When the people from New Guinea ate their ancestors they were also eating a disease. It attacked their brains—riddling the tissue with holes. The New Guineans, the Fore people, called the disease kuru. In their language it meant "trembling" or "fear".
Today, we know a little bit more about the disease, kuru. We know it's not caused by a virus or a bacterium or a fungus. We know it's related to other brain-damaging diseases, including Creutzfeldt-Jakob disease, which turns healthy adults senile and kills them within a year of the onset of symptoms; scrapie, which affects sheep; and the dreaded bovine spongiform encephalopathy — mad cow disease.
Tying all these diseases together is a scary little something called a prion. On August 16th, I attended a lecture by Jay Ingram, a Canadian journalist who has written a book about prion diseases, called Fatal Flaws. The lecture taught me a lot about prions, but it also taught me about some of the flaws inherent in trying to live-tweet a lecture as I'm listening to it. When the subject is so scary—and so confusing—even well-intentioned live tweets can go awry.
By now, many of you are probably aware that human behavior is one of the key factors behind some of the massive forest fires we've seen in recent years. The basic story goes like this: Under a natural cycle, periodic small fires sweep through forests, burning through small trees and dry brush. But if you prevent those fires from happening—as humans have done for around a century at this point—all that highly flammable stuff builds up. In the end, you're left with a giant tinderbox of a forest. The next time a fire does happen there, it's almost guaranteed to be much, much bigger and more destructive than the natural fires that forest is adapted to.
NPR has a very nice story about the science and history behind this problem, which forest fire experts call "The Smokey Bear Effect", after the cartoon bear the U.S. Forest Service has long used as part of its fire prevention campaign.
Its ill-advised fire prevention campaign.
And it was the experts who approved the all-out ban on fires in the Southwest. They got it wrong. That's the view of fire historian Stephen Pyne.
"The irony here is that the argument for setting these areas aside as national forests and parks was, to a large extent, to protect them from fire," Pyne says. "Instead, over time they became the major habitat for free-burning fire."
So instead of a few dozen trees per acre, the Southwestern mountains of New Mexico, Arizona, Colorado and Utah are now choked with trees of all sizes, and grass and shrubs. Essentially, it's fuel.
Over the past several years, even as fewer fires have struck the Southwest, they've burned more land. The U.S. Forest Service now spends about half its budget on firefighting.
It's worth noting that this is also a great example of why it's difficult to attribute specific events to global climate change. Increasingly hot, dry summers have certainly been a factor in creating the forest fires we've seen over the last few years. The last decade has been the hottest on record, and that has consequences. But it's not the only thing going on here. Climate change doesn't happen in a vacuum. Its effects interact with the effects of other decisions we make (and other natural events that happen to be taking place). So it's not enough to say what climate change will do. In order to make accurate predictions of risk, we have to think about the bigger picture and how climate change fits into it.
Via Finn Ryan
I'm completely fascinated by stories from the early days of electricity ... specifically, stories of experiments that went horribly (and sometimes, comically) wrong.
For me, it's a great reminder that, no matter how much of a sure-thing a technology like electricity seems in retrospect, there was always a point in history where the future was uncertain, where mistakes were made, and where even the "experts" didn't totally know what they were doing. In general, I think it's good to remind ourselves that the real history of innovation is a lot messier than high-school level textbooks make it out to be.
In this short video, retired University of Missouri engineering professor Michael Devaney tells the tale of how a group of engineering students—armed with an early-model Edison electric generator—burned their school's main academic building to the ground. At the heart of the disaster: An attempt to see how many light bulbs the generator could light at once. To paraphrase Devaney, everything was going okay until the fire reached the ROTC's supply of cannon powder.
Read more on my thoughts about the messy history of innovation, published in last weekend's New York Times Magazine.
Thanks to Robert Solorzano and The Missourian for the tip on this story!
A driver tried to drive into the Chaussee d'Antin La Fayette Metro station in Paris on Tuesday, reportedly having mistaken it for a subterranean parking garage. The driver, who gave his name as Johan, told AFP: "There's a sign saying 'Haussmann Parking' right in front (of the Metro entrance), and ... I made a mistake."
Parisian drives car down Metro stairs [AFP. Photo: REUTERS/Jacky Naegelen]
Science is the best method we have for understanding the world. That doesn't mean that everything scientists ever think they've figured out is correct. And it doesn't mean that we're doing science in the best way possible right now.
For a great illustration of this, I recommend reading Jonah Lehrer's new piece in WIRED, about the problems we run into as we learn more about individual parts of complex systems and then assume that we understand the big picture of how those parts work together. A lot of scientific research, particularly in medicine, operates off assumptions like this and it can lead to big mistakes. Case in point: Back pain. In this excerpt, Lehrer explains how MRI technology that allowed doctors to get a better look at the spines of people with back pain led them to make inaccurate conclusions about what was causing the back pain.
The lower back is an exquisitely complicated area of the body, full of small bones, ligaments, spinal discs, and minor muscles. Then there’s the spinal cord itself, a thick cable of nerves that can be easily disturbed. There are so many moving parts in the back that doctors had difficulty figuring out what, exactly, was causing a person’s pain. As a result, patients were typically sent home with a prescription for bed rest.
This treatment plan, though simple, was still extremely effective. Even when nothing was done to the lower back, about 90 percent of people with back pain got better within six weeks. The body healed itself, the inflammation subsided, the nerve relaxed.
Over the next few decades, this hands-off approach to back pain remained the standard medical treatment. That all changed, however, with the introduction of magnetic resonance imaging in the late 1970s. These diagnostic machines use powerful magnets to generate stunningly detailed images of the body’s interior. Within a few years, the MRI machine became a crucial diagnostic tool.
The view afforded by MRI led to a new causal story: Back pain was the result of abnormalities in the spinal discs, those supple buffers between the vertebrae. The MRIs certainly supplied bleak evidence: Back pain was strongly correlated with seriously degenerated discs, which were in turn thought to cause inflammation of the local nerves. Consequently, doctors began administering epidurals to quiet the pain, and if it persisted they would surgically remove the damaged disc tissue.
But the vivid images were misleading. It turns out that disc abnormalities are typically not the cause of chronic back pain. The presence of such abnormalities is just as likely to be correlated with the absence of back problems, as a 1994 study published in The New England Journal of Medicine showed. The researchers imaged the spinal regions of 98 people with no back pain. The results were shocking: Two-thirds of normal patients exhibited “serious problems” like bulging or protruding tissue. In 38 percent of these patients, the MRI revealed multiple damaged discs. Nevertheless, none of these people were in pain. The study concluded that, in most cases, “the discovery of a bulge or protrusion on an MRI scan in a patient with low back pain may frequently be coincidental.”
This is a complicated problem without a clear solution right now. But we definitely need to have discussions like this so that we can work toward making science and medicine better.
Via Espen in Submitterator
I love Gary Schwitzer, a former journalism professor at the University of Minnesota and a key advocate for better health and medical reporting at HealthNewsReview.org. Schwitzer has a quick list of the most common mistakes reporters make when writing about medical science, and I think it's something that everybody should take a look at.
Why does this bit of journalism inside-baseball matter to you? Simple. If you know how journalists are most likely to screw up, you'll be less likely to be led astray by those mistakes. And that matters a lot, especially when it comes to health science, where people are likely to make important decisions based partly on what they read in the media.
Absolute versus relative risk/benefit data
Many stories use relative risk reduction or benefit estimates without providing the absolute data. So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).
Association does not equal causation
A second key observation is that journalists often fail to explain the inherent limitations in observational studies – especially that they cannot establish cause and effect. They can point to a strong statistical association but they can't prove that A causes B, or that if you do A you'll be protected from B. But over and over we see news stories suggesting causal links. They use active verbs in inaccurately suggesting established benefits.
How we discuss screening tests
The third recurring problem I see in health news stories involves screening tests. ... “Screening,” I believe, should only be used to refer to looking for problems in people who don’t have signs or symptoms or a family history. So it’s like going into Yankee Stadium filled with 50,000 people about whom you know very little and looking for disease in all of them. ... I have heard women with breast cancer argue, for example, that mammograms saved their lives because they were found to have cancer just as their mothers did. I think that using “screening” in this context distorts the discussion because such a woman was obviously at higher risk because of her family history. She’s not just one of the 50,000 in the general population in the stadium. There were special reasons to look more closely in her. There may not be reasons to look more closely in the 49,999 others.
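The relative-versus-absolute distinction in the first item above is pure arithmetic, and it's easy to check for yourself. Here's a minimal sketch (using the hip-fracture numbers from the excerpt; the function name is mine, not Schwitzer's):

```python
def risk_reduction(events_control, events_treated, n):
    """Compare relative and absolute risk reduction for two groups of size n."""
    risk_control = events_control / n  # e.g. 2 fractures per 100 untreated women
    risk_treated = events_treated / n  # e.g. 1 fracture per 100 treated women
    arr = risk_control - risk_treated  # absolute risk reduction
    rrr = arr / risk_control           # relative risk reduction
    nnt = 1 / arr                      # number needed to treat for one to benefit
    return arr, rrr, nnt

# Hip-fracture example: 2 in 100 untreated vs. 1 in 100 treated.
arr, rrr, nnt = risk_reduction(2, 1, 100)
print(f"Relative risk reduction: {rrr:.0%}")  # 50%
print(f"Absolute risk reduction: {arr:.0%}")  # 1%
print(f"Number needed to treat:  {nnt:.0f}")  # 100
```

Same drug, same trial: "cuts fracture risk in half" and "helps one woman in a hundred" are both true, which is exactly why a story that reports only the relative number misleads.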
In early 2011, Felisa Wolfe-Simon published a scientific paper purporting to show evidence that bacteria from California's Mono Lake could, if pressed, live without the essential element phosphorus and use arsenic instead.
The story was wildly misconstrued in the press. (No, nobody ever found alien life happening naturally in Mono Lake.) And the evidence and methodology of Wolfe-Simon's research were roundly trounced, not just in academic journals, but also in blogs.
And that's all left Wolfe-Simon in a very weird position. She's certainly not the first scientist to publish a high-profile paper that other researchers tore to shreds. But, because the "arsenic life" story was so high-profile, she's now worried her career might be over. Is that fair? In Popular Science, Tom Clynes presents a nuanced profile of Felisa Wolfe-Simon that doesn't really answer that question definitively. Frankly, there probably isn't a really clear black/white answer out there. But Clynes does do a really good job of introducing us to Wolfe-Simon as a person, and her story exposes flaws in the peer-review process and the traditional avenues of scientific debate, indicts the media and PR professionals for creating the very sensational story that led to such a harsh response, and shows what can happen when a scientist is unprepared to deal with the public presentation of their own work.
In other words, this story is about lots of people making mistakes, including, but not limited to, Felisa Wolfe-Simon.
In June, Science reported that Wolfe-Simon had left Oremland’s USGS laboratory to look for a location with better molecular and genetic research facilities. “Actually,” Wolfe-Simon says, “I didn’t leave out of choice. Ron basically evicted me from the group. It was a political decision on his part that I don’t understand, and I didn’t see it coming.” Although she received a NASA fellowship in 2010 that provides support through 2013, she is still seeking a new home for her work.
I find it hard not to feel sympathy for her. In a matter of weeks she was catapulted to fame, then singled out and assaulted with professional and personal criticism, some of which resulted from missteps beyond her control. Wolfe-Simon is an early-career researcher in a field dominated by older men. Few scientists, no matter how established, would have the skills to navigate the situation that she found herself in. What made the level of criticism so extraordinary is that the paper, in itself, is not so flawed that it should not have been published. The argument was compelling, the conclusions were measured, the data was thorough, and the paper made it through the same peer-review process as other articles in Science.
It will take a few years to better answer the questions surrounding GFAJ-1. In the meantime, Benner—who says he would be “more than astonished” if arsenic replaces phosphorus in any genetically relevant molecule in GFAJ-1—says Wolfe-Simon’s hypothesis is ultimately useful if it motivates people to look in new places and ask bigger questions.
Wolfe-Simon says the paper’s publicity attracted new collaborators who she wouldn’t have otherwise met, some of whom are already analyzing GFAJ-1. And her fame has played out in surprising ways. Recently, her husband, Jonathan, an engineer, was speaking with a colleague who asked if he happened to be married to Felisa Wolfe-Simon. When he said yes, the colleague said, “My seven-year-old daughter dressed up as Felisa for her school’s science day!” The girl wore a sun hat, with her pants rolled up and flip-flops on her feet, dressed for a day wading the waters of Mono Lake in search of bacteria.