Video of a dog barking at herself barking at herself. On video. (via Laughing Squid)
Edward Sharp-Paul's An Opinion Piece On A Controversial Topic is some pretty awesome meta ("I was inspired to write this piece by Currently Fashionable Polemicist, who summarised the Issue better than I could when they said 'oversimplification that makes me feel smart'. I have a strong opinion on this Issue, and my sharing it with you at this time is in no way attributable to opportunism on my part."). But it really leaps into full-flight when you hit the comments ("Do not understand why you wrote about this Issue, when this other Issue exists.").
We also have new Community Guidelines for the Boing Boing BBS forums launching today.
Thank you all for your continued support of Boing Boing!
NEW YORK—Media consumers across the United States are reporting this week that sponsored content—articles and videos paid for by advertisers and distributed by print and digital publications—is easily the coolest fucking published material anyone could ever read or watch.
“I love, love, fucking love sponsored content,” said news and entertainment reader Erica Olson, adding that when she can tell a corporation is financially behind a piece of writing, she is even more inclined to click on it. “First off, it’s cool. That’s not debatable. Second, I don’t find it in any way insulting to my intelligence. In fact, it makes me feel smarter. And third, did I mention that sponsored content is just really fucking cool?”
A fellow was purportedly recording a police chase on TV when the chase went right by his house. In the comments thread on a much longer video of the chase, a commenter says that at 13:05, you can see the fellow looking out his window (screengrab at right).
Is coffee bad for you or good for you? Does acupuncture actually work, or does it produce a placebo effect? Do kids with autism have different microbes living in their intestines, or are their gut flora largely the same as neurotypical children? These are all good examples of topics that have produced wildly conflicting results from one study to another. (Side-note: This is why knowing what a single study says about something doesn't actually tell you much. And, frankly, when you have a lot of conflicting results on anything, it's really easy for somebody to pick the five that support a given hypothesis and not tell you about the 10 that don't.)
But why do conflicting results happen? One big factor is experimental design. Turns out, there's more than one way to study the same thing. How you set up an experiment can have a big effect on the outcome. And if lots of people are using different experimental designs, it becomes difficult to accurately compare their results. At the Wonderland blog, Emily Anthes has an excellent piece about this problem, using the aforementioned research on gut flora in kids with autism as an example.
For instance, in studies of autism and microbes, investigators must decide what kind of control group they want to use. Some scientists have chosen to compare the guts of autistic kids to those of their neurotypical siblings while others have used unrelated children as controls. This choice of control group can influence the strength of the effect that researchers find–or whether they find one at all.
Scientists also know that antibiotics can have profound and long-lasting effects on our microbiomes, so they agree on the need to exclude children from these studies who have taken antibiotics recently. But what’s recently? Within the last week? Month? Three months? Each investigator has to make his or her own call when designing a study.
Then there’s the matter of how researchers collect their bacterial samples. Are they studying fecal samples? Or taking samples from inside the intestines themselves? The bacterial communities may differ in samples taken from different places.
Some pseudoscience is pretty obvious. I think most of us are comfortable saying that the world will probably not end this December, in accordance with any ancient prophecy. But distinguishing fact from fiction isn't always simple. In fact, "fact from fiction" might be too simple a way to even frame the question. In reality, we're sometimes tasked with spotting misapplications of real science. Sometimes, we have to tell the difference between a complicated thing that nobody yet understands very well but which is likely to be true, and a complicated thing that nobody yet understands very well but which is not likely to be true.
Basically, it's messy.
Emily Willingham at Forbes has some helpful hints for how to make these distinctions. She offers ten questions that can serve as guidelines for approaching new topics you're skeptical of — questions that, taken all together, can help you see the patterns of pseudoscience and make informed decisions for yourself and your family.
3. What kind of language does it use? Does it use emotion words or a lot of exclamation points or language that sounds highly technical (amino acids! enzymes! nucleic acids!) or jargon-y but that is really meaningless in the therapeutic or scientific sense? If you’re not sure, take a term and google it, or ask a scientist if you can find one. Sometimes, an amino acid is just an amino acid. Be on the lookout for sciencey-ness. As Albert Einstein once pointed out, if you can’t explain something simply, you don’t understand it well. If peddlers feel that they have to toss in a bunch of jargony science terms to make you think they’re the real thing, they probably don’t know what they’re talking about, either.
9. Were real scientific processes involved? Evidence-based interventions generally go through many steps of a scientific process before they come into common use. Going through these steps includes performing basic research using tests in cells and in animals, clinical research with patients/volunteers in several heavily regulated phases, peer-review at each step of the way, and a trail of published research papers. Is there evidence that the product or intervention on offer has been tested scientifically, with results published in scientific journals? Or is it just sciencey-ness espoused by people without benefit of expert review of any kind?
Read the rest at Willingham's Forbes blog, The Science Consumer
Here's an issue we don't talk about enough. Every year, peer-reviewed research journals publish hundreds of thousands of scientific papers. But every year, several hundred of those are retracted — essentially, unpublished. There are a number of reasons retraction happens. Sometimes, the researchers (or another group of scientists) will notice honest mistakes. Sometimes, other people will prove that the paper's results were totally wrong. And sometimes, scientists misbehave, plagiarizing their own work, plagiarizing others, or engaging in outright fraud. Officially, fraud only accounts for a small proportion of all retractions. But the number of annual retractions is growing, fast. And there's good reason to think that fraud plays a bigger role in science than we like to think. In fact, a study published a couple of weeks ago found misconduct behind three-quarters of all retracted papers. Meanwhile, previous research has shown that, while only about 0.02% of all papers are retracted, 1–2% of scientists admit to having invented, fudged, or manipulated data at least once in their careers.
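To get a feel for the gap those numbers imply, here's a rough back-of-the-envelope sketch using only the figures quoted above. (Caveat: it loosely compares a per-paper rate to a per-scientist rate, so treat the ratio as illustrative, not rigorous.)

```python
# Figures quoted in the post above; illustrative arithmetic only.
retraction_rate = 0.0002      # ~0.02% of all papers are retracted
misconduct_share = 0.75       # misconduct found in ~3/4 of retractions

# Fraction of all papers retracted for misconduct
fraud_retraction_rate = retraction_rate * misconduct_share  # = 0.015%

# 1-2% of scientists admit to fabricating or manipulating data
self_reported_low, self_reported_high = 0.01, 0.02

print(f"Misconduct-driven retractions: {fraud_retraction_rate:.3%} of papers")
print(f"Self-reported misconduct runs roughly "
      f"{self_reported_low / fraud_retraction_rate:.0f}-"
      f"{self_reported_high / fraud_retraction_rate:.0f}x higher")
```

Even with the apples-to-oranges caveat, the two-orders-of-magnitude gap is why many observers suspect retractions catch only a fraction of actual misconduct.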
The trouble is that dealing with this isn't as simple as uncovering a shadowy conspiracy or two. That's not really the kind of misconduct we're talking about here.
Even if you don’t immediately recognize the words “prion” or “Kuru”, the history of these pathologies has seeped into popular culture like a horrifying fairy tale. But it’s true: a tribe in New Guinea ate their dead, not as Hollywood-style savages, but as an act of respect. Upon death, you took a part of the person into yourself. And that included the brain.
How you read matters as much as what you read. That's because nothing is written in a vacuum. Every news story or blog post has a perspective behind it, a perspective that shapes what you are told and how that information is conveyed. This is not, necessarily, a bad thing. Having a perspective doesn't mean being sensationalistic, or deceitful, or spreading propaganda. It can mean those things, but it doesn't have to. In fact, I'm fairly certain that it's impossible to tell any story without some kind of perspective. When you relate facts, even in your personal life, you make choices about what details you will emphasize, what emotions you'll convey, who you will speak to—and all of those decisions are based on your personal perspective. How we tell a story depends on what we think is important.
Unfortunately, sometimes, perspective can be misleading. That's why it's important to be aware that perspective exists. If you look at what you're reading, you can see the decisions the author made, you can get an idea of what perspective they were trying to convey, and you will know whether that perspective is likely to distort the facts.
Emily Willingham is a scientist who blogs about science for the general public. Over at Double X Science, she's come up with a handy, six-step guide for reading science news stories. These rules are a great tool for peeking behind the curtain, and learning to think about the perspective behind what you read. In the post, she explains why each of these rules is important, and then applies them to a recent news story about chemical exposure and autism.
3. Look at the words in the articles. Suspected. Suggesting a link. In other words, what you're reading below those headlines does not involve studies linking anything to autism. Instead, it's based on an editorial listing 10 compounds [PDF] that the editorial authors suspect might have something to do with autism (NB: Both linked stories completely gloss over the fact that most experts attribute the rise in autism diagnoses to changing and expanded diagnostic criteria, a shift in diagnosis from other categories to autism, and greater recognition and awareness--i.e., not to genetic changes or environmental factors. The editorial does the same). The authors do not provide citations for studies that link each chemical cited to autism itself, and the editorial itself is not focused on autism, per se, but on "neurodevelopmental" derailments in general.
4. Look at the original source of information. The source of the articles is an editorial, as noted. But one of these articles also provides a link to an actual research paper. The paper doesn't even address any of the "top 10" chemicals listed but instead is about cigarette smoking. News stories about this study describe it as linking smoking during pregnancy and autism. Yet the study abstract states that they did not identify a link, saying "We found a null association between maternal smoking in pregnancy and ASDs and the possibility of an association with a higher-functioning ASD subgroup was suggested." In other words: No link between smoking and autism. But the headlines and how the articles are written would lead you to believe otherwise.
The one rule of Willingham's that I would question is "Ask a Scientist", not because it's bad advice, but because it's not something most people can easily do. Twitter helps, but only if you're already tied into social networks of scientists and science writers. Again, most people aren't. If you want to connect to these networks, I'd recommend starting out by picking up a copy of The Open Laboratory, an annual anthology of the best science writing on the web. Use that to find scientists who write for the public and whose voice you enjoy. Add them to your social networks, and then add the people that those scientists are spending a lot of time talking to. That's the easiest way to connect with some trustworthy sources. And remember: An expert in one subject is not an expert in every subject. It doesn't make sense to ask a mechanical engineer for their opinion on cancer treatments. It doesn't make sense to ask an oncologist about building better engines.
Buy The Open Laboratory 2010 (the 2011 edition hasn't been published yet).
I am very pleased to announce that two BoingBoing posts made it into The Open Laboratory 2012, an anthology of the best science writing on the Internet.
The first was written by Lee Billings, an excellent guest blogger we hosted back in February. Lee wrote a lot of great posts about Kepler and the hunt for exoplanets and deserves huge kudos. Incredible Journey: Can We Reach the Stars Without Breaking the Bank? is the one that will be in the anthology.
Today, the fastest humans on Earth and in history are three elderly Americans, all of whom Usain Bolt could demolish in a footrace. They're the astronauts of Apollo 10, who in 1969 re-entered the Earth's atmosphere at a velocity of 39,897 kph upon their return from the Moon. At that speed you could get from New York to Los Angeles in less than six minutes. Seven years after Apollo 10, we hurled a probe called Helios II into an orbit that sends it swinging blisteringly deep into the Sun's gravity well. At its point of closest approach, the probe travels at almost 253,000 kph—the fastest speed yet attained by a manmade object. The fastest outgoing object, Voyager I, launched the year after Helios II. It's now almost 17 billion kilometers away, and travels another 17 kilometers further away each and every second. If it were headed toward Alpha Centauri (it's not), it wouldn't arrive for more than 70,000 years. Even then, it wouldn't be able to slow down. Of the nearest 500 stars scattered like sand around our own, most would require hundreds of thousands of years (or more) to reach with current technology.
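The quoted figures can be sanity-checked with a few lines of arithmetic. Note the distances below are my own round-number assumptions (a great-circle New York to Los Angeles distance of roughly 3,944 km, and Alpha Centauri at about 4.37 light-years), not values from the excerpt.

```python
# Back-of-the-envelope checks on the speeds quoted above.
apollo10_kph = 39_897                    # Apollo 10 re-entry speed
ny_to_la_km = 3_944                      # assumed great-circle distance
minutes_coast_to_coast = ny_to_la_km / apollo10_kph * 60
# roughly 5.9 minutes -- consistent with "less than six minutes"

voyager_km_per_s = 17                    # Voyager I recession rate
km_per_light_year = 9.461e12
alpha_centauri_km = 4.37 * km_per_light_year   # assumed distance
seconds_per_year = 365.25 * 24 * 3600
years_to_alpha_centauri = alpha_centauri_km / voyager_km_per_s / seconds_per_year
# roughly 77,000 years -- consistent with "more than 70,000 years"

print(f"{minutes_coast_to_coast:.1f} minutes coast to coast")
print(f"{years_to_alpha_centauri:,.0f} years to Alpha Centauri")
```

Both quoted claims hold up under these assumptions, which is part of what makes the excerpt such effective science writing.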
Our second post is one of mine: Nuclear Energy 101: Inside the Black Box of Nuclear Power Plants. It's from our Fukushima coverage, and was published on March 12, a day after the nuclear reactors in Fukushima were damaged by an earthquake and tsunami.
For the vast majority of people, nuclear power is a black box technology. Radioactive stuff goes in. Electricity (and nuclear waste) comes out. Somewhere in there, we're aware that explosions and meltdowns can happen. Ninety-nine percent of the time, that set of information is enough to get by on. But, then, an emergency like this happens and, suddenly, keeping up-to-date on the news feels like you've walked in on the middle of a movie. Nobody pauses to catch you up on all the stuff you missed. As I write this, it's still not clear how bad, or how big, the problems at the Fukushima Daiichi power plant will be. I don't know enough to speculate on that. I'm not sure anyone does. But I can give you a clearer picture of what's inside the black box. That way, whatever happens at Fukushima, you'll understand why it's happening, and what it means.
Thanks to Open Laboratory editors Bora Zivkovic and Jennifer Ouellette. BoingBoing is honored to be included, and we're doubly happy to see the fine work of our guest bloggers recognized!
You can read all the posts that were selected. In fact, you should read them. They represent some truly wonderful work by journalists, scientists, and bloggers. Here's a link to the full list.
One of the things I enjoy about writing for BoingBoing is the opportunity it's giving me to learn how to write reviews of books. That's not something I'd ever done before I started writing here. And I'm only now getting around to experimenting with not only describing books I like, but figuring out how to talk about books I find to be flawed. Fair criticism is a difficult skill to learn.
That's why I'm sort of simultaneously terrified and in awe of this 1991 book review, published in the International Journal of Primatology. In it, anthropologist Matt Cartmill expresses his opinions about sociologist Donna Haraway's book Primate Visions. I don't know enough about either scholar, or the book, to have an opinion about whether Cartmill is right or wrong. But, wowow, is that a blistering review.
This is a book that contradicts itself a hundred times; but that is not a criticism of it, because its author thinks contradictions are a sign of intellectual ferment and vitality. This is a book that systematically distorts and selects historical evidence; but that is not a criticism, because its author thinks that all interpretations are biased, and she regards it as her duty to pick and choose her facts to favor her own brand of politics. This is a book full of vaporous, French-intellectual prose that makes Teilhard de Chardin sound like Ernest Hemingway by comparison; but that is not a criticism, because the author likes that sort of prose and has taken lessons in how to write it, and she thinks that plain, homely speech is part of a conspiracy to oppress the poor.
This is a book that clatters around in a dark closet of irrelevancies for 450 pages before it bumps accidentally into its index and stops; but that is not a criticism, either, because its author finds it gratifying and refreshing to bang unrelated facts together as a rebuke to stuffy minds. This book infuriated me; but that is not a defect in it, because it is supposed to infuriate people like me, and the author would have been happier still if I had blown out an artery. In short, this book is flawless, because all its deficiencies are deliberate products of art. Given its assumptions, there is nothing here to criticize. The only course open to a reviewer who dislikes this book as much as I do is to question its author’s fundamental assumptions—which are big-ticket items involving the nature and relationships of language, knowledge, and science.
Via Evgeny Morozov