Walk into a bookstore, and chances are you’ll see books divided into sections by genre: Romance, Science Fiction/Fantasy, Mystery, and so on. It’s the most common system for categorizing books, both in conversation and in the data-management side of the book world. Genre is also, at times, incredibly limiting.
There are dozens upon dozens of subgenres across the major genres of popular fiction (Romance, Crime, and Science Fiction/Fantasy, among others). Science Fiction alone gets sliced into Space Opera, Mundane SF, Hard SF, Cyberpunk, Dieselpunk, and so on. These subgenres can be hard to keep track of, especially since their boundaries are porous; even lifelong fans disagree on the borders between them, policing them inefficiently but with gusto. At times it’s fun to argue classifications, to hunt for exactly the right place to frame a piece so that its cultural and narrative context is clearest. And narrow subgenres can be useful for putting works into clusters for conversation. But it’s also easy to slice so thin that the discussion becomes obscure or self-serving rather than practical.
Say, I'll bet our readers will love this wonderful thing! I'll just spend 20 minutes meticulously creating this gif. Sure hope no one else saw this wonderful thing yet. ::rechecks scheduled posts::
Edward Sharp-Paul's An Opinion Piece On A Controversial Topic is some pretty awesome meta ("I was inspired to write this piece by Currently Fashionable Polemicist, who summarised the Issue better than I could when they said 'oversimplification that makes me feel smart'. I have a strong opinion on this Issue, and my sharing it with you at this time is in no way attributable to opportunism on my part."). But it really leaps into full flight when you hit the comments ("Do not understand why you wrote about this Issue, when this other Issue exists.").
We also have new Community Guidelines for the Boing Boing BBS forums launching today.
Thank you all for your continued support of Boing Boing!
NEW YORK—Media consumers across the United States are reporting this week that sponsored content—articles and videos paid for by advertisers and distributed by print and digital publications—is easily the coolest fucking published material anyone could ever read or watch.
“I love, love, fucking love sponsored content,” said news and entertainment reader Erica Olson, adding that when she can tell a corporation is financially behind a piece of writing, she is even more inclined to click on it. “First off, it’s cool. That’s not debatable. Second, I don’t find it in any way insulting to my intelligence. In fact, it makes me feel smarter. And third, did I mention that sponsored content is just really fucking cool?”
Is coffee bad for you or good for you? Does acupuncture actually work, or does it produce a placebo effect? Do kids with autism have different microbes living in their intestines, or are their gut flora largely the same as those of neurotypical children? These are all good examples of topics that have produced wildly conflicting results from one study to another. (Side note: This is why knowing what a single study says about something doesn't actually tell you much. And, frankly, when you have a lot of conflicting results on anything, it's really easy for somebody to pick the five that support a given hypothesis and not tell you about the ten that don't.)
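That cherry-picking problem is easy to see in a toy simulation (all numbers here are invented for illustration, not drawn from any real study): simulate fifteen noisy studies of an effect whose true size is zero, then compare an honest summary of all of them against a summary that quietly keeps only the five most favorable results.

```python
import random

random.seed(1)

# Simulate 15 studies of an effect whose true size is zero.
# Each study reports a noisy estimate centered on the true value.
true_effect = 0.0
studies = [random.gauss(true_effect, 1.0) for _ in range(15)]

# Honest summary: average every study, favorable or not.
honest_mean = sum(studies) / len(studies)

# Cherry-picked summary: report only the 5 most favorable results
# and never mention the other 10.
cherry = sorted(studies, reverse=True)[:5]
cherry_mean = sum(cherry) / len(cherry)

print(f"all 15 studies: mean effect = {honest_mean:+.2f}")
print(f"top 5 only:     mean effect = {cherry_mean:+.2f}")
```

The honest average hovers near zero, while the cherry-picked one looks like a real, positive effect, even though both summaries come from the exact same data.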
But why do conflicting results happen? One big factor is experimental design. Turns out, there's more than one way to study the same thing. How you set up an experiment can have a big effect on the outcome. And if lots of people are using different experimental designs, it becomes difficult to accurately compare their results. At the Wonderland blog, Emily Anthes has an excellent piece about this problem, using the aforementioned research on gut flora in kids with autism as an example.
For instance, in studies of autism and microbes, investigators must decide what kind of control group they want to use. Some scientists have chosen to compare the guts of autistic kids to those of their neurotypical siblings, while others have used unrelated children as controls. This choice of control group can influence the strength of the effect that researchers find, or whether they find one at all.
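The excerpt's point about control groups can also be sketched with a toy model (again, every number here is made up for illustration): suppose part of a hypothetical "gut diversity" shift comes from the shared household environment, which siblings share too, and part is specific to the case child. Comparing against siblings then cancels out the household-driven portion, so the two study designs measure different effect sizes from the same underlying population.

```python
import random

random.seed(7)

# Hypothetical "gut diversity" score. In this toy model, part of the
# case shift is driven by shared household environment (which a sibling
# shares), and part is specific to the case child.
POP_MEAN, HOUSE_SD, NOISE_SD = 10.0, 1.0, 1.0
ENV_SHIFT = -0.6   # shift shared by everyone in a case household
CASE_SHIFT = -0.2  # shift specific to the case child

def case_household():
    """One household: a case child and a neurotypical sibling."""
    base = random.gauss(POP_MEAN, HOUSE_SD)
    case = base + ENV_SHIFT + CASE_SHIFT + random.gauss(0, NOISE_SD)
    sibling = base + ENV_SHIFT + random.gauss(0, NOISE_SD)
    return case, sibling

pairs = [case_household() for _ in range(2000)]
cases = [c for c, _ in pairs]
siblings = [s for _, s in pairs]
# Unrelated controls come from entirely different households.
unrelated = [random.gauss(random.gauss(POP_MEAN, HOUSE_SD), NOISE_SD)
             for _ in range(2000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"case vs sibling controls:   effect = {mean(cases) - mean(siblings):+.2f}")
print(f"case vs unrelated controls: effect = {mean(cases) - mean(unrelated):+.2f}")
```

With sibling controls the measured effect is small (only the case-specific shift survives); with unrelated controls it looks several times larger, because the shared-environment shift gets counted too. Neither design is "wrong," but averaging or comparing their results without noting the difference would be misleading.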
Some pseudoscience is pretty obvious. I think most of us are comfortable saying that the world will probably not end this December, in accordance with any ancient prophecy. But distinguishing fact from fiction isn't always simple. In fact, "fact from fiction" might be too simple a way to even frame the question. In reality, we're sometimes tasked with spotting the misapplication of real science. Sometimes, we have to tell the difference between a complicated thing that nobody yet understands well but that is likely to be true, and a complicated thing that nobody yet understands well but that is not likely to be true.
Basically, it's messy.
Emily Willingham at Forbes has some helpful hints for how to make these distinctions. She offers ten questions that can serve as guidelines for approaching new topics you're skeptical of — questions that, taken all together, can help you see the patterns of pseudoscience and make informed decisions for yourself and your family.
3. What kind of language does it use? Does it use emotion words or a lot of exclamation points or language that sounds highly technical (amino acids! enzymes! nucleic acids!) or jargon-y but that is really meaningless in the therapeutic or scientific sense? If you’re not sure, take a term and google it, or ask a scientist if you can find one. Sometimes, an amino acid is just an amino acid. Be on the lookout for sciencey-ness. As Albert Einstein once pointed out, if you can’t explain something simply, you don’t understand it well.
Here's an issue we don't talk about enough. Every year, peer-reviewed research journals publish hundreds of thousands of scientific papers. But every year, several hundred of those are retracted — essentially, unpublished. There are a number of reasons retraction happens. Sometimes, the researchers (or another group of scientists) will notice honest mistakes. Sometimes, other people will prove that the paper's results were totally wrong. And sometimes, scientists misbehave, plagiarizing their own work, plagiarizing others, or engaging in outright fraud. Officially, fraud accounts for only a small proportion of all retractions. But the number of annual retractions is growing, fast. And there's good reason to think that fraud plays a bigger role in science than we like to think. In fact, a study published a couple of weeks ago found misconduct behind three-quarters of all retracted papers. Meanwhile, previous research has shown that, while only about 0.02% of all papers are retracted, 1–2% of scientists admit to having invented, fudged, or manipulated data at least once in their careers.
The trouble is that dealing with this isn't as simple as uncovering a shadowy conspiracy or two. That's not really the kind of misconduct we're talking about here.
How you read matters as much as what you read. That's because nothing is written in a vacuum. Every news story or blog post has a perspective behind it, a perspective that shapes what you are told and how that information is conveyed. This is not, necessarily, a bad thing. Having a perspective doesn't mean being sensationalistic, or deceitful, or spreading propaganda. It can mean those things, but it doesn't have to. In fact, I'm fairly certain that it's impossible to tell any story without some kind of perspective. When you relate facts, even in your personal life, you make choices about what details you will emphasize, what emotions you'll convey, who you will speak to—and all of those decisions are based on your personal perspective. How we tell a story depends on what we think is important.
Unfortunately, sometimes, perspective can be misleading. That's why it's important to be aware that perspective exists. If you look closely at what you're reading, you can see the decisions the author made, get an idea of the perspective they were trying to convey, and judge whether that perspective is likely to distort the facts.
Emily Willingham is a scientist who blogs about science for the general public. Over at Double X Science, she's come up with a handy six-step guide for reading science news stories. These rules are a great tool for peeking behind the curtain and learning to think about the perspective behind what you read. In the post, she explains why each of these rules is important, and then applies them to a recent news story about chemical exposure and autism.