Say you're a marine biologist and you want to study the little bitty creatures of the sea — shrimps and worms and things like that. How do you go about capturing them?
Why, with an underwater vacuum, of course.
At the PNAS First Look blog, David Harris writes that this "SCUBA-tank-powered vacuum, called an 'airlift,' inhales shrimp, sand fleas, marine worms, and 'things that would swim away if they had the chance.'"
The town of Macapá is in the north of Brazil, on the coast, where the Amazon River flows into the Atlantic.
John Mark Ockerbloom's "From Wikipedia to our libraries" is a fabulous proposal for creating research synergies between libraries and Wikipedia, by adding templates to Wikipedia articles that direct readers to unique, offline-only (or onsite-only) library resources at their favorite local libraries. Ockerbloom's approach acknowledges and respects the fact that patrons start their searches online, and seeks only to improve the outcomes of their research, not to convince them not to start with the Internet.
So how do we get people from Wikipedia articles to the related offerings of our local libraries? Essentially we need three things: First, we need ways to embed links in Wikipedia to the libraries that readers use. (We can’t reasonably add individual links from an article to each library out there, because there are too many of them; there has to be a way that each Wikipedia reader can get to their own favored libraries via the same links.) Second, we need ways to derive appropriate library concepts and local searches from the subjects of Wikipedia articles, so the links go somewhere useful. Finally, we need good summaries of the resources a reader’s library makes available on those concepts, so the links end up showing something useful. With all of these in place, it should be possible for researchers to get from a Wikipedia article on a topic straight to a guide to their local library’s offerings on that topic in a single click.
I’ve developed some tools to enable these one-click Wikipedia -> library transitions. For the first thing we need, I’ve created a set of Wikipedia templates for adding library links. The documentation for the Library resources box template, for instance, describes how to use it to create a sidebar box with links to resources about (or by) the topic of a Wikipedia article in a reader’s library, or in another library a reader might want to consult. (There’s also an option for direct links to my Online Books Page, if there are relevant books online; it may be easier in some cases for readers to access those than to access their local library’s books.)
We've talked here before about the crazy things you can find when you read the "Methods" section of a scientific research paper. (Ostensibly, that's the boring part.)
If you want a quick laugh this morning — or if you want to get a peek at how the sausages are made — check out the Twitter hashtag #overlyhonestmethods, where scientists are talking about the backstory behind seemingly dry statements like "A population of male rats was chosen for this study".
The stuff you use to make sex a little more smooth might have some serious drawbacks. Nothing has been proven yet — most of the data comes from disembodied cell cultures and animal testing, which doesn't necessarily give you an accurate picture of what's happening in humans — but several studies over the last few years have drawn connections between lubricant use and increased rates of STD transmission. (It also looks like some lubricants might kill off natural vaginal flora — the good bacteria that live "up there" and make the difference between a healthy vagina and, say, a raging yeast infection.)
Some of these studies have provided evidence suggesting that the ingredients in lubricants damage the cells lining the vagina and rectum — which would explain why those lubricants might facilitate STD transmission.
At Chemical and Engineering News, Lauren Wolf has a really well-researched, well-written story that will give you the low-down on this research without hype and without fear-mongering. Her story is easy to understand and explains what we know, what we don't know, and why this matters (besides the obvious: lubricants have been proposed as a possible means of applying topical anti-microbial STD preventatives).
Right now, the Food & Drug Administration doesn’t typically require testing of personal lubricants in humans. The agency classifies them as medical devices, so the sex aids have to be tested on animals such as rabbits and guinea pigs. Rectal use of lubricants is viewed by the agency as an “off-label” application—use at your own risk.
Questions about lubricant safety arose nearly a decade ago when microbicide developers were testing whether the detergent nonoxynol-9 could block HIV transmission. Manufacturers had been incorporating the compound into spermicidal lubricants for years because of its ability to punch holes in the cell membranes of sperm. In 2002, however, a Phase II/III clinical trial of a nonoxynol-9 vaginal gel failed to protect women from HIV infection. Not only that, but the detergent actually increased the risk of HIV infection in the sex workers tested—women living in countries such as South Africa and Thailand who used the product three or four times per day.
Lab work eventually revealed the reason for the paradoxical increase: Nonoxynol-9 is so good at punching holes in cell membranes that it not only bores into sperm but also into the cells lining the vagina and rectum. The mucosal lining of the vagina is a good barrier to infection all by itself, says Richard A. Cone, a biophysicist at Johns Hopkins University. But if that barrier gets compromised, all bets are off, he explains. After nonoxynol-9—still used on some condoms today—went from promising microbicide candidate to malevolent cell killer, scientists like Cone began to question the safety of other supposedly innocuous spermicide and personal lubricant ingredients.
Via David Kroll
Last week, after giving myself an initial overview of the scientific research on how gun ownership and gun laws affect violent crime, I told you that it seems like there's not a solid consensus on this issue. At least not in the United States. Different studies, of different laws, in different places seem to produce a wide variety of results.
On the one hand, this is kind of to be expected with social science. People are hard to pin down. Harder, often, than the Higgs boson. And you can't just do a clean, controlled laboratory study of these issues. Instead, you're left trying to compare specific places, laws, and enforcement techniques that may not be easily comparable, in an attempt to draw a broad conclusion. That's hard.
But, it seems, the National Rifle Association has gone out of its way to make this work even more difficult than it would otherwise be. Since the early 1990s, NRA-backed politicians have attacked firearms research they believe is biased against guns. Alex Seitz-Wald at Salon.com wrote a piece on this back in July, after an earlier mass shooting. He describes how a vaguely worded clause has led researchers to avoid doing firearms studies at all, for fear of losing their funding.
The Centers for Disease Control funds research into the causes of death in the United States, including firearms — or at least it used to. In 1996, after various studies funded by the agency found that guns can be dangerous, the gun lobby mobilized to punish the agency. First, Republicans tried to eliminate entirely the National Center for Injury Prevention and Control, the bureau responsible for the research. When that failed, Rep. Jay Dickey, a Republican from Arkansas, successfully pushed through an amendment that stripped $2.6 million from the CDC’s budget (the amount it had spent on gun research in the previous year) and outlawed research on gun control with a provision that reads: “None of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control.”
Dickey’s clause, which remains in effect today, has had a chilling effect on all scientific research into gun safety, as gun rights advocates view “advocacy” as any research that notices that guns are dangerous. Stephen Teret, who co-directs the Johns Hopkins Center for Gun Policy and Research, told Salon: “They sent a message and the message was heard loud and clear. People [at the CDC], then and now, know that if they start going down that road, their budget is going to be vulnerable. And the way public agencies work, they know how this works and they’re not going to stick their necks out.”
In January, the New York Times reported that the CDC goes so far as to “ask researchers it finances to give it a heads-up anytime they are publishing studies that have anything to do with firearms. The agency, in turn, relays this information to the NRA as a courtesy.”
Via Dave Ng
Does gun control mean fewer guns on the street and less violence? Does encouraging gun ownership mean better protected people and less violence?
I don't think it's too early to be asking questions like this. When you're faced with a tragedy like what happened today at Sandy Hook Elementary School, it's reasonable to start asking questions about violence prevention. It's part of the bargaining stage of grief — wondering if there's something we could have done that would have prevented all those needless deaths. And let's get one thing straight: Everybody wants to prevent what happened today.
So what can be done about it? And what does the science say?
I've been trying to get a handle on that for the last hour or so, and here are three things the research seems to support:
• It would be completely accurate for someone to tell you that studies in places like Australia and Austria found that implementing more stringent gun control laws reduced deaths from gun-related suicides and violent crime.
• It would also be accurate to say that a study of the effects of the Brady Handgun Violence Prevention Act in the United States showed no big reductions in gun-related deaths, except for suicides among people older than 55.
• And it's also true that a 2003 study of conceal-carry laws in Florida found that they seemed to make no difference one way or the other — neither increasing nor reducing rates of violent crime.
Is coffee bad for you or good for you? Does acupuncture actually work, or does it produce a placebo effect? Do kids with autism have different microbes living in their intestines, or are their gut flora largely the same as neurotypical children? These are all good examples of topics that have produced wildly conflicting results from one study to another. (Side-note: This is why knowing what a single study says about something doesn't actually tell you much. And, frankly, when you have a lot of conflicting results on anything, it's really easy for somebody to pick the five that support a given hypothesis and not tell you about the 10 that don't.)
But why do conflicting results happen? One big factor is experimental design. Turns out, there's more than one way to study the same thing. How you set up an experiment can have a big effect on the outcome. And if lots of people are using different experimental designs, it becomes difficult to accurately compare their results. At the Wonderland blog, Emily Anthes has an excellent piece about this problem, using the aforementioned research on gut flora in kids with autism as an example.
For instance, in studies of autism and microbes, investigators must decide what kind of control group they want to use. Some scientists have chosen to compare the guts of autistic kids to those of their neurotypical siblings while others have used unrelated children as controls. This choice of control group can influence the strength of the effect that researchers find—or whether they find one at all.
Scientists also know that antibiotics can have profound and long-lasting effects on our microbiomes, so they agree on the need to exclude children from these studies who have taken antibiotics recently. But what’s recently? Within the last week? Month? Three months? Each investigator has to make his or her own call when designing a study.
Then there’s the matter of how researchers collect their bacterial samples. Are they studying fecal samples? Or taking samples from inside the intestines themselves? The bacterial communities may differ in samples taken from different places.
Sometimes, it's hard to find people interested in playing the role of guinea pig for the sake of science. And, sometimes, that job is not hard at all. Like when all you want the guinea pigs to do is get really high.
Pot-based research isn't all fun and games. Given the interest in medical marijuana for cancer patients and people with AIDS, some of the studies require volunteers to, you know, have cancer or AIDS. Others are interested in the sociology — these scientists want to talk to you about your pot use and collect data about how it may or may not have affected your life.
But the mythical opportunity to "get high for science" really does exist, writes Brian Palmer at Slate.
The National Institutes of Health maintains an online database of clinical trials that are in the recruitment process. As of this writing, there are approximately 100 marijuana studies currently enrolling patients. Each listing contains inclusion criteria (the types of people the researchers are looking for) and exclusion criteria (characteristics that will remove otherwise qualified people from contention).
... there are a few trials that might interest someone looking for a free high. Consider the University of Iowa’s “Effects of Inhaled Cannabis on Driving Performance.” Participants will be dosed with varying amounts of alcohol or vaporized cannabis, then placed into a driving simulator to measure their performance. There are some restrictions. You must be a social drinker and marijuana user already, but you can’t have an addiction. People who are susceptible to motion sickness are out, and you must live near the driving simulator in Iowa. Keep in mind that getting into the study doesn’t guarantee free marijuana—two control groups will get no THC whatsoever. (Previous studies have shown that low doses of marijuana have little to no impact on driving performance.)
Let's talk about the pay gap. Census data show that, in 2008, American women still earned 77 cents for every dollar earned by American men. And, while some of this has to do with women working different jobs than men, working fewer hours, or spending less of their lives moving up the corporate ladder, numerous studies have shown that the disparity still exists even after you've controlled for all those factors, and more. Even in the same job, at the same level of experience, the same education, same race, same hours worked, etc. ... women still earn less than men do.
There's been lots of research aimed at explaining the gap, and it's probably tied to more than one factor. But several studies have shown that unfair bias against women — whether intended or subconscious — is part of it. Last week, researchers at Princeton published a study that showed bias against women in hiring practices within the sciences and hit on some particularly interesting aspects of subconscious discrimination.
The researchers gave the same application materials and resume to two sets of scientists and told the scientists to evaluate the candidate for a position as laboratory manager. Half the scientists got the materials with a male name attached. Half saw a female name. The scientists gave the male name a higher rating on competency, hireability, and their own willingness to mentor "him". They also offered "him" a higher starting salary — $30,238, compared to $26,507 for the female name.
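As a quick back-of-the-envelope check (my own arithmetic, not part of the study itself), the salary offers in that experiment imply a within-study gap in the same territory as the census-wide figure:

```python
# Salary offers from the hiring-bias study described above
male_offer = 30238
female_offer = 26507

# Cents offered to the applicant with a female name for every dollar
# offered to the otherwise-identical applicant with a male name
ratio = female_offer / male_offer
print(f"{ratio:.2f}")  # about 0.88, i.e., roughly 88 cents on the dollar
```

Same resume, same materials; only the name changed.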
The catch: These trends held regardless of whether the scientist doing the hiring was male or female, and none of the scientists used sexist language or sexist arguments as justification for their decisions. At the Unofficial Prognosis blog, Ilana Yurkiewicz explains why those details are so important:
Good news for ladies who like the woods—your period is (probably) not something that attracts (most) bears.
There are not a lot of studies addressing this particular topic, but a National Park Service paper published this year took a look at all of them and put the scattered pieces of information together into a single puzzle. It's probably not a complete picture, but it's certainly better than hearsay and random, sexist stories you heard from your grandpa's drinking buddy. More importantly, even when there is a documented link between menstrual blood and bears, that shouldn't be construed as a reason to keep women out of the wilderness. After all, bears are attracted to food, and we don't tell people they shouldn't eat while backpacking. Instead, we have practices that reduce risk. Same thing applies here.
Here's what we learn:
1) You can menstruate freely and without fear in the contiguous 48 United States. Grizzlies, and particularly black bears, don't seem to be interested in what's happening in your pants. An evaluation of hundreds of grizzly attacks found no correlation between menstruation and risk of attack. In the case of black bears, this has actually been tested experimentally, with researchers leaving used tampons from various stages of menstruation out in the wilderness and watching how the bears respond. (Science!) The bears completely ignored the tampons.
2) Yellowstone data suggest food is a much bigger risk than menstruation. Analysis of bear attack data from Yellowstone National Park doesn't even consider attacks that happened before 1980. Why? Because that was before stringent rules on in-park food storage and bear feeding. The vast majority of pre-1980 attacks are already known to be related to bears seeking out human food. Meanwhile, between 1980 and 2011, only nine women were injured by bears in Yellowstone. Of those, six were incidents where women and bears ran into each other unexpectedly on hiking trails. In the other three incidents, which didn't rely on the element of surprise and are thus more likely to have involved attraction factors, none of the women were menstruating at the time of the attack.
3) Polar bears are a whole 'nother story. Two different polar bear studies, one in captivity and one in the wild, have shown that those bears are attracted to human menstrual blood—even more than plain old human blood that wasn't related to menstruation. They are also attracted to the scent of seals and (again) human food.
Big picture: Food still seems to be a bigger issue in bear attacks than menstruating ladies. And, as with food, the Park Service has guidelines that you can follow for how to best deal with tampons while in the wilderness.
Read the full National Park Service report, including the safety guidelines for women on their periods
Meet Richard Nolan: quartermaster of the Whydah, captain of the Anne, former coworker of Blackbeard—in general, pirate. He is also—at least through Labor Day—my friend Butch Roy.
Sunscreen does successfully prevent sunburn, but what about the evidence for it protecting you from skin cancer later in life?
The answer: Nobody is really sure. Last year, I wrote a short piece for BoingBoing that looked at this a little bit. The key point: Cancer takes a long time to happen and we haven't been using sunscreen long enough to have much evidence about it.
But, at Discover's The Crux blog, Emily Elert expands on some of the other problems in play. One of the key things—and something that will hopefully be fixed by this time next year—is that there's nothing on the sunblock you buy to tell you how protective it is against skin cancer. SPF is all about the burn. So even if some sunscreens do protect against cancer, you don't have a good way to know whether or not you're using one of them.
First of all, the way sunscreen’s effectiveness is measured—its SPF rating—basically only describes its ability to block UVB rays. That’s because UVB is the main cause of sunburn, and a sunscreen’s SPF stands for how long you can stay in the sun without getting a sunburn (a lotion that allows you to spend 40 minutes in the sun rather than the usual 20 before burning, for example, has an SPF of 2).
UVA rays can cause cancer but not sunburn, so they don’t factor into the SPF calculation. That means that if you slather on a high SPF sunscreen that only protects against UVB, you’d still absorb lots of UVA radiation, potentially increasing your long-term cancer risk.
Soon it will be easier to tell which sunscreens include ingredients that block or absorb UVA as well as UVB. According to FDA regulations passed last year, products that pass a “Critical Wavelength” test—meaning that they block wavelengths across the ultraviolet spectrum—will carry the label “Broad Spectrum” alongside the SPF, while sunscreens that don’t pass the test will be forbidden from claiming they have such capabilities. However, those regulations don’t go into effect until December, so for this summer, you’re still stuck with SPF. And, by the way, you probably need to apply twice as much sunscreen as you think to actually get an SPF as strong as that marked on the bottle: manufacturers test their products’ SPF with the assumption that you will slather on obscene amounts. This discrepancy could be contributing to the fact that the NIH, when looking at the connection between sunscreen use and skin cancer in large populations, doesn’t see clear evidence that sunscreen is effective in reducing the risk of skin cancer. (It’s worth pointing out, too, that there is a clear genetic component in some skin cancers, so just avoiding sun or using sunscreen regularly are not the only factors that determine whether someone gets it.)
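The SPF arithmetic from the excerpt above is simple enough to sketch in a couple of lines. This is a minimal illustration of the definition only (the function name is my own, and the 20-minute baseline is just the example's assumption; real unprotected burn times vary by skin type and conditions):

```python
def protected_minutes(spf: float, baseline_minutes: float) -> float:
    """Burn-time estimate implied by the definition of SPF:
    SPF = (time to burn with sunscreen) / (time to burn without)."""
    return spf * baseline_minutes

# The quoted example: SPF 2 turns a 20-minute burn into a 40-minute one.
print(protected_minutes(2, 20))
```

Note that this only captures UVB burn protection; as the excerpt explains, SPF says nothing about UVA exposure or cancer risk.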
Image: Beer, cigarettes and sun block: Roskilde Festival 2009 essentials., a Creative Commons Attribution (2.0) image from wouterkiel's photostream
About 20 years ago, the United States and a few other countries started using a different pertussis vaccine than had been used previously. The change was in response to public fear about some very rare neurological disorders that may or may not have had a relationship to that older vaccine (it couldn't ever be proven one way or the other).
The vaccine we use today was created to get around any possible mechanism for those disorders and, along the way, ended up having lower rates of the less-troubling (and far, far more common) sort of side effects, as well. Think short-term redness, swelling, or pain at the site of injection.
The downside, reports Maryn McKenna, is that this new vaccine might not be as effective as the old one. In fact, scientists at the Centers for Disease Control, Kaiser Permanente Medical Center in San Rafael, Calif., and the University of Queensland's Children's Medical Research Unit in Australia are raising the possibility that a less effective vaccine could be part of why we're now seeing a big increase in pertussis outbreaks.
In the most recent research, a letter published Tuesday night in JAMA, researchers in Queensland, Australia, examined the incidence of whooping cough in children who were born in 1998, the year in which that state began phasing out whole-cell pertussis vaccine (known there as DTwP) in favor of the less-reactive acellular vaccine (known as DTaP). Children who were born in that year and received a complete series of infant pertussis shots (at 2, 4 and 6 months) might have received all-whole-cell, all-acellular, or a mix — and because of the excellent record-keeping of the state-based healthcare system, researchers were able to confirm which children received which shots.
The researchers were prompted to investigate because, like the US, Australia is enduring a ferocious pertussis epidemic. When they examined the disease history for 40,694 children whose vaccine history could be verified, they found 267 pertussis cases between 1999 and 2011. They said:
"Children who received a 3-dose DTaP primary course had higher rates of pertussis than those who received a 3-dose DTwP primary course in the preepidemic and outbreak periods. Among those who received mixed courses, rates in the current epidemic were highest for children receiving DTaP as their first dose. This pattern remained when looking at subgroups with 1 or 2 DTwP doses in the first year of life, although it did not reach statistical significance. Children who received a mixed course with DTwP as the initial dose had incidence rates that were between rates for the pure course DTwP and DTaP cohorts."
A key thing to remember: This is a nuanced theory that may or may not turn out to be right. But, if it does turn out that this vaccine isn't as effective as we want it to be, that's not a dark mark against vaccines in general. Sometimes, medicine doesn't work as well as intended. It's a risk of medicine. And the fact that it's major research institutions pointing this possibility out should give people some comfort in the scientific process. If doctors and organizations who promote childhood vaccination were all in the pockets of an evil conspiracy, there would be no reason why they'd ever do research like this, or talk about it publicly.
Scientists aren't always right. In fact, individual research papers turn out to be wrong pretty often and scientists are the first people to tell you that they don't know everything there is to know. They're just working on it with more rigor than most of us.
But scientists are also people. And sometimes, they lie. At Ars Technica, John Timmer looks at some of the most famous cases of scientific fraud and comes away with 8 key lessons that show us how science's biggest scam artists got away with faking their data—sometimes for years.
1) Fake data nobody ever expects to see. If you're going to make things up, you won't have any original data to produce when someone asks to see it. The simplest way to avoid this awkward situation is to make sure that nobody ever asks. You can do this in several ways, but the easiest is to work only with humans. Most institutions require a long and painful approval process before anyone gets to work directly with human subjects. To protect patient privacy, any records are usually completely anonymized, so no one can ever trace them back to individual patients. Adding to the potential for confusion, many medical studies are done double-blind and use patient populations spread across multiple research centers. All of these factors make it quite difficult for anyone to keep track of the original data, and they mean that most people will be satisfied with using a heavily processed version of your results.
3) Tell people what they already know. Since you don't want anyone excited about your work, due to the likelihood they will ask annoying questions, you need to avoid this reaction at all costs. Under no circumstances should your work cause anyone to raise an intrigued eyebrow. The easiest way to do this is to play to people's expectations, feeding them data that they respond to with phrases like "That's about what I was expecting." Take an uncontroversial model and support it. Find data that's consistent with what we knew decades ago. Whatever you do, don't rock the boat.
Over at Download the Universe, Ars Technica science editor John Timmer reviews a science ebook whose science leaves something to be desired.