Particle physicists not yet willing to call the election for Obama

Sure, there's a 99.2% probability that he will win, but that is several standard deviations away from the 99.99995% confidence that the particle physicists would need to declare the election won. (This is satire, obviously.) Via Jennifer Ouellette.
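For a sense of scale, here's a quick sketch of how those confidence levels translate into standard deviations, assuming the usual one-sided convention for discovery thresholds:

```python
from scipy.stats import norm

# Convert a confidence level into its one-sided z-score ("sigma")
print(norm.ppf(0.992))      # ~2.41 sigma -- the election forecast
print(norm.ppf(0.9999995))  # ~4.89 sigma -- roughly physics' "5 sigma" bar
```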

Surviving a plane crash is surprisingly common

I'm a nervous flyer. But I'm a lot better at it than I used to be. That's because, a few years ago, I learned that it's actually pretty common to survive a plane crash. Like most people, I'd assumed that the safety in flying came from how seldom accidents happened. Once you were in a crash situation, though, I figured you were probably screwed. But that's not the case.

Looking at all the commercial airline accidents between 1983 and 2000, the National Transportation Safety Board found that 95.7% of the people involved survived. Even when the board narrowed its analysis to only the worst accidents, the overall survival rate was 76.6%. Yes, some plane crashes kill everyone on board. But those aren't the norm. So you're even safer than you think. Not only are crashes incredibly rare, but you're also more likely to survive a crash than not. In fact, out of 568 accidents during those 17 years, only 71 resulted in any fatalities at all.
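A quick back-of-the-envelope check of those NTSB figures (a sketch, using only the numbers quoted above):

```python
# NTSB commercial aviation accidents, 1983-2000 (figures quoted above)
accidents = 568
fatal_accidents = 71

nonfatal = accidents - fatal_accidents
print(nonfatal, nonfatal / accidents)  # 497 accidents (~87.5%) killed no one
```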

I was talking about this fact with a pilot friend over the weekend, and he mentioned one crash in particular that is an excellent example of the statistics in action. On July 19, 1989, United Airlines Flight 232 lost all its hydraulic controls and landed in Sioux City, Iowa, going more than 100 mph faster than it should have been. You can see the plane breaking apart and bursting into flames in the video above. Turns out, that's what a 62% survival rate looks like. (All the pilots you can hear talking in the video survived, too.)

Read more about United Airlines 232 on Wikipedia

Read the full NTSB report from 2001

In 2007, Popular Mechanics examined 36 years of NTSB reports and found that the majority of surviving passengers were sitting in the back of the plane. But that seems to depend a lot on the specifics of the crash and may not be a reliable predictor of future results.

Thanks, Shav!

What Nate Silver is actually telling you about the election

The election is next week. And, with that in mind, Salon's Paul Campos has posted a helpful reminder explaining what the statistics at the fivethirtyeight blog actually mean (and what they don't).

In particular, you have to remember that, while Nate Silver gives President Obama a 77.4 percent chance of winning the presidential election, that's not the same thing as saying that Obama is going to win.

Suppose a weather forecasting model predicts that the chance of rain in Chicago tomorrow is 75 percent. How do we determine if the model produces accurate assessments of probabilities? After all, the weather in Chicago tomorrow, just like next week’s presidential election, is a “one-off event,” and after the event the probability that it rained will be either 100 percent or 0 percent. (Indeed, all events that feature any degree of uncertainty are one-off events – or to put it another way, if an event has no unique characteristics it also features no uncertainties).

The answer is, the model’s accuracy can be assessed retrospectively over a statistically significant range of cases, by noting how accurate its probabilistic estimates are. If, for example, this particular weather forecasting model predicted a 75 percent chance of rain on 100 separate days over the previous decade, and it rained on 75 of those days, then we can estimate the model’s accuracy in this regard as 100 percent. This does not mean the model was “wrong” on those days when it didn’t rain, any more than it will mean Silver’s model is “wrong” if Romney were to win next week.

What Silver is predicting, in effect, is that as of today an election between a candidate with Obama’s level of support in the polls and one with Mitt Romney’s level of support in those polls would result in a victory for the former candidate in slightly more than three out of every four such elections.
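In other words, a probabilistic forecast is graded on its calibration over many calls, not on any single outcome. A minimal simulation of the weather example (hypothetical data, just to show the idea):

```python
import random

random.seed(0)
forecast = 0.75     # "75% chance of rain"
days = 10_000       # a long, hypothetical forecast history
rained = sum(random.random() < forecast for _ in range(days))
print(rained / days)  # ~0.75 if the model is well calibrated
# The ~25% of dry days aren't "wrong" forecasts -- they're expected.
```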

Read the full story at Salon.com

Fact-checking the RIAA's claim that the number of working musicians fell by 41%

Matthew Lasar's long Ars Technica feature, "Have we lost 41 percent of our musicians? Depends on how you (the RIAA) count," does an excellent job of digging into RIAA CEO Cary Sherman's claim that the number of working musicians in the USA has declined by 41 percent. After checking the RIAA's math, Lasar finds a gigantic discrepancy between the figures they cite and the conclusions they reach. But then Lasar delves further into the underlying sources, as well as government and industry stats, and finds that the number of musicians working in America may have declined slightly, but is also projected to rise.

It is worth ending this cautionary tale with a review of the BLS's own occupational handbook projection for musician/singer employment in the near future. Note that the handbook cites a much higher employment figure for both trades in 2010 than mentioned in the above tables: about 176,200 musicians and singers. That's because it comes from the Bureau's National Employment Matrix, I was told, which adds additional data sources.

Employment for musicians and singers is expected to grow by ten percent over the decade—"about as fast as the average for all occupations," the government notes:

The number of people attending musical performances, such as orchestra, opera, and rock concerts, is expected to increase from 2010 to 2020. As a result, more musicians and singers will be needed to play at these performances.

There will be additional demand for musicians to serve as session musicians and backup artists for recordings and to go on tour. Singers will be needed to sing backup and to make recordings for commercials, films, and television.

Have we lost 41 percent of our musicians? Depends on how you (the RIAA) count

What a dead fish can teach you about neuroscience and statistics

The methodology is straightforward. You take your subject and slide them into an fMRI machine, a humongous, sleek white ring, like a donut designed by Apple. Then you show the subject images of people engaging in social activities — shopping, talking, eating dinner. You flash 48 different photos in front of your subject's eyes, and ask them to figure out what emotions the people in the photos were probably feeling. All in all, it's a pretty basic neuroscience/psychology experiment. With one catch. The "subject" is a mature Atlantic salmon.

And it is dead.

Read the rest

What cancer statistics actually mean

Genius science writer Ed Yong used to work for a cancer charity, so he's seen how the cancer research sausages get made. In a new post at Not Exactly Rocket Science, Ed takes you on a brief tour of the factory, explaining why even good data doesn't necessarily mean what you think it means.

The post is based around a new study that says 16.1% of all cancers worldwide are caused by infections. This statistic is talking about stuff like HPV—viruses and other infections that can prompt mutations in the cells they infect. Sometimes, those mutations propagate and become a tumor.

That statistic tells us that infections play a role in more cancers than most laypeople probably think, Ed says. It gives us an idea of the scale of the problem. But you have to be careful not to read too much into that 16.1%.

The latest paper tells us that 16.1% of cancers are attributable to infections. In 2006, a similar analysis concluded that 17.8% of cancers are attributable to infections. And in 1997, yet another study put the figure at 15.6%. If you didn’t know how the numbers were derived, you might think: Aha! A trend! The number of infection-related cancers was on the rise but then it went down again.

That’s wrong. All these studies relied on slightly different methods and different sets of data. The fact that the numbers vary tells us nothing about whether the problem of infection-related cancers has got ‘better’ or ‘worse’. (In this case, the estimates are actually pretty close, which is reassuring. I have seen ones that vary more wildly. Try looking for the number of cancers caused by alcohol or poor diets, if you want some examples).

And that's only one of the complications involved in understanding cancer statistics. You really should read Ed's entire post. After you do, a lot of apparent inconsistencies in cancer data will make a lot more sense to you. For instance: What about the cancers caused by radiation exposure?

Read the rest

Why water supply affects your computer

Between now and 2020, the greatest increases in population growth in the United States are projected to happen in the places that have the biggest problems with fresh water availability. This isn't just a drinking water problem, or even an agriculture problem. It's an energy issue, too. Most of our electricity is made by finding various ways to boil water, producing steam that turns a turbine in an electric generator. In 2000, we used as much fresh water to produce electricity as we used for irrigation—each sector represented 39% of our total water use. (From a poster at Lawrence Berkeley National Laboratory.)

Cybercrime sucks (for criminals)

Bruce Schneier comments on an NYT report on cybercrime showing that there's just not much money to be had in being a ripoff artist. Dinei Florêncio and Cormac Herley wrote:

A cybercrime where profits are slim and competition is ruthless also offers simple explanations of facts that are otherwise puzzling. Credentials and stolen credit-card numbers are offered for sale at pennies on the dollar for the simple reason that they are hard to monetize. Cybercrime billionaires are hard to locate because there aren’t any. Few people know anyone who has lost substantial money because victims are far rarer than the exaggerated estimates would imply.

The authors frame cybercrime as a "tragedy of the commons," where the overfishing (overphishing) by crooks has reduced everyone's margins to nothing, making it hard graft indeed. Meanwhile, cybercrime estimates are subject to the same lobbynomics used to calculate losses from music downloading and profits from drug seizures:

Suppose we asked 5,000 people to report their cybercrime losses, which we will then extrapolate over a population of 200 million. Every dollar claimed gets multiplied by 40,000. A single individual who falsely claims $25,000 in losses adds a spurious $1 billion to the estimate. And since no one can claim negative losses, the error can't be canceled.
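The multiplier arithmetic in that example is easy to reproduce (same hypothetical numbers the authors use):

```python
population = 200_000_000
sample = 5_000
multiplier = population / sample   # every reported dollar counts 40,000 times
print(multiplier)                  # 40000.0

bogus_claim = 25_000               # one respondent exaggerating their losses
print(bogus_claim * multiplier)    # 1000000000.0 -- a spurious $1 billion
# Losses can't be negative, so errors only ever inflate the estimate.
```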

Cybercrime as a Tragedy of the Commons

Why the DHS's pre-crime biometric profiling is doomed to fail, and will doom passengers with its failures

In The Atlantic, Alexander Furnas debunks the DHS's proposal for a "precrime" screening system that will attempt to predict which passengers are likely to commit crimes, and single those people out for additional screening. FAST (Future Attribute Screening Technology) "will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions." They'll build the biometric "bad intentions" profile by asking experimental subjects to carry out bad deeds and monitoring their vital signs. It's a mess, scientifically, and it will falsely accuse millions of innocent people of planning terrorist attacks.

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes -- which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
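The base-rate arithmetic behind both readings is simple enough to sketch (these are the article's hypothetical numbers, not real FAST performance data):

```python
per_million = 1_000_000
terrorists = 1                     # assumed base rate: 1 in a million

# Reading 1: 99.99% of observations are classified correctly
error_rate = 0.0001
false_alarms = (per_million - terrorists) * error_rate
print(false_alarms)                # ~100 innocents flagged per real terrorist

# Reading 2: 70% of all observations are classified correctly
print(per_million * 0.30)          # ~300,000 misclassified per million screened
```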

Homeland Security's 'Pre-Crime' Screening Will Never Work

(Image: Brockhaus and Efron Encyclopedic Dictionary, a Creative Commons Attribution (2.0) image from double-m2's photostream)

Danish trade minister and ACTA booster apologise for bogus piracy numbers

Here's a clip of a Danish TV show discussing ACTA, which Denmark has fiercely advocated for. It starts with the head of a rightsholder society and the Danish trade minister quoting dodgy statistics about the extent and cost of piracy, then demonstrates that these statistics are patently false, and finally brings out those responsible for quoting them and gets them to admit their errors. Priceless.

You can see both the Danish Trade Minister and the head of a Danish music rights organization (and famous Danish musician) Ivan Pedersen appear on a TV show below (with English subtitles). On the show, a well-informed presenter focuses on how both of these ACTA defenders claimed that 95% of music downloaded in Denmark was unauthorized, and carefully shows how that's simply false -- and then gets both of the ACTA defenders to admit that the numbers were wrong.

Danish Trade Minister Apologizes For Using Bogus Industry Numbers To Support Pro-ACTA Argument

Facebook's funny accounting has "active users" who never use Facebook

The NYT's Andrew Ross Sorkin quotes Barry Ritholtz's digging into how Facebook's IPO documents define "active" users and finds that many of them may never visit the site. Facebook counts you as "active" if your only involvement with the service is setting it up to republish your Twitter feed, or if you click "Like" buttons but never log in to the actual service. This should matter to investors, since Facebook earns no advertising revenue from those users, though it may earn some other income by reselling the private details of their browsing habits as gleaned from its tracking cookies.

In other words, every time you press the “Like” button on NFL.com, for example, you’re an “active user” of Facebook. Perhaps you share a Twitter message on your Facebook account? That would make you an active Facebook user, too. Have you ever shared music on Spotify with a friend? You’re an active Facebook user. If you’ve logged into Huffington Post using your Facebook account and left a comment on the site — and your comment was automatically shared on Facebook — you, too, are an “active user” even though you’ve never actually spent any time on facebook.com.

“Think of what this means in terms of monetizing their ‘daily users,’ ” Barry Ritholtz, the chief executive and director for equity research for Fusion IQ, wrote on his blog. “If they click a ‘like’ button but do not go to Facebook that day, they cannot be marketed to, they do not see any advertising, they cannot be sold any goods or services. All they did was take advantage of FB’s extensive infrastructure to tell their FB friends (who may or may not see what they did) that they liked something online. Period.”

Those Millions on Facebook? Some May Not Actually Visit (via Memex 1.1)

How deadly is bird flu?

The Centers for Disease Control and Prevention, and the World Health Organization, say that H5N1 bird flu kills some 60% of the human beings it manages to infect. Basically, it hasn't infected many people—because it can't easily spread from person to person—but most of the people it does infect die.

But this might not be the full story.

After I posted a summary of the current controversies surrounding H5N1 research, I got an interesting email from Vincent Racaniello, a professor of microbiology at Columbia University Medical Center. Racaniello points out that the 60% death-rate statistic is based on people who show up at hospitals with serious symptoms of infection. So far, there've only been about 600 cases. And, yes, about 60% of them have died.

However, they don't necessarily represent everybody who has contracted H5N1.

A death rate is only as good as the statistics on the rate of infection. If you've got an inaccurate count of the number of people infected, your death rate is going to be wrong. And there's some evidence that this might be the case with H5N1.

In a recent study of rural Thai villagers, sera from 800 individuals were collected and analyzed for antibodies against several avian influenza viruses, including H5N1, by hemagglutination-inhibition and neutralization assays. The results indicate that 73 participants (9.1%) had antibody titers against one of two different H5N1 strains. The authors conclude that ‘people in rural central Thailand may have experienced subclinical avian influenza virus infections’. A subclinical infection is one without apparent signs of illness.

If 9% of the rural Asian population has been subclinically infected with avian H5N1 influenza virus strains, it would dramatically change our view of the pathogenicity of the virus. Extensive serological studies must be done to determine the extent of human infection with avian H5N1 influenza viruses. Until we know how many individuals are infected with avian influenza H5N1, we must refrain from making dire conclusions about the pathogenicity of the virus.
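To see how sensitive the headline number is to that denominator, here's a minimal sketch with illustrative figures (roughly 600 confirmed cases, 60% of them fatal, plus a hypothetical pool of undetected subclinical infections):

```python
confirmed_cases = 600        # hospital-confirmed H5N1 cases (approximate)
deaths = 360                 # ~60% of confirmed cases

print(deaths / confirmed_cases)   # 0.6 -- the widely quoted 60% death rate

# If subclinical infections mean the true infection count is, say, 100x
# the confirmed count, the same deaths imply a far less lethal virus.
true_infections = confirmed_cases * 100
print(deaths / true_infections)   # 0.006 -- a 0.6% death rate
```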

Local snow does not disprove global climate change

Even with all those Snowpocalypseses(?), NASA says that 2011 was still the ninth warmest year on record since 1880—and all but one of the top 10 warmest years have happened in the last 11 years. 1998 is third-hottest, although it's worth noting that the top six years are all close enough to be effectively tied for first.

The best set of infographics ever

At Bloomberg Businessweek, Vali Chandrasekaran makes me incredibly happy by creating a series of six infographics demonstrating the ridiculous connections you can make when you start confusing correlation and causation. Did a conspiracy of baby Avas cause the U.S. housing market to implode? Was Michele Bachmann's candidacy doomed by the end of Staten Island Cakes? Are scientists raising the global average temperature in order to increase their own research funding? Find out here!

Ryan Gosling's soulful eyes + smooth statistics

Biostatistics Ryan Gosling will look deeply into your eyes and ask about your p-value.

Via Jacquelyn Gill