Analysis of all the elections since Trump produces no clear answers on the class and suburban/urban correlates of flippability

FiveThirtyEight studied every election since Trump -- 99 special elections plus regular state elections in NJ and Virginia -- and checked whether there were any strong predictors of whether Trump voters would support a Democratic candidate. Read the rest

Cluster analysis shows that the golden age of The Simpsons ended in Season 10

It's generally recognized that The Simpsons drifted from sharp comedy to cosy light entertainment as the years went by, and that a threshold was passed somewhere between seasons eight and eleven. Using data culled from IMDb and a contiguous cluster analysis, Nathan Cunn pinpoints the end of The Simpsons' golden age to the half-hour: episode 11 of season 10. This way of seeing things condemns no single episode's sins, merely drawing a statistical dividing line between Wild Barts Can't Be Broken and Sunday Cruddy Sunday. Compare The Principal and the Pauper, the season 9 episode traditionally identified as the shark-jumper, which in this chart is a controversial blip on the road to all the disengaged meta to come.

It's remarkable that the show managed to go for over nine seasons, and over 200 episodes, with an average rating of 8.2. The latter seasons, in contrast, have an average rating of 6.9, with only three of the 400+ episodes that followed achieving a rating higher than the average golden-age episode -- those being Trilogy of Error, Holidays of Future Passed, and Barthood. Given that the ratings approximately follow a Gaussian distribution, we expect (and, indeed, observe) that roughly half of the golden age episodes exceeded this mean value.
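Cunn's post doesn't reproduce his code here, but the core of a contiguous clustering is easy to sketch. As a minimal, hypothetical illustration (made-up ratings and a single changepoint, not Cunn's full analysis), this finds the episode index that best splits a ratings series into two internally similar eras:

```python
import numpy as np

def best_split(ratings):
    """Find the index that best divides a ratings series into two
    contiguous segments, by minimising total within-segment variance --
    a one-changepoint version of contiguous clustering."""
    ratings = np.asarray(ratings, dtype=float)
    n = len(ratings)
    best_idx, best_cost = None, np.inf
    for i in range(2, n - 1):  # keep at least two episodes per segment
        left, right = ratings[:i], ratings[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

# Made-up ratings: a high-rated era followed by a weaker one
toy = [8.3, 8.1, 8.4, 8.0, 8.2, 7.0, 6.8, 7.1, 6.9, 6.7]
print(best_split(toy))  # -> 5: the split lands where the decline begins
```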

Although The Simpsons isn’t quite the show it once was, the decline in the show’s latter seasons is more testament to the impossibly high standards set by the earlier seasons than it is an indictment of what the show became.

Read the rest

Automatically generate datasets that teach people how (not) to create statistical mirages

F.J. Anscombe's classic, oft-cited 1973 paper "Graphs in Statistical Analysis" showed that very different datasets could produce "the same summary statistics (mean, standard deviation, and correlation) while producing vastly different plots" -- Anscombe's point being that you can miss important differences if you just look at tables of data, and these leap out when you use graphs to represent the same data. Read the rest
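Generators in this vein typically work by nudging points at random and keeping only the nudges that leave the summary statistics (within some tolerance) intact. A minimal sketch of that core move -- the real generators also steer the cloud toward a target shape, which this toy version omits:

```python
import random
import statistics

def perturb(xs, ys, iters=100_000, tol=0.01, step=0.1):
    """Jitter points at random, keeping a move only if the means and
    standard deviations of both coordinates stay within `tol` of their
    starting values. (Real generators also steer the cloud toward a
    target shape; this toy version just wanders.)"""
    targets = (statistics.mean(xs), statistics.stdev(xs),
               statistics.mean(ys), statistics.stdev(ys))
    for _ in range(iters):
        i = random.randrange(len(xs))
        old = (xs[i], ys[i])
        xs[i] += random.uniform(-step, step)
        ys[i] += random.uniform(-step, step)
        stats = (statistics.mean(xs), statistics.stdev(xs),
                 statistics.mean(ys), statistics.stdev(ys))
        if any(abs(s - t) > tol for s, t in zip(stats, targets)):
            xs[i], ys[i] = old  # revert: summary stats drifted too far
    return xs, ys
```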

Sampling bias: how a machine-learning beauty contest awarded nearly all prizes to whites

If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices. Read the rest

Proposal: replace Algebra II and Calculus with "Statistics for Citizenship"

Andrew Hacker, a professor of both mathematics and political science at Queens College, has a new book out, The Math Myth: And Other STEM Delusions, which makes the case that the inclusion of algebra and calculus in the high school curriculum discourages students from learning mathematics, and displaces much more practical instruction in statistical and risk literacy, which he calls "Statistics for Citizenship." Read the rest

Why all scientific diet research turns out to be bullshit

The gold standard for researching the effects of diet on health is the self-reported food diary, which is prone to lots of error, underreporting of "bad" food, and changes in diet that result from simply keeping track of what you're eating. The standard tool for correcting these errors is comparison with yet more self-reported tests. Read the rest

Stats-based response to UK Tories' call for social media terrorism policing

David Cameron wants social media companies to invent a terrorism-detection algorithm and send all the "bad guys" it detects to the police -- but this will fall prey to the well-known (to statisticians) "paradox of the false positive," producing tens of thousands of false leads that will drown the cops. Read the rest
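The arithmetic behind the paradox is worth seeing once. With assumed, illustrative numbers (not figures from the post), even a rare condition plus a very accurate detector yields a flood of false alarms:

```python
# Assumed, illustrative numbers -- not figures from the post.
population = 60_000_000   # roughly the UK
true_terrorists = 1_000   # generously assume a thousand actual bad guys
accuracy = 0.99           # a fantastically good detector: 1% error rate

true_positives = true_terrorists * accuracy
false_positives = (population - true_terrorists) * (1 - accuracy)

print(f"{false_positives:,.0f} innocent people flagged")  # 599,990
print(f"P(terrorist | flagged) = "
      f"{true_positives / (true_positives + false_positives):.4f}")  # 0.0016
```

In other words, under these assumptions, fewer than two in a thousand people flagged by the system would actually be the "bad guys" it was looking for.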

10% of Americans have 10 or more alcoholic drinks every day

The eye-popping stat comes from Philip J Cook's 2007 booze-economics book Paying the Tab. Read the rest

Piketty's methods: parsing wealth inequality data and its critique

I've been writing about Thomas Piketty's magisterial economics bestseller Capital in the Twenty-First Century for some time now (previously), and have been taking a close interest in criticisms of his work, especially the Financial Times's critique of his methods and his long, detailed response. I freely admit that I lack the stats and economics background to make sense of this highly technical part of the debate, but I think that Wonkblog's analysis looks sound, as does the More or Less play-by-play (MP3). Read the rest

Spurious correlations: an engine for head-scratching coincidences

The Spurious Correlations engine helps you discover bizarre and delightful spurious correlations, and collects some of the most remarkable ones. For example, Per capita consumption of sour cream (US) correlates with Motorcycle riders killed in noncollision transport accident with an astounding coefficient of 0.916391. Meanwhile, by exploring the engine, I've discovered a surprising correlation between the Age of Miss America and Murders by steam, hot vapours and hot objects (a whopping 0.870127!).
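Most of these gems are pairs of slowly trending annual series, and any two series that drift in the same direction will correlate whether or not they're related. A toy demonstration with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2015)

# Two made-up, causally unrelated series that both drift upward
sour_cream = 4.0 + 0.05 * (years - 2000) + rng.normal(0, 0.05, len(years))
bike_deaths = 50 + 3.0 * (years - 2000) + rng.normal(0, 3.0, len(years))

r = np.corrcoef(sour_cream, bike_deaths)[0, 1]
print(f"r = {r:.3f}")  # typically well above 0.9: the shared trend does all the work
```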

Spurious Correlations

(via Waxy) Read the rest

Big Data has big problems

Writing in the Financial Times, Tim Harford (The Undercover Economist Strikes Back, Adapt, etc) offers a nuanced but ultimately damning critique of Big Data and its promises. Harford's point is that Big Data promises to overcome sampling bias by simply sampling everything, but the actual datasets that make up Big Data are anything but comprehensive, and are even more prone to the statistical errors that haunt regular analytic science.

What's more, much of Big Data is "theory free" -- the correlation is observable and repeatable, so it is assumed to be real, even if you don't know why it exists -- but theory-free conclusions are brittle: "If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down." Harford builds on recent critiques of Google Flu Trends (the poster child for Big Data) and goes further. This is your must-read for today. Read the rest

SF writing competition: a world without the Normal Curve!

Charles writes, "It's hard to imagine how we would have gotten all of the whiz-bang technology we enjoy today without the discovery of probability and statistics. From vaccines to the Internet, we owe a lot to the probabilistic revolution, and every great revolution deserves a great story!

"The Fields Institute for Research in Mathematical Sciences has partnered up with the American Statistical Association in launching a speculative fiction competition that calls on writers to imagine a world where the Normal Curve had never been discovered. Stories will be following in the tradition of Gibson and Sterling's steampunk classic, The Difference Engine, in creating an imaginative alternate history that sparks the imagination. The winning story will receive a $2000 grand prize, with an additional $1500 in cash available for youth submissions."

What would the world be like if the Normal Curve had never been discovered? (Thanks, Charles!) Read the rest

Who reads books in America, and how?

The Pew Internet and American Life Project has released a new report on reading, called E-Reading Rises as Device Ownership Jumps. It surveys American book-reading habits, looking at print books, electronic books, and audiobooks, and reports that ebook readership is increasing. The report includes a "snapshot" of readership broken down by gender, race, and age, showing strong reading affinity among visible minorities and women, and a strong correlation between high incomes and readership. The most interesting number for me is that 76 percent of Americans read at least one book last year, which is much higher than I'd have guessed.

E-Reading Rises as Device Ownership Jumps

(via Jim Hines) Read the rest

Understanding spurious correlation in data-mining

Last May, Dave at Euri.ca took a crack at expanding Gabriel Rossman's excellent post on spurious correlation in data. It's an important read for anyone wondering whether the core hypothesis of the Big Data movement is that every sufficiently large pile of horseshit must have a pony in it somewhere. As O'Reilly's Nat Torkington says, "Anyone who thinks it's possible to draw truthful conclusions from data analysis without really learning statistics needs to read this." Read the rest
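The mechanism is easy to reproduce yourself: correlate enough random variables against a target and some of them will look strongly related by chance alone. A minimal sketch (all noise, no ponies):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_candidates = 20, 10_000

target = rng.normal(size=n_samples)                      # pure noise
candidates = rng.normal(size=(n_candidates, n_samples))  # more pure noise

# Dredge: correlate every candidate with the target, keep the "best"
best = max(abs(np.corrcoef(target, c)[0, 1]) for c in candidates)
print(f"best |r| in {n_candidates} tries: {best:.3f}")
# With 20 samples and 10,000 tries the winner usually lands around
# |r| ~ 0.7 -- a strong-looking relationship made entirely of noise.
```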

Young brothers explain Bayes's theorem

These two young fellows are brothers from Palo Alto who've set out to produce a series of videos explaining the technical ideas in my novel Little Brother, and their first installment, explaining Bayes's Theorem, is a very promising start. I'm honored -- and delighted!

Technology behind "Little Brother" - Jamming with Bayes Rule Read the rest

Statistics Done Wrong: a guide to spotting and avoiding stats errors

Alex Reinhart's Statistics Done Wrong: The Woefully Complete Guide is an important reference, right up there with classics like How to Lie With Statistics. The author has kindly published the whole text free online under a CC-BY license, with an index. It's intended for people with no stats background and is extremely readable and well-presented. The author says he's working on a new edition with new material on statistical modelling. Read the rest

How long should we expect Google Keep to last?

On the Guardian, Charles Arthur has totted up the lifespan of 39 products and services that Google has killed off in the past due to insufficient public interest. One interesting finding is that Google is becoming less patient with its less popular progeny, with an accelerating trend to killing off products that aren't cutting it. This was occasioned by the launch of Google Keep, a networked note-taking app which has the potential to become quite central to your workflow, and to be quite disruptive if Google kills it -- much like Google Reader, which is scheduled for impending switch-off.

So if you want to know when Google Keep, opened for business on 21 March 2013, will probably shut -- again, assuming Google decides it's just not working -- then the mean suggests the answer is: 18 March 2017. That's about long enough for you to cram lots of information that you might rely on into it; and also long enough for Google to discover that, well, people aren't using it to the extent that it hoped. Much the same as happened with Knol (lifespan: 1,377 days, from 23 July 2008 to 30 April 2012), or Wave (1,095 days, from May 2009 to 30 April 2012) or of course Reader (2,824 days, from 7 October 2005 to 1 July 2013).

If you want to play around further with the numbers: assuming that closure dates are normally distributed around the mean lifespan, and that Google is going to shut Google Keep, there's a 68% chance that the closure will occur between April 2015 and February 2019.
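That 68% figure is just the one-standard-deviation band of a normal distribution. Here's a sketch of the calculation using only the three lifespans named above as hypothetical stand-in data, since Arthur's full 39-product table isn't reproduced here:

```python
import statistics
from datetime import date, timedelta

# Hypothetical stand-in data: just the three products named above.
# Arthur's own figures come from all 39 killed products, so his mean
# (18 March 2017) and interval differ from this toy calculation.
lifespans = [1377, 1095, 2824]  # Knol, Wave, Reader (days)

mean = statistics.mean(lifespans)
sd = statistics.stdev(lifespans)
launch = date(2013, 3, 21)  # Google Keep's launch

# Under a normal model, ~68% of outcomes fall within one sd of the mean
print("expected shutdown:", launch + timedelta(days=mean))
print("68% window:", launch + timedelta(days=mean - sd),
      "to", launch + timedelta(days=mean + sd))
```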

Read the rest
