A leading form of statistical malpractice in scientific studies is to retroactively comb through the data for "interesting" patterns; while such patterns may provide useful leads for future investigations, simply cherry-picking data that looks significant out of a study that has otherwise failed to prove out the researcher's initial hypothesis can generate false -- but plausible-seeming -- conclusions.
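A toy simulation (invented here, not from any particular study) shows how combing noise for patterns manufactures significance: slice pure-noise data a hundred arbitrary, after-the-fact ways and "test" each slice, and several will clear the usual significance bar by chance alone.

```python
import random
import statistics

# Invented illustration: 200 observations of pure noise, no real effect
random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)]

def looks_significant(group_a, group_b, z_threshold=1.96):
    """Crude two-sample z-test on the difference of group means."""
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.pvariance(group_a) / len(group_a)
          + statistics.pvariance(group_b) / len(group_b)) ** 0.5
    return abs(mean_diff / se) > z_threshold

hits = 0
for _ in range(100):            # 100 post-hoc ways of splitting the same data
    random.shuffle(data)
    if looks_significant(data[:100], data[100:]):
        hits += 1
print(f"{hits} of 100 arbitrary splits look 'significant' by chance")
```

At a 5% threshold you expect about five spurious "findings" per hundred comparisons, which is exactly the lead-generating machine the post describes.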
FiveThirtyEight studied every election since Trump's election -- 99 special elections plus regular state elections in New Jersey and Virginia -- and looked for strong predictors of whether Trump voters would support a Democratic candidate.
It's generally recognized that The Simpsons drifted from sharp comedy to cosy light entertainment as the years went by, and that a threshold was passed somewhere between seasons eight and eleven. Using data culled from IMDB and a contiguous cluster analysis, Nathan Cunn pinpoints the exact end of The Simpsons' golden age to the half-hour: episode 11 of season 10. This way of seeing things condemns no single episode's sins, merely drawing a statistical dividing line between Wild Barts Can't Be Broken and Sunday Cruddy Sunday. Compare The Principal and the Pauper, the season 9 episode traditionally identified as the shark-jumper, which in this chart is just a controversial blip on the road to all the disengaged meta to come.
It's remarkable that the show managed to go for over nine seasons, and over 200 episodes, with an average rating of 8.2. The latter seasons, in contrast, have an average rating of 6.9, with only three of the subsequent 400-plus episodes achieving a rating higher than the average golden age episode -- those episodes being Trilogy of Error, Holidays of Future Passed, and Barthood. Given that the ratings approximately follow a Gaussian distribution, we expect (and, indeed, observe) that roughly half of the golden age episodes exceeded this mean value.
Although The Simpsons isn't quite the show it once was, the decline in its latter seasons is more a testament to the impossibly high standards set by the earlier seasons than an indictment of what the show became.
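Cunn's contiguous clustering can be approximated with a one-changepoint least-squares search: try every possible split of the episode sequence and keep the one that minimizes within-segment variance. A minimal sketch on invented ratings (his analysis used the real IMDB numbers):

```python
import statistics

def best_changepoint(ratings):
    """Return the split index minimizing total within-segment sum of squares.

    A single-changepoint least-squares search: a simplified stand-in for
    the contiguous cluster analysis described above.
    """
    def sse(xs):
        m = statistics.mean(xs)
        return sum((x - m) ** 2 for x in xs)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(ratings)):
        cost = sse(ratings[:k]) + sse(ratings[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Made-up data: a run of ~8.2-rated episodes followed by a ~6.9-rated tail
golden = [8.3, 8.1, 8.4, 8.0, 8.2, 8.1]
decline = [7.0, 6.8, 6.9, 7.1, 6.8]
print(best_changepoint(golden + decline))  # index where the drop begins
```

On real data you would run this over all 600-plus episode ratings; the recovered split is the statistical "end of the golden age."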
FJ Anscombe's classic, oft-cited 1973 paper "Graphs in Statistical Analysis" showed that very different datasets could share "the same summary statistics (mean, standard deviation, and correlation) while producing vastly different plots" -- Anscombe's point being that you can miss important differences if you just look at tables of data, differences that leap out when you graph the same data.
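Two of the quartet's four datasets (values as published in the 1973 paper) make the point concrete: one is a noisy line, the other a clean parabola, yet their means and correlations agree to several decimal places.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]  # roughly linear
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]  # a parabola

for y in (y1, y2):
    print(round(statistics.mean(y), 2), round(pearson(x, y), 3))
```

Both rows print a mean of 7.5 and a correlation of 0.816; only a plot reveals how different the shapes are.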
If you've read Cathy O'Neil's Weapons of Math Destruction (you should, right NOW), then you know that machine learning can be a way to apply a deadly, nearly irrefutable veneer of objectivity to our worst, most biased practices.
Andrew Hacker, a professor of both mathematics and political science at Queens College, has a new book out, The Math Myth: And Other STEM Delusions, which makes the case that the inclusion of algebra and calculus in the high school curriculum discourages students from learning mathematics, and displaces much more practical mathematical instruction in statistical and risk literacy, which he calls "Statistics for Citizenship."
The gold standard for researching the effects of diet on health is the self-reported food diary, which is prone to lots of error, underreporting of "bad" food, and changes in diet that result from simply keeping track of what you're eating. The standard tool for correcting these errors is comparison with more self-reported tests.
David Cameron wants social media companies to invent a terrorism-detection algorithm and send all the "bad guys" it detects to the police -- but this will fall prey to the well-known (to statisticians) "paradox of the false positive," producing tens of thousands of false leads that will drown the cops.
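The arithmetic behind the paradox fits in a few lines. The numbers below are illustrative assumptions, not Cameron's: a population of 60 million, 1,000 actual terrorists, and a detector that is 99% accurate in both directions.

```python
# Illustrative, assumed numbers -- the paradox holds for any rare target
population = 60_000_000      # roughly UK-sized population
terrorists = 1_000           # assumed number of actual bad guys
accuracy = 0.99              # detector catches 99% of terrorists...
false_positive_rate = 0.01   # ...but also flags 1% of innocents

true_hits = terrorists * accuracy
false_hits = (population - terrorists) * false_positive_rate

# Of everyone the algorithm flags, what fraction is actually a terrorist?
precision = true_hits / (true_hits + false_hits)
print(f"{false_hits:,.0f} innocent people flagged")
print(f"only {precision:.2%} of flagged people are real terrorists")
```

Nearly 600,000 innocents get flagged, and fewer than two in a thousand flags are real: that's the flood of false leads the post warns about.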
The eye-popping stat comes from Philip J Cook's 2007 booze-economics book Paying the Tab.
I've been writing about Thomas Piketty's magisterial economics bestseller Capital in the Twenty-First Century for some time now (previously), and have been taking a close interest in criticisms of his work, especially the Financial Times's critique of his methods and his long, detailed response. I freely admit that I lack the stats and economics background to make sense of this highly technical part of the debate, but I think that Wonkblog's analysis looks sound, as does the More or Less play-by-play (MP3).
The Spurious Correlations engine helps you discover bizarre and delightful spurious correlations, and collects some of the most remarkable ones. For example, per capita consumption of sour cream (US) correlates with motorcycle riders killed in noncollision transport accidents at the astounding rate of 0.916391. Meanwhile, exploring the engine, I've discovered a surprising correlation between the age of Miss America and murders by steam, hot vapours and hot objects (a whopping 0.870127!).
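Most pairings on the site work the same way: two series that both drift in one direction over time will correlate strongly even when nothing connects them. A minimal sketch with invented numbers (not the site's actual data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two invented, unrelated annual series that both happen to drift upward
sour_cream_lbs = [5.1, 5.3, 5.2, 5.6, 5.8, 5.7, 6.0, 6.2, 6.1, 6.4]
doctorates = [410, 425, 420, 440, 455, 450, 470, 480, 475, 495]

print(f"r = {pearson(sour_cream_lbs, doctorates):.3f}")
```

The shared upward trend does all the work; the correlation says nothing about sour cream causing doctorates, or vice versa.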
(via Waxy)
Writing in the Financial Times, Tim Harford (The Undercover Economist Strikes Back, Adapt, etc) offers a nuanced, but ultimately damning critique of Big Data and its promises. Harford's point is that Big Data's premise is that sampling bias can be overcome by simply sampling everything, but the actual data-sets that make up Big Data are anything but comprehensive, and are even more prone to the statistical errors that haunt regular analytic science.
What's more, much of Big Data is "theory free" -- the correlation is observable and repeatable, so it is assumed to be real, even if you don't know why it exists -- but theory-free conclusions are brittle: "If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down." Harford builds on recent critiques of Google Flu (the poster child for Big Data) and goes further. This is your must-read for today.
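The brittleness of theory-free correlation is easy to simulate: generate thousands of noise "predictors," keep whichever one best tracks a noise target, and watch the correlation evaporate on fresh data. An invented sketch, not Harford's:

```python
import random

random.seed(7)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A target series of pure noise, and 10,000 candidate "predictors" of noise
target = [random.gauss(0, 1) for _ in range(20)]
candidates = [[random.gauss(0, 1) for _ in range(20)] for _ in range(10_000)]

# Theory-free "discovery": keep whichever candidate tracks the target best
best = max(candidates, key=lambda c: abs(pearson(c, target)))
in_r = abs(pearson(best, target))

# The same "predictor" against ten fresh draws of the target process
fresh = [[random.gauss(0, 1) for _ in range(20)] for _ in range(10)]
out_r = sum(abs(pearson(best, t)) for t in fresh) / 10

print(f"in-sample |r| = {in_r:.2f}, out-of-sample mean |r| = {out_r:.2f}")
```

The "winning" correlation is strong in-sample and collapses out-of-sample, because nothing was ever behind it.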
Charles writes, "It's hard to imagine how we would have gotten all of the whiz-bang technology we enjoy today without the discovery of probability and statistics. From vaccines to the Internet, we owe a lot to the probabilistic revolution, and every great revolution deserves a great story!
"The Fields Institute for Research in Mathematical Sciences has partnered up with the American Statistical Association in launching a speculative fiction competition that calls on writers to imagine a world where the Normal Curve had never been discovered. Stories will be following in the tradition of Gibson and Sterling's steampunk classic, The Difference Engine, in creating an imaginative alternate history that sparks the imagination. The winning story will receive a $2000 grand prize, with an additional $1500 in cash available for youth submissions."
What would the world be like if the Normal Curve had never been discovered? (Thanks, Charles!)
The Pew Internet and American Life project has released a new report on reading, called E-Reading Rises as Device Ownership Jumps. It surveys American book-reading habits, looking at both print books and electronic books, as well as audiobooks. They report that ebook readership is increasing, and they also produced a "snapshot" showing readership breakdown by gender, race, and age. They show strong reading affinity among visible minorities and women, and a strong correlation between high incomes and readership. The most interesting number for me is that 76 percent of Americans read at least one book last year, which is much higher than I'd have guessed.
E-Reading Rises as Device Ownership Jumps
(via Jim Hines)
Last May, Dave at Euri.ca took a crack at expanding Gabriel Rossman's excellent post on spurious correlation in data. It's an important read for anyone wondering whether the core hypothesis of the Big Data movement is that every sufficiently large pile of horseshit must have a pony in it somewhere. As O'Reilly's Nat Torkington says, "Anyone who thinks it's possible to draw truthful conclusions from data analysis without really learning statistics needs to read this."
These two young fellows are brothers from Palo Alto who've set out to produce a series of videos explaining the technical ideas in my novel Little Brother, and their first installment, explaining Bayes's Theorem, is a very promising start. I'm honored -- and delighted!
Technology behind "Little Brother" - Jamming with Bayes Rule
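For readers who want the theorem itself, it fits in one line of code. The coin-drawer example below is a standard textbook illustration, not taken from the videos:

```python
def bayes(prior, likelihood, evidence):
    """Bayes's Theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical example: a drawer holds one fair coin and one two-headed
# coin. You grab one at random and flip heads. How likely is it that you
# grabbed the two-headed coin?
p_trick = 0.5                    # prior: either coin equally likely
p_heads_given_trick = 1.0        # the trick coin always lands heads
p_heads = 0.5 * 1.0 + 0.5 * 0.5  # total probability of flipping heads
posterior = bayes(p_trick, p_heads_given_trick, p_heads)
print(posterior)  # 2/3: one heads makes the trick coin the better bet
```

The same update rule, with a rare prior and an imperfect detector, is what drives the false-positive paradox at the heart of Little Brother.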
Alex Reinhart's Statistics Done Wrong: The Woefully Complete Guide is an important reference guide, right up there with classics like How to Lie With Statistics. The author has kindly published the whole text free online under a CC-BY license, with an index. It's intended for people with no stats background and is extremely readable and well-presented. The author says he's working on a new edition with new material on statistical modelling.