Pancake pioneer Saipancakes has combined a spirograph with a pancake-batter dispenser -- the Pangraph -- and it makes gorgeous fairy-pancakes with many nested symmetries.

You will need a knife, a non-toxic marker, and some math.


]]>Brilliant, high-speed math vlogger Vi Hart has revisited the topic of the sizes of infinities.

As with the previous installment, the new one looks at the different sets of numbers to show that some infinities are larger than others, but this one shows that the rational numbers are such a tiny infinity that they're a statistical anomaly and virtually impossible to find!

Transcendental Darts


I enjoyed learning about statistics, probability, zero, infinity, number sequences, and more in this heavily illustrated kids’ book called How to Be a Math Genius, by Mike Goldsmith. But would my 11-year-old daughter like it as much? I handed it to her after school and she became absorbed in it until she was called for dinner. She took it to the dinner table and read it while we ate. The next day, she asked for the book so she could finish it. Loaded with fun exercises (like cutting a hole through a sheet of paper so you can walk through it), *How to Be a Math Genius* will show kids (and adults) that math is often complicated, but doesn’t need to be boring. (This book is part of DK Children’s How to Be a Genius series. See my review of How to Be a Genius.)

See sample interior pages at Wink.

Just in time for you to get the most out of "The Fault in Our Stars," the incomparable, fast-talking mathblogger Vi Hart's latest video is a sparkling-clear explanation of one of my favorite math-ideas: the relative size of different infinities. If that's not enough for you, have a listen to this episode of the Math for Primates podcast.

Proof some infinities are bigger than other infinities

Cellular automata are curious and fascinating computer models programmed with simple rules that generate complex patterns, prompting us to ask whether the universe is a computer and life an algorithm. Over at Science News, Tom Siegfried has the first of a two-part series on cellular automata:

Traditionally, the math used for computing physical laws, like Newton’s laws of motion, uses calculus, designed for tasks like quantifying change by infinitesimal amounts over infinitesimal increments of time. Modern computers can help do the calculating, but they don’t work the way nature supposedly does. Today’s computers are digital. They process bits and bytes, discrete units of information, not the continuous variables typically involved in calculus.

From time to time in recent decades, scientists have explored the notion that the universe is also digital. Nobel laureate Gerard ’t Hooft, for instance, thinks that some sort of information processing on a submicroscopic level is responsible for the quantum features that describe detectable reality. He calls this version of quantum physics the cellular automaton interpretation.

"If the world is a computer, life is an algorithm"

[Video Link] There are roughly 80,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 unique ways to order 52 playing cards. “Any time you pick up a well shuffled deck, you are almost certainly holding an arrangement of cards that has never before existed and might not exist again.” *(Via Adafruit Industries)*
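That figure is just 52 factorial; a two-line Python check (mine, not Adafruit's) confirms the magnitude:

```python
import math

# number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)

print(len(str(orderings)))   # 68 digits: roughly 8.07 * 10**67
```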


You've probably seen this image making the rounds on social media. It shows a method of doing basic subtraction that's intended to appear wildly nonsensical and much harder to follow than the "Old Fashion" [sic] way of just putting the 12 under the 32 and coming up with an answer. This method of teaching is often attributed to Common Core, a set of educational standards recently rolled out in the US.

But, explains math teacher and skeptic blogger Hemant Mehta, this image actually makes a lot more sense than it may seem at first glance. For one thing, this method of teaching math isn't really new (our producer Jason Weisberger remembers learning it in high school). It's also not much different from the math you learned back when you were learning how to count change. It's meant to help kids do math in their heads, without borrowing or scratch-paper notations or counting on fingers. What's more, he says, it has absolutely nothing to do with Common Core, which doesn't specify *how* subjects have to be taught.

I admit it’s totally confusing, but here’s what it’s saying:

If you want to subtract 12 from 32, there’s a better way to think about it. Forget the algorithm. Instead, count up from 12 to an “easier” number like 15. (You’ve gone up 3.) Then, go up to 20. (You’ve gone up another 5.) Then jump to 30. (Another 10). Then, finally, to 32. (Another 2.)

I know. That’s still ridiculous. Well, consider this: Suppose you buy coffee and it costs $4.30 but all you have is a $20 bill. How much change should the barista give you back? (Assume for a second the register is broken.)

You sure as hell aren’t going to get out a sheet of paper ...
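Mehta's counting-up walkthrough can be sketched in a few lines of Python. This is a toy illustration of mine, not his code, and the choice of milestones (climb to a multiple of 5, then by tens) is an assumption:

```python
def count_up(start, end):
    """Subtract end - start by counting up through 'friendly' round numbers."""
    steps, cur = [], start
    for unit in (5, 10, 10, 10):          # a multiple of 5 first, then by tens
        nxt = (cur // unit + 1) * unit    # strictly next multiple of `unit`
        if nxt <= end:
            steps.append(nxt - cur)
            cur = nxt
    if cur < end:                         # final hop to the target
        steps.append(end - cur)
    return steps

print(count_up(12, 32))   # [3, 5, 10, 2], which sums to 20
```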

Charles writes, "It's hard to imagine how we would have gotten all of the whiz-bang technology we enjoy today without the discovery of probability and statistics. From vaccines to the Internet, we owe a lot to the probabilistic revolution, and every great revolution deserves a great story!

"The Fields Institute for Research in Mathematical Sciences has partnered with the American Statistical Association to launch a speculative fiction competition that calls on writers to imagine a world where the Normal Curve had never been discovered. Stories will follow in the tradition of Gibson and Sterling's steampunk classic, The Difference Engine, in creating an inventive alternate history that sparks the imagination. The winning story will receive a $2000 grand prize, with an additional $1500 in cash available for youth submissions."

What would the world be like if the Normal Curve had never been discovered? (*Thanks, Charles!*)


Carlo Séquin is a computer science professor and sculptor at UC Berkeley who explores the art of math, and the math of art. He lives in a world of impossible objects and mind-bending shapes. Séquin’s research has contributed to the pervasiveness of digital cameras and to a revolution in computer chip design. He has developed groundbreaking computer-aided design (CAD) tools for circuit designers, mechanical engineers, and architects. Meanwhile, his huge abstract sculptures have been exhibited around the world. Visiting the computer science professor emeritus’s office is like taking a trip down the rabbit hole. Paradoxical forms are found in every corner, piled on shelves, poised on pedestals, hanging from the ceiling—optical illusions embodied in paper, cardboard, plastic, and metal.

I wrote about Séquin for the new issue of California magazine and you can read it here: Sculpting Geometry

In the end-of-year episode (MP3) of the BBC's More or Less stats podcast, Tim Harford talks to a variety of interesting people about their "number of the year," with fascinating results.


But the crowning glory of the episode is Helen Arney's magnificent musical tribute to M48, the largest Mersenne prime yet calculated, which came to light in 2013. (Arney herself is heading out on a UK tour, delightfully named Full Frontal Nerdity)
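For scale: M48 is 2^57,885,161 - 1, and you can get its digit count from logarithms without ever computing the number itself (a back-of-the-envelope check of mine, not something from the podcast):

```python
from math import floor, log10

p = 57885161                       # exponent of M48 = 2**p - 1
digits = floor(p * log10(2)) + 1   # 2**p - 1 has the same digit count,
                                   # since 2**p is never a power of 10
print(digits)                      # 17425170: about 17.4 million digits
```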


Math with Bad Drawings' "Headlines from a Mathematically Literate World" is a rather good -- and awfully funny -- compendium of comparisons between attention-grabbing, math-abusing headlines and their math-literate equivalents.

Our World: After Switch in Standardized Tests, Scores Drop

Mathematically Literate World: After Switch in Standardized Tests, Scores No Longer Directly Comparable

Our World: Proposal Would Tax $250,000-Earners at 40%

Mathematically Literate World: Proposal Would Tax $250,000-Earners’ Very Last Dollar, and That Dollar Alone, at 40%

Our World: Still No Scientific Consensus on Global Warming

Mathematically Literate World: Still 90% Scientific Consensus on Global Warming

Our World: Hollywood Breaks Box Office Records with Explosions, Rising Stars

Mathematically Literate World: Hollywood Breaks Box Office Records with Inflation, Rising Population

Our World: Illegal Downloaders Would Have Spent $300 Million to Obtain Same Music Legally

Mathematically Literate World: Illegal Downloaders Would Never Have Bothered to Obtain Same Music Legally

Headlines from a Mathematically Literate World

The incomparable, incredible, mathematically gifted Vi Hart continues to make the world a better place for numbers and the people who love them, with a video explaining logarithms. Watch this one today (here's the torrent link).

Shardcore writes, "The Tate recently released a 'big data' set of the 70k artworks in their collection. I've been playing with it and finding all sorts of fun to be had. The latest experiment uses the Tate data as a springboard to algorithmically imagine new artworks - 88,577,208,667,721,179,117,706,090,119,168 to be precise."

(that's eighty-eight nonillion, five hundred seventy-seven octillion, two hundred eight septillion, six hundred sixty-seven sextillion, seven hundred twenty-one quintillion, one hundred seventy-nine quadrillion, one hundred seventeen trillion, seven hundred six billion, ninety million, one hundred nineteen thousand, one hundred sixty-eight possible artworks...)

We can imagine machines which spot the items within a representational work (look at Google Goggles, for example) but algorithms which spot the ‘emotions and human qualities’ of an artwork are more difficult to comprehend. These categories capture complex, uniquely human judgements which occupy a space which we hold outside of simple visual perception. In fact I think I’d find a machine which could accurately classify an artwork in this way a little sinister…

The relationships between these categories and the works are metaphorical in nature, allusions to whole classes of human experience that cannot be derived from simply ‘looking at’ the artwork. The exciting part of the Tate data is really the ‘humanity’ it contains, something absolutely essential when we’re talking about art – after all, culture cannot exist without culturally informed entities experiencing it.

It struck me that these are not only representations of existing artworks, but actually the vocabulary and structure required to describe new, as yet un-made, artworks.

Machine Imagined Artworks (2013)
(*Thanks, Shardcore!*)

Last May, Dave at Euri.ca took a crack at expanding Gabriel Rossman's excellent post on spurious correlation in data. It's an important read for anyone wondering whether the core hypothesis of the Big Data movement is that every sufficiently large pile of horseshit must have a pony in it somewhere. As O'Reilly's Nat Torkington says, "Anyone who thinks it’s possible to draw truthful conclusions from data analysis without really learning statistics needs to read this."

* If good looks and smarts are distributed normally, and

* If good looks and smarts have nothing to do with each other, and

* If movie producers want both smarts and looks

* Then, by observing employed actors we’ll assume that looks and smarts have a negative correlation

* Even though we constructed this experiment with no correlation

Here’s a graph of 250 randomly generated points (with no correlation), with the red circles representing “actors who are smart and good looking enough to get a job” (looks+smarts>2), and lighter blue x’s representing “people who wanted to be actors.”

If we only look at actors with jobs, we’ll see a clearly negative correlation between smarts and good looks. In fact, some brilliant actors are less attractive than the average person, and some gorgeous actors are dumber than the average person. Even more interesting, though, is that if we try to rule out bias by looking at aspiring but unsuccessful actors as well, we’ll find that they exhibit a similar correlation...
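Dave's setup is easy to re-create. Here's my own sketch of it in plain Python (not his code, and with a larger sample than his 250 points so the selection effect is unmistakable):

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
# looks and smarts: independent standard normals, so zero true correlation
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10000)]
actors = [(l, s) for l, s in people if l + s > 2]   # only the hired get observed

print(round(pearson(*zip(*people)), 2))   # ~0.0 in the full population
print(round(pearson(*zip(*actors)), 2))   # strongly negative among working actors
```

Conditioning on the sum of two independent variables clearing a threshold is exactly the "collider bias" Rossman describes: the selection step manufactures the negative correlation.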

You’re probably polluting your statistics more than you think
(*via O'Reilly Radar*)


These two young fellows are brothers from Palo Alto who've set out to produce a series of videos explaining the technical ideas in my novel Little Brother, and their first installment, explaining Bayes's Theorem, is a very promising start. I'm honored -- and delighted!
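For the curious, Bayes's Theorem itself fits in a few lines. The detector numbers below are invented for illustration and aren't from the video:

```python
def bayes(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | evidence) via Bayes's Theorem."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / evidence

# A detector catches 99% of real events but false-alarms 5% of the time,
# and real events make up only 1% of all cases:
posterior = bayes(0.01, 0.99, 0.05)
print(round(posterior, 3))   # 0.167: a positive reading is still mostly a false alarm
```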

Technology behind "Little Brother" - Jamming with Bayes Rule

Alex Reinhart's Statistics Done Wrong: The woefully complete guide is an important reference guide, right up there with classics like How to Lie With Statistics. The author has kindly published the whole text free online under a CC-BY license, with an index. It's intended for people with no stats background and is extremely readable and well-presented. The author says he's working on a new edition with new material on statistical modelling.

Surveys of statistically significant results reported in medical and psychological trials suggest that many p values are wrong, and some statistically insignificant results are actually significant when computed correctly [25, 2]. Other reviews find examples of misclassified data, erroneous duplication of data, inclusion of the wrong dataset entirely, and other mixups, all concealed by papers which did not describe their analysis in enough detail for the errors to be easily noticed [1, 26].

Sunshine is the best disinfectant, and many scientists have called for experimental data to be made available through the Internet. In some fields, this is now commonplace: there exist gene sequencing databases, protein structure databanks, astronomical observation databases, and earth observation collections containing the contributions of thousands of scientists. Many other fields, however, can’t share their data due to impracticality (particle physics data can include many terabytes of information), privacy issues (in medical trials), a lack of funding or technological support, or just a desire to keep proprietary control of the data and all the discoveries which result from it. And even if the data were all available, would anyone analyze it all to spot errors?

Similarly, scientists in some fields have pushed towards making their statistical analyses available through clever technological tools. A tool called Sweave, for instance, makes it easy to embed statistical analyses performed using the popular R programming language inside papers written in LaTeX, the standard for scientific and mathematical publications. The result looks just like any scientific paper, but another scientist reading the paper and curious about its methods can download the source code, which shows exactly how all the numbers were calculated. But would scientists avail themselves of the opportunity? Nobody gets scientific glory by checking code for typos.

Statistics Done Wrong: The woefully complete guide

(*Image: XKCD*)

In the past few weeks I have been analyzing data from a research project. The topic is not important for our discussion here; the methodology, however, is. The approach I am using is called a gain score analysis. Participants are assigned to one of two groups, and each group receives a different intervention. For each group we measured our outcome variable at baseline, that is, before treatment. After the intervention we will measure our outcome variable again. The gain score is defined as the final measurement minus the baseline measurement. In other words, the magnitude of the change. By focusing on the magnitude of the change we don’t have to worry about the fact that the baseline scores were not identical. We use a statistical test to see if one group gained significantly more than the other.

A value-added measure of teaching is also a gain score analysis. Students’ performance is measured at the beginning of the year and again at year’s end. The difference is the gain score or, as it is called in education, the value added. The average gain score for a group of students is said to be the value added by the teacher.

What is wrong with this approach? After all, it seems to be identical to what my colleagues and I are doing in our research. Unfortunately, there is a crucial difference. In my study the participants were randomly assigned to the two groups. A gain score analysis cannot be valid if the group assignments are not random.
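Genovese's design can be sketched in Python. The numbers here are invented, and the Welch t statistic stands in for whatever test his team actually used:

```python
import random
from math import sqrt
from statistics import fmean, variance

def gain_scores(baseline, post):
    """Gain score: final measurement minus baseline, per participant."""
    return [after - before for before, after in zip(baseline, post)]

def welch_t(a, b):
    """Welch's t statistic comparing two groups' mean gains."""
    return (fmean(a) - fmean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

random.seed(7)
participants = list(range(40))
random.shuffle(participants)                 # random assignment: the crucial step
group_a, group_b = participants[:20], participants[20:]

# hypothetical outcomes: group A's intervention adds a bit more on average
baseline = {p: random.gauss(50, 10) for p in participants}
post = {p: baseline[p] + random.gauss(5 if p in group_a else 2, 5)
        for p in participants}

gains_a = gain_scores([baseline[p] for p in group_a], [post[p] for p in group_a])
gains_b = gain_scores([baseline[p] for p in group_b], [post[p] for p in group_b])
print(round(welch_t(gains_a, gains_b), 2))   # positive if group A gained more
```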

Here's what my Dad added:

I agree with Jerry Genovese. There are several methodological problems with value-added evaluations of teachers, as I understand the concept from Jerry's blog. First, the issue of comparisons: he's right that sampling has to be random. Not only that, the sample size has to be sufficiently large (sufficient power) and representative. To be representative, the proportions of certain demographically defined groups of students have to be proportionally represented in the comparison groups. Besides that, there is the issue of what constitutes an appropriate measure of value. In the case of student scores, we need to know whether the tests of student performance are good predictors of future success. In Finland, the students are not exposed to such tests until later on when they compete in the PISA, which is an international test of performance by country. Yet despite, or perhaps because of, this lack of emphasis, they greatly outperform American kids. The value that Finns use to compare teachers is based on rigorous standards of pre-service education, including attainment of a Master's degree, and very competitive salaries. These teachers are expected to be knowledgeable and innovative. In the U.S., the teachers are expected to get their students to attain scores on standardized tests in a high stakes environment, which inevitably leads to cheating and sacrifice of creative learning opportunities.

Finally, in order to do a proper comparison of teacher performance, you have to eliminate (control for) variations in the student populations being served. Students learn at different rates, are subject to cultural influences, have varying degrees of home encouragement and support, and the list goes on. There can be no meaningful comparison among teachers who have vastly different student populations because a significant variable plays a confounding role.

Value added measures of teachers are invalid
(*Thanks, Jeremy!*)

If you're the type of person who really needs some good visuals to make a concept stick in your head, this series of YouTube videos made by the British Psychological Society Media Centre will help you remember the meanings behind statistical concepts like "correlation", "frequency distributions", and "sampling error". There are four videos in the series so far, and they do a great job of painting pictures around abstract ideas. Bonus: Soothing music.

(*Via Open Culture*)

This isn't the same as being able to decrypt all of Tor in realtime, but it does suggest that the NSA could selectively decrypt its stored archives of Tor traffic.

However, the new version of Tor, 2.4, uses elliptic curve Diffie-Hellman ciphers, which are probably beyond the NSA's reach.

Graham faults the Tor Project for the poor uptake of its new version, though as an Ars Technica commenter points out, popular GNU/Linux distributions like Debian and its derivative Ubuntu are also to blame, since they only distribute the older, weaker version. In either event, this is a wake-up call that will likely spur both the Tor Project and the major distros to push the update.

Yesterday's revelations about the NSA's ability to decrypt 'secure' communications were taken by many to mean that the NSA had made fundamental mathematical or computing breakthroughs that allowed it to decrypt securely enciphered messages. But it's pretty clear that's *not* what's going on.

Mostly, the NSA has spent $250,000,000 per year on a program of sabotage, through which they have inveigled proprietary hardware and software companies, as well as standards bodies, into deliberately introducing back-doors into their technology. This is much more frightening than the idea that the NSA has made profound mathematical breakthroughs -- such breakthroughs might stay within the NSA's walls for years or decades. But a program of systematic sabotage against common crypto tools means that anyone of sufficient skill and attentiveness is likely to discover and exploit those same back-doors -- that means that organized crime, totalitarian states, and other entities even less savory than the NSA should now be assumed to have full access to the financial system, government databases, and other sensitive systems.

But the good news is that, as the ProPublica article mentioned (quoting whistleblower Edward Snowden), "Properly implemented strong crypto systems are one of the few things that you can rely on." That means that free/open source security tools like Tor (which can be publicly inspected for sabotage) can indeed be trusted, where they use state-of-the-art crypto, and implement it well.

It's not surprising to learn that 1024-bit RSA/DH can be broken by spending huge sums on brute-force computation -- that was already public knowledge prior to yesterday's revelations. But crypto is asymmetrical: it is much, much easier to make crypto stronger than it is to break crypto through brute force. Merely by switching to 1025-bit RSA/DH keys, the Tor Project could double the cost of a brute-force attack. Switching to 1030-bit RSA/DH keys increases the difficulty 64-fold. And by switching to more secure ciphers like elliptic curve Diffie-Hellman, Tor becomes vastly more secure still.
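The arithmetic behind those claims is just powers of two; brute-force work scales with the size of the key space (smarter-than-brute-force attacks aside):

```python
def key_space(bits):
    """Brute-force search space for a `bits`-bit key; doubles per extra bit."""
    return 2 ** bits

print(key_space(1025) // key_space(1024))   # 2: one extra bit, twice the work
print(key_space(1030) // key_space(1024))   # 64: six extra bits, 64x the work
```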

You may have heard speculation that the NSA has secretly broken the strong cryptographic systems used to keep data secret -- after all, why collect all that scrambled data if they can't unscramble it? But Bruce Schneier argues (convincingly) that this is so impossible as to be fanciful. So why have they done this? My guess is that they're counting on flaws being revealed in the cryptographic implementations in the field (or maybe they've discovered such flaws and are keeping them secret). Or they're hoping for a big breakthrough in the future (quantum computing, anyone?).

Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical.

There’s more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we’re trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less.

So while the NSA certainly has symmetric cryptanalysis capabilities that we in the academic world do not, converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful.

How Advanced Is the NSA’s Cryptanalysis — And Can We Resist It?

(*Image: A Stick Figure Guide to the Advanced Encryption Standard (AES) *)


About 200 million people go to U.S. beaches each year. About 36 of those hundreds of millions are attacked by sharks. Most of them survive. In contrast, more than 30,000 of those millions of beach-goers have to be rescued from surfing accidents. And many humans each year die, or must be rescued, in drowning incidents for which no other creature is to blame.

So, will we see Human Week, or Human-nado mockumentaries any time soon?

[Oceana.org]

Flavio Garcia, a security researcher from the University of Birmingham, has been ordered not to deliver an important paper at the Usenix Security conference by an English court. Garcia, along with colleagues from a Dutch university, had authored a paper showing the security failings of the keyless entry systems used by a variety of luxury cars. Volkswagen asked an English court for an injunction censoring his work -- which demonstrated their incompetence and the risk they'd exposed their customers to -- and Mr Justice Birss agreed.

Garcia and his colleagues from the Stichting Katholieke Universiteit, Baris Ege and Roel Verdult, said they were "responsible, legitimate academics doing responsible, legitimate academic work" and their aim was to improve security for everyone, not to give criminals a helping hand at hacking into high-end cars that can cost their owners £250,000.

They argued that "the public have a right to see weaknesses in security on which they rely exposed". Otherwise, the "industry and criminals know security is weak but the public do not".

It emerged in court that their complex mathematical investigation examined the software behind the code, which has been available on the internet since 2009.

The scientists said it had probably been obtained using a technique called "chip slicing", which involves analysing a chip under a microscope, taking it to pieces and inferring the algorithm from the arrangement of the microscopic transistors on the chip itself – a process that costs around £50,000. The judgment was handed down three weeks ago without attracting any publicity, but has now become part of a wider discussion about car manufacturers' responsibilities relating to car security.

Scientist banned from revealing codes used to start luxury cars [Lisa O'Carroll/The Guardian]

(*Image: The Fragile, a Creative Commons Attribution Share-Alike (2.0) image from meetthewretched's photostream*)
]]>

We've featured doodling, fast-talking YouTube mathematician Vi Hart a lot here, but her latest, a 30-minute extended mix, is absolutely remarkable, even by her high standards. For 30 glorious minutes, Ms Hart explores the nature of randomness and pattern, using Stravinsky's 12-tone music as a starting-point and rocketing through constellations, the nature of reality, Borges's library, and more. On the way, she ends up with a good working definition of creativity, and explores the dilemma of structure versus creation. Brava, Ms Hart, you have outdone yourself! Plus, I like your copyright jokes.

Twelve Tones

Want to play a game of Tic-Tac-Toe that's genuinely challenging? Try "Ultimate Tic-Tac-Toe," in which each square is made up of another, smaller Tic-Tac-Toe board, and to win a square you have to win its mini-game. Ben Orlin says he discovered the game at a mathematicians' picnic, and he explains a wrinkle in the rules:

You don’t get to pick which of the nine boards to play on. That’s determined by your opponent’s previous move. Whichever square he picks, that’s the board you must play in next. (And whichever square you pick will determine which board he plays on next.)...

This lends the game a strategic element. You can’t just focus on the little board. You’ve got to consider where your move will send your opponent, and where his next move will send you, and so on.

The resulting scenarios look bizarre. Players seem to move randomly, missing easy two- and three-in-a-rows. But there’s a method to the madness – they’re thinking ahead to future moves, wary of setting up their opponent on prime real estate. It is, in short, vastly more interesting than regular tic-tac-toe.
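Orlin's "your move picks my board" rule is easy to state in code. Here's a minimal sketch, with boards and squares both indexed 0-8; the "play anywhere if sent to a finished board" clause is a common house rule I've assumed, not something spelled out in his post:

```python
def legal_boards(opponents_square, finished):
    """Which of the nine mini-boards may be played in next.

    opponents_square: 0-8 index of the square your opponent just took
                      inside their mini-board, or None on the first move.
    finished: set of mini-board indices already won or drawn.
    """
    if opponents_square is None or opponents_square in finished:
        return [b for b in range(9) if b not in finished]   # play anywhere open
    return [opponents_square]                               # sent to that board

print(legal_boards(4, set()))   # [4]: opponent played the centre square
print(legal_boards(4, {4}))     # every open board, since board 4 is finished
```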

Ultimate Tic-Tac-Toe
(*via Kottke*)