BBC Radio 4's great math and science show "The Infinite Monkey Cage" did a great (and very funny) episode on crypto and Bletchley Park, with Robin Ince, Brian Cox, Dave Gorman, Simon Singh and Dr Sue Black.
Jeff Moser has a clear, fascinating enumeration of all the incredible math stuff that happens between a server and your browser when you click on an HTTPS link and open a secure connection to a remote end. It's one of the most important (and least understood) parts of the technical functioning of the Internet.
People sometimes wonder if math has any relevance to programming. Certificates give a very practical example of applied math. Amazon's certificate tells us that we should use the RSA algorithm to check the signature. RSA was created in the 1970s by MIT professors Ron *R*ivest, Adi *S*hamir, and Len *A*dleman, who found a clever way to combine ideas spanning 2000 years of math development to come up with a beautifully simple algorithm:
You pick two huge prime numbers "p" and "q." Multiply them to get "n = p*q." Next, you pick a small public exponent "e" which is the "encryption exponent" and a specially crafted inverse of "e" called "d" as the "decryption exponent." (More precisely, "d" is chosen so that e*d leaves a remainder of 1 when divided by (p-1)*(q-1).) You then make "n" and "e" public and keep "d" as secret as you possibly can and then throw away "p" and "q" (or keep them as secret as "d"). It's really important to remember that "e" and "d" are inverses of each other.
Now, if you have some message, you just need to interpret its bytes as a number "M." If you want to "encrypt" a message to create a "ciphertext", you'd calculate:
C ≡ M^e (mod n)
This means that you raise "M" to the "e"-th power (multiply "M" by itself "e" times). The "mod n" means that we only take the remainder (i.e., the "modulus") when dividing by "n." For example, 11 AM + 3 hours ≡ 2 (PM) (mod 12 hours). The recipient knows "d," which allows them to invert the operation and recover the original message:
C^d ≡ (M^e)^d ≡ M^(e*d) ≡ M^1 ≡ M (mod n)
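The whole scheme fits in a few lines of Python. This is a toy sketch with tiny primes purely to show the math; the specific numbers are illustrative and not from Moser's article, and real keys use primes hundreds of digits long:

```python
# Toy RSA with tiny primes -- illustrative only, never use such small keys.
p, q = 61, 53            # the two secret primes
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # e and d must be inverses modulo this value
e = 17                   # public "encryption exponent"
d = pow(e, -1, phi)      # private "decryption exponent" (Python 3.8+), 2753

M = 65                   # message bytes interpreted as a number
C = pow(M, e, n)         # encrypt: C = M^e mod n
assert pow(C, d, n) == M # decrypt: C^d = M^(e*d) = M mod n
```

The `pow(base, exp, mod)` built-in does the modular exponentiation efficiently, which is why even huge real-world exponents stay fast.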
Robert Ghrist from University of Pennsylvania wrote in to tell us about his new, free Coursera course in single-variable Calculus, which starts on Jan 7. Calculus is one of those amazing, chewy, challenging branches of math, and Ghrist's hand-drawn teaching materials look really engaging.
Calculus is one of the grandest achievements of human thought, explaining everything from planetary orbits to the optimal size of a city to the periodicity of a heartbeat. This brisk course covers the core ideas of single-variable Calculus with emphases on conceptual understanding and applications. The course is ideal for students beginning in the engineering, physical, and social sciences. Distinguishing features of the course include:
* the introduction and use of Taylor series and approximations from the beginning;
* a novel synthesis of discrete and continuous forms of Calculus;
* an emphasis on the conceptual over the computational; and
* a clear, entertaining, unified approach.
Calculus: Single Variable (Thanks, Robert!)
The good folks on the most-excellent BBC Radio/Open University statistical literacy programme More or Less decided to answer a year-old Reddit argument about how many Lego bricks can be vertically stacked before the bottom one collapses.
They got the OU's Dr Ian Johnston to stress-test a 2×2 Lego brick in a hydraulic testing machine, increasing the force to some 4,000 Newtons, at which point the brick basically melted. Based on this, they calculated the maximum weight a 2×2 brick could bear, and thus the maximum height of a Lego tower:
The average maximum force the bricks can stand is 4,240N. That's equivalent to a mass of 432kg (950lbs). If you divide that by the mass of a single brick, which is 1.152g, then you get the grand total of bricks a single piece of Lego could support: 375,000.
So, 375,000 bricks towering 3.5km (2.17 miles) high is what it would take to break a Lego brick.
"That's taller than the highest mountain in Spain. It's significantly higher than Mount Olympus [tallest mountain in Greece], and it's the typical height at which people ski in the Alps," Ian Johnston says.
"So if the Greek gods wanted to build a new temple on Mount Olympus, and Mount Olympus wasn't available, they could just - but no more - do it with Lego bricks. As long as they don't jump up and down too much."
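The arithmetic is easy to check yourself. This sketch reproduces the More or Less figures; the 9.6 mm brick height is the standard Lego dimension, assumed here rather than taken from the programme:

```python
# Back-of-the-envelope check of the More or Less Lego-tower figures.
g = 9.81                 # m/s^2, standard gravity
force = 4240.0           # N, average failure load of a 2x2 brick
brick_mass = 1.152e-3    # kg, mass of one 2x2 brick
brick_height = 9.6e-3    # m, standard brick height (assumed)

max_mass = force / g                              # ~432 kg
n_bricks = max_mass / brick_mass                  # ~375,000 bricks
tower_height_km = n_bricks * brick_height / 1000  # ~3.6 km

print(round(max_mass), round(n_bricks), round(tower_height_km, 1))
```

The height comes out at roughly 3.6 km, which rounds down to the programme's quoted 3.5 km figure once you allow for their rounding of the brick count.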
At his Psychology Today blog, Michael Chorost delves into a question about exoplanets that I've not really thought much about before — how easy they would be to leave.
Many of the potentially habitable exoplanets that we've found — the ones we call "Earth-like" — are actually a lot bigger than Earth. That fact has an effect — both on how actually habitable those planets would be for us humans and how easily any native civilizations that developed could slip the surly bonds of gravity and make it to outer space.
The good news, says Chorost, is that the change in surface gravity wouldn't be as large as you might guess, even for planets much bigger than Earth. The bad news: even a relatively small increase in surface gravity can mean a big increase in how fast a rocket would have to be going in order to leave the planet. It starts with one equation: SG = M/R^2, with mass and radius measured in Earth units.
Let’s try it with [exoplanet] HD 40307g, using data from the Habitable Exoplanet Catalog. Mass, 8.2 Earths. Radius, 2.4 times that of Earth. That gets you a surface gravity of 1.42 times Earth.
... it’s amazingly easy to imagine a super-Earth with a comfortable gravity. If a planet had eight Earth masses and 2.83 times the radius, its surface gravity would be exactly 1g. This is the “Fictional Planet” at the bottom of the table. Fictional Planet would be huge by Earth standards, with a circumference of 70,400 miles and an area eight times larger.
Does that mean we could land and take off with exactly the same technology we use here, assuming the atmosphere is similar? Actually, no. Another blogger, who goes by the moniker SpaceColonizer, pointed out that Fictional Planet has a higher escape velocity than Earth. Put simply, escape velocity is how fast you have to go away from a planet to ensure that gravity can never bring you back. For Earth, escape velocity is about 25,000 miles per hour. Fictional Planet has an escape velocity 68% higher. That’s 42,000 miles per hour.
Thanks to Apollo 18, who also helped with the math for Chorost's post.
Vi Hart is Khan Academy's professional mathemusician. (Yeah, I KNOW, right?) And, this year, she's making the most delightfully nerdy Thanksgiving dinner ever.
It begins with green bean matherole, topped with fried Borromean onion rings. But, besides the fact that it's finished with crispy, delicious hyperbolic geometry, what makes the matherole a matherole?
Vectors. Like the rings, vectors are part of geometry. They've got a magnitude (think: size of the green bean) and they've got a direction (think: which way the green bean is pointing). Most importantly, a single vector can be part of a field of vectors. And that, my friends, is an excellent starting point for a 9 x 13 pan full of beans.
Yes, it's $72. But this 3-D printed metal sculpture/bottle opener is fantastic. And so is its marketing copy.
The problem of beer: that it is within a 'bottle', i.e. a boundaryless compact 2-manifold homeomorphic to the sphere. Since beer bottles are not (usually) pathological or "wild" spheres, but smooth manifolds, they separate 3-space into two non-communicating regions: inside, containing beer, and outside, containing you. This state must not remain.
Read the rest of the product description and, you know, maybe buy the bottle opener, too. If you're feeling spendy.
Via Cliff Pickover
Chad Orzel's post, "Financiers Still Aren’t Rocket Scientists" is a timely reminder that Mitt Romney and other Wall Street Types are not, by and large, superhero math geniuses with their fingers on the arcane numeric truths underpinning all reality. Some quants are genuinely impressive mathematicians, but the industry's reputation as a realm of "numbers guys" is just wrong-o.
Financiers Still Aren’t Rocket Scientists (via Making Light)
You would think that the 2008 economic meltdown, in which the financial industry broke the entire world when they were blindsided by the fact that housing prices can go down as well as up, might have cut into the idea of Wall Street bankers as geniuses, but evidently not. The weird idea that the titans of investment banking are the smartest people on the planet continues to persist, even among people who ought to know better– another thing that bugged me about Chris Hayes’s Twilight of the Elites was the way he uncritically accepted the line that Wall Street was the very peak of the meritocracy. It’s not hard to see where it originates– Wall Street types can’t go twenty minutes without telling everybody how smart they are– but it’s hard to see why so many people accept such blatant propaganda without question.
Look, Romney was an investment banker and corporate raider at Bain Capital. This is admittedly vastly more quantitative work than, say, being a journalist, but it doesn’t make him a “numbers guy.” The work that they do relies almost as much on luck and personal connections as it does on math– they’re closer to being professional gamblers than mathematical scientists. This is especially true of Bain and Romney, as was documented earlier this year– Bain made some bad bets before Romney got there, and was deep in the hole, and he got them out in large part by exploiting government connections and a sort of hostage-taking brinksmanship, creating a situation in which their well-deserved bankruptcy would’ve created a nightmare for the people they owed money, which bought them enough time for some other bets to pay off.
Romney has no shortage of nerve, and while he creeps me out, he has the sort of faux charm that works well in the finance community. But he’s not a “numbers guy” in any sense that looks meaningful from over here in the land of science. He can do the math needed to add up his personal fortune, but the game that he made his money playing isn’t a rigorously mathematical one– people get rich in finance as much by playing hunches and cutting sharp deals as by crunching numbers. There are people who make their way in that business by taking a rigorously data-driven approach to investing– one of the many things I need to write up for the blog at some point is a review of a forthcoming book called The Physics of Wall Street– but they’re nowhere near a majority of the industry, and Romney’s not one of them.
The election is next week. And, with that in mind, Salon's Paul Campos has posted a helpful reminder explaining what the statistics at the fivethirtyeight blog actually mean (and what they don't).
In particular, you have to remember that, while Nate Silver gives President Obama a 77.4 percent chance of winning the presidential election, that's not the same thing as saying that Obama is going to win.
Suppose a weather forecasting model predicts that the chance of rain in Chicago tomorrow is 75 percent. How do we determine if the model produces accurate assessments of probabilities? After all, the weather in Chicago tomorrow, just like next week’s presidential election, is a “one-off event,” and after the event the probability that it rained will be either 100 percent or 0 percent. (Indeed, all events that feature any degree of uncertainty are one-off events – or to put it another way, if an event has no unique characteristics it also features no uncertainties).
The answer is, the model’s accuracy can be assessed retrospectively over a statistically significant range of cases, by noting how accurate its probabilistic estimates are. If, for example, this particular weather forecasting model predicted a 75 percent chance of rain on 100 separate days over the previous decade, and it rained on 75 of those days, then we can estimate the model’s accuracy in this regard as 100 percent. This does not mean the model was “wrong” on those days when it didn’t rain, any more than it will mean Silver’s model is “wrong” if Romney were to win next week.
What Silver is predicting, in effect, is that as of today an election between a candidate with Obama’s level of support in the polls and one with Mitt Romney’s level of support in those polls would result in a victory for the former candidate in slightly more than three out of every four such elections.
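Campos's calibration point is easy to demonstrate with a quick simulation. This is my own hypothetical sketch, not from the Salon piece: imagine a forecaster who says "75 percent chance of rain" on a long run of days, and suppose the true chance really is 75 percent on each one.

```python
# Monte Carlo check of forecast calibration: a "75% chance" forecaster
# is accurate if rain occurs on roughly 75% of the days so forecast,
# even though any individual dry day is not a "wrong" prediction.
import random

random.seed(1)
days = 100_000  # hypothetical history of 75%-chance forecasts
rained = sum(random.random() < 0.75 for _ in range(days))
print(rained / days)  # hovers near 0.75 for a well-calibrated model
```

The same logic applies to Silver's 77.4 percent: it is a claim about the long-run frequency over many such election-shaped situations, not a declaration that Obama will win this particular one.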
Guangzhou's Utopia Design created this Fibonacci Cabinet, whose drawers are scaled according to ratios from the Fibonacci sequence.
The peer-reviewed journal Advances in Pure Mathematics was tricked into accepting a nonsense math paper that was generated by a program called Mathgen.
To be fair, the journal did note several flaws in the paper, such as "In this paper, we may find that there are so many mathematical expressions and notations. But the author doesn’t give any introduction for them. I consider that for these new expressions and notations, the author can indicate the factual meanings of them," and requested that they be corrected prior to publication.
However, the "author" of the paper replied with a set of pat rebuttals ("The author believes the proofs given for the referenced propositions are entirely sufficient [they read, respectively, 'This is obvious' and 'This is clear']"), and these were seemingly sufficient for the editors.
Sadly, the paper wasn't published, as the "author" wasn't willing to pay the $500 peer-review fee.
On August 3, 2012, a certain Professor Marcie Rathke of the University of Southern North Dakota at Hoople submitted a very interesting article to Advances in Pure Mathematics, one of the many fine journals put out by Scientific Research Publishing. (Your inbox and/or spam trap very likely contains useful information about their publications at this very moment!) This mathematical tour de force was entitled “Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE”, and I quote here its intriguing abstract:
Let ρ=A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In , the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway-d’Alembert.
This is a nice follow-on from the Sokal hoax, wherein a humanities journal was tricked into accepting a nonsense paper on postmodernism. Goes to show that an inability to distinguish nonsense from scholarship exists on both sides of the "two cultures."