Boing Boing 

Amazing, invisible work that goes on when you click an HTTPS link


Jeff Moser has a clear, fascinating enumeration of all the incredible math stuff that happens between a server and your browser when you click on an HTTPS link and open a secure connection to a remote end. It's one of the most important (and least understood) parts of the technical functioning of the Internet.

People sometimes wonder if math has any relevance to programming. Certificates give a very practical example of applied math. Amazon's certificate tells us that we should use the RSA algorithm to check the signature. RSA was created in the 1970s by MIT professors Ron *R*ivest, Adi *S*hamir, and Len *A*dleman, who found a clever way to combine ideas spanning 2000 years of math development to come up with a beautifully simple algorithm:

You pick two huge prime numbers "p" and "q." Multiply them to get "n = p*q." Next, you pick a small public exponent "e" which is the "encryption exponent" and a specially crafted inverse of "e" called "d" as the "decryption exponent." You then make "n" and "e" public and keep "d" as secret as you possibly can and then throw away "p" and "q" (or keep them as secret as "d"). It's really important to remember that "e" and "d" are inverses of each other.

Now, if you have some message, you just need to interpret its bytes as a number "M." If you want to "encrypt" a message to create a "ciphertext", you'd calculate:

C ≡ M^e (mod n)

This means that you raise "M" to the power "e" (multiply "M" by itself "e" times). The "mod n" means that we only take the remainder (i.e. the "modulus") when dividing by "n." For example, 11 AM + 3 hours ≡ 2 (PM) (mod 12 hours). The recipient knows "d," which allows them to invert the exponentiation and recover the original message:

C^d ≡ (M^e)^d ≡ M^(e*d) ≡ M^1 ≡ M (mod n)
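To make the arithmetic concrete, here's a toy RSA round-trip in Python with tiny primes. This sketch is mine, not from Moser's article; real keys use primes hundreds of digits long plus padding schemes, and `pow(e, -1, phi)` needs Python 3.8 or later.

```python
# Toy RSA with tiny primes -- illustrative only, never use numbers this small.
p, q = 61, 53                 # the two secret primes
n = p * q                     # public modulus: n = 3233
e = 17                        # public "encryption exponent"
phi = (p - 1) * (q - 1)       # used to craft d so that e*d ≡ 1 (mod phi)
d = pow(e, -1, phi)           # secret "decryption exponent": d = 2753

M = 65                        # message, interpreted as a number smaller than n
C = pow(M, e, n)              # C ≡ M^e (mod n)  -> 2790
assert pow(C, d, n) == M      # C^d ≡ M (mod n)  -> the original 65 comes back
```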

The First Few Milliseconds of an HTTPS Connection (via O'Reilly Radar)

Cracking passwords with 25 GPUs


Security Ledger reports on a breakthrough in password-cracking, using 25 graphics cards in parallel to churn through astounding quantities of password possibilities in unheard-of timescales. It's truly the end of the line for passwords protected by older hashing algorithms; it illustrates neatly how yesterday's "password that would take millions of years to break" is this year's "password broken in an afternoon," and it has profound implications for the sort of password hash-dumps we've seen in the past two years.

A presentation at the Passwords^12 Conference in Oslo, Norway (slides available here), has moved the goalposts, again. Speaking on Monday, researcher Jeremi Gosney (a.k.a epixoip) demonstrated a rig that leveraged the Open Computing Language (OpenCL) framework and a technology known as Virtual Open Cluster (VCL) to run the HashCat password cracking program across a cluster of five, 4U servers equipped with 25 AMD Radeon GPUs and communicating at 10 Gbps and 20 Gbps over Infiniband switched fabric.

Gosney’s system elevates password cracking to the next level, and effectively renders even the strongest passwords protected with weaker encryption algorithms, like Microsoft’s LM and NTLM, obsolete.

In a test, the researcher’s system was able to churn through 348 billion NTLM password hashes per second. That renders even the most secure password vulnerable to compute-intensive brute force and wordlist (or dictionary) attacks. A 14 character Windows XP password hashed using NTLM (NT Lan Manager), for example, would fall in just six minutes, said Per Thorsheim, organizer of the Passwords^12 Conference.
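To get a feel for what 348 billion guesses per second means, here's a back-of-the-envelope calculation (my arithmetic, not the article's): an exhaustive search over all printable-ASCII passwords of a given length, with no salting or key-stretching to slow things down.

```python
# Time to exhaust every printable-ASCII password at 348 billion guesses/second.
RATE = 348e9        # NTLM hashes per second, as reported for Gosney's rig
CHARSET = 95        # printable ASCII characters

for length in (8, 10, 12):
    seconds = CHARSET ** length / RATE
    print(f"{length} chars: {seconds / 86400:,.1f} days")
# 8 characters fall in well under a day; each extra character multiplies the
# work by 95, which is why length (and slow hashes like bcrypt) matter.
```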

New 25 GPU Monster Devours Passwords In Seconds [Security Ledger] (via /.)

Computer classes should teach regular expressions to kids

My latest Guardian column is "Here's what ICT should really teach kids: how to do regular expressions," and it makes the case for including regular expressions in foundational IT and computer science courses. Regexps offer incredible power to normal people in their normal computing tasks, yet we treat them as deep comp-sci instead of something everyone should learn alongside typing.

I think that technical people underestimate how useful regexps are for "normal" people, whether a receptionist laboriously copy-pasting all the surnames from a word-processor document into a spreadsheet, a school administrator trying to import an old set of school records into a new system, or a mechanic hunting through a parts list for specific numbers.

The reason technical people forget this is that once you know regexps, they become second nature. Any search that involves more than a few criteria is almost certainly easier to put into a regexp, even if your recollection of the specifics is fuzzy enough that you need to quickly look up some syntax online.
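For a taste of the kind of chore this covers, here's a small Python sketch; the sample text and pattern are invented for illustration, but the receptionist's surname-harvesting job above boils down to exactly this:

```python
import re

# Pull "Surname, Firstname" pairs out of messy free text and keep the surnames.
text = "Attendees: Smith, Alice; Nguyen, Bob; O'Brien, Carol"
surnames = re.findall(r"([A-Z][\w'-]+),\s*\w+", text)
print(surnames)   # ['Smith', 'Nguyen', "O'Brien"]
```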

Here's what ICT should really teach kids: how to do regular expressions

Gigapixel images of Charles Babbage's Difference Engine #2


Greg sez, "This project is using a number of computational photography techniques to document Charles Babbage's 'Difference Engine No 2' for the Computer History Museum in Mountain View. There are interactive gigapixel images for the four cardinal views of the device available to view."

Babbage Difference Engine in Gigapixel (Thanks, Greg!)

Orientation video for Bell Labs programmers, 1973

Here's a 1973 orientation video from Bell Labs' Holmdel Computer Center, to get new, budding Unix hackers acquainted with all the different apparatus available to them, and also to let them know which counter to visit to get a different tape loaded onto one of the IBM mainframes.

The Holmdel Computer Center, Part 1 (Thanks, Dan!)

The many stages of writing a research paper

Timothy Weninger recently submitted a research paper to a computer science conference called World Wide Web. On the way there, he went through 463 drafts. Bear in mind, this paper has only been submitted, not yet accepted, so there are probably even more edits still to come. Welcome to the life of a scientist.

In this video, Weninger created a timelapse showing all the different stages of his writing process, as he added graphs and went through cycles of expanding, contracting, and expanding the text. But mostly expanding. The paper grows from two pages to 10 by the end of the video.

[Video Link]

Via Bill Bell

Collaborative critical study of one-line BASIC program written for the Commodore 64


Nick sez,

Remember those BASIC programs you typed into your C64? Now there's a book written about one. And the program is only 1 line. And 10 people wrote this book. As one. And they're not lunatics but teach at MIT and USC and other fancy places. And they even wrote programs to study it.

10 PRINT CHR$(205.5+RND(1)); : GOTO 10 is a book of Critical Code Studies that looks at the code and culture of a 1-line program that ran on the Commodore 64. This book uses that 1-liner to explore BASIC programming culture in the 1980s and to reflect on its role in inspiring programmers to take the next step. By Nick Montfort, Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark C. Marino, Michael Mateas, Casey Reas, Mark Sample and Noah Vawter

10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (Thanks, Nick!)

Charles Babbage's dissected brain


A paper in a 1909 edition of the Philosophical Transactions of the Royal Society of London described the dissection of Charles Babbage's brain. The whole article is on the Internet Archive, from which the Public Domain Review has plucked it.

Babbage himself decided that he wanted his brain to be donated to science upon his death. In a letter accompanying the donation, his son Henry wrote:

I have no objection…to the idea of preserving the brain…Please therefore do what you consider best…[T]he brain should be known as his, and disposed of in any manner which you consider most conducive to the advancement of human knowledge and the good of the human race.

Half of Babbage's brain is preserved at the Hunterian Museum at the Royal College of Surgeons in London; the other half is on display in the Science Museum in London.

The Brain of Charles Babbage (1909)

End software patent wars by making it always legal to run code on a general-purpose computer - Richard Stallman

Writing in a special Wired series on patent reform, Free Software Foundation founder Richard Stallman proposes to limit the harms that patents do to computers, their users, and free/open development by passing a law that says that running software on a general purpose computer doesn't infringe patents. In Stallman's view, this would cut through a lot of the knottier problems in patent reform, including defining "software patents;" the fact that clever patent lawyers can work around any such definition; the risks from the existing pool of patents that won't expire for decades and so on. Stallman points out that surgeons already have a statutory exemption to patent liability -- performing surgery isn't a patent violation, even if the devices and techniques employed in the operation are found to infringe. Stallman sees this as a precedent that can work to solve the problem. Though it seems to me that it might be easier to define "performing surgery" than "operating a general purpose computer."

This approach doesn’t entirely invalidate existing computational idea patents, because they would continue to apply to implementations using special-purpose hardware. This is an advantage because it eliminates an argument against the legal validity of the plan. The U.S. passed a law some years ago shielding surgeons from patent lawsuits, so that even if surgical procedures are patented, surgeons are safe. That provides a precedent for this solution.

Software developers and software users need protection from patents. This is the only legislative solution that would provide full protection for all.

We could then go back to competing or cooperating … without the fear that some stranger will wipe away our work.

Let’s Limit the Effect of Software Patents, Since We Can’t Eliminate Them

(Image: DSC09309, a Creative Commons Attribution (2.0) image from 25734428@N06's photostream)

Chinook: the story of the computer that beat checkers

Last month, I blogged about Relatively Prime, a beautifully produced, crowdfunded free series of math podcasts. I just listened to the episode on Chinook (MP3), the program that became the world champion of checkers.

Chinook's story is a bittersweet and moving tale, a modern account of John Henry and the steam-drill, though this version is told from the point of view of the machine and its maker, Jonathan Schaeffer, a University of Alberta scientist who led the Chinook team. Schaeffer's quest begins with an obsessive drive to beat reigning checkers champ Marion Tinsley, but as the tale unfolds, Tinsley becomes more and more sympathetic, so that by the end, I was rooting for the human.

This is one of the best technical documentaries I've heard, and I heartily recommend it to you.

Building a computer from scratch: open source computer science course

Here's an absolutely inspiring TED Talk showing how "self-organized computer science courses" designed around students building their own PCs from scratch engaged students and taught them how computers work at a fundamental level.

Shimon Schocken and Noam Nisan developed a curriculum for their students to build a computer, piece by piece. When they put the course online -- giving away the tools, simulators, chip specifications and other building blocks -- they were surprised that thousands jumped at the opportunity to learn, working independently as well as organizing their own classes in the first Massive Open Online Course (MOOC). A call to forget about grades and tap into the self-motivation to learn.

Game of Life with floating point operations: beautiful Smoothlife

Smoothlife (paper, source code) is a floating-point version of the old Game of Life, the classic cellular automaton beloved of artificial-life researchers. By adding floating-point math to the mix, Smoothlife produces absolutely lovely output:

SmoothLife is a family of rules created by Stephan Rafler. It was designed as a continuous version of Conway's Game of Life - using floating point values instead of integers. This rule is SmoothLifeL which supports many interesting phenomena such as gliders that can travel in any direction, rotating pairs of gliders, wickstretchers and the appearance of elastic tension in the 'cords' that join the blobs.
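To show what "floating point values instead of integers" looks like in code, here's a toy continuous cellular automaton in Python. It's a deliberate simplification of the idea, not Rafler's actual SmoothLifeL rule (his uses smooth disk-shaped neighbourhoods and sigmoid transition functions), and the birth/survival intervals below are loose stand-ins chosen around Life's classic counts.

```python
import numpy as np

def smooth_step(grid, birth=(0.26, 0.46), survive=(0.27, 0.58)):
    """One step of a toy continuous CA: cells hold floats in [0, 1]."""
    # Neighbourhood "filling" m: average of the 8 surrounding cells (wrapping edges).
    m = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)) / 8.0
    alive = grid > 0.5
    lo = np.where(alive, survive[0], birth[0])
    hi = np.where(alive, survive[1], birth[1])
    # Relax each cell smoothly toward 1 if m sits inside its interval, else toward 0.
    target = ((m >= lo) & (m <= hi)).astype(float)
    return grid + 0.3 * (target - grid)

rng = np.random.default_rng(0)
grid = rng.random((128, 128))
for _ in range(100):
    grid = smooth_step(grid)
```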

(via JWZ)

Debunking the NYT feature on the wastefulness of data-centers

This weekend's NYT carried an alarming feature article on the gross wastefulness of the data-centers that host the world's racks of server hardware. James Glanz's feature, The Cloud Factory, painted a picture of grotesque waste and depraved indifference to the monetary and environmental costs of the "cloud," and suggested that the "dirty secret" was that there were better ways of doing things that the industry was indifferent to.

In a long rebuttal, Diego Doval, a computer scientist who previously served as CTO for Ning, Inc, takes apart the claims made in the Times piece, showing that they were unsubstantiated, out-of-date, unscientific, misleading, and pretty much wrong from top to bottom.

First off, an “average,” as any statistician will tell you, is a fairly meaningless number if you don’t include other values of the population (starting with the standard deviation). Not to mention that this kind of “explosive” claim should be backed up with a description of how the study was made. The only thing mentioned about the methodology is that they “sampled about 20,000 servers in about 70 large data centers spanning the commercial gamut: drug companies, military contractors, banks, media companies and government agencies.” Here’s the thing: Google alone has more than a million servers. Facebook, too, probably. Amazon, as well. They all do wildly different things with their servers, so extrapolating from “drug companies, military contractors, banks, media companies, and government agencies” to Google, or Facebook, or Amazon, is just not possible on the basis of just 20,000 servers on 70 data centers.

Not possible, that’s right. It would have been impossible (and people that know me know that I don’t use this word lightly) for McKinsey & Co. to do even a remotely accurate analysis of data center usage for the industry to create any kind of meaningful “average”. Why? Not only because gathering this data and analyzing it would have required many of the top minds in data center scaling (and they are not working at McKinsey), not only because Google, Facebook, Amazon, Apple, would have not given McKinsey this information, not only because the information, even if it was given to McKinsey, would have been in wildly different scales and contexts, which is an important point.

Even if you get past all of these seemingly insurmountable problems through an act of sheer magic, you end up with another problem altogether: server power is not just about “performing computations”. If you want to simplify a bit, there’s at least four main axis you could consider for scaling: computation proper (e.g. adding 2+2), storage (e.g. saving “4″ to disk, or reading it from disk), networking (e.g. sending the “4″ from one computer to the next) and memory usage (e.g. storing the “4″ in RAM). This is an over-simplification because today you could, for example, split up “storage” into “flash-based” and “magnetic” storage since they are so different in their characteristics and power consumption, just like we separate RAM from persistent storage, but we’ll leave it at four. Anyway, these four parameters lead to different load profiles for different systems.

a lot of lead bullets: a response to the new york times article on data center efficiency (via Making Light)

One Google query = one Apollo program's worth of computing

Here's a thought:

"It takes about the same amount of computing to answer one Google Search query as all the computing done — in flight and on the ground — for the entire Apollo program."

(Quote from Seb Schmoller’s "Learning technology – a backward and forward look," attributed to Peter Norvig and Udi Manber of Google on hearing of the death of Neil Armstrong)

I remember hearing that the processor in a singing greeting card had more capacity than all the electronic computers on Earth at the time of Sputnik's launch, though I can't find a cite for it at the moment. Exponential processor improvements are pretty wild.

Learning technology – a backward and forward look (PDF)

(via Memex 1.1)

Automated system to identify and repair potential weak-spots in 3D models before they're printed

"Stress Relief: Improving Structural Strength of 3-D Printable Objects," a paper presented at SIGGRAPH 2012 from Purdue University's Bedrich Benes demonstrated an automated system for predicting when 3D models would produce structural weaknesses if they were fed to 3D printers, and to automatically modify the models to make them more hardy.

Findings were detailed in a paper presented during the SIGGRAPH 2012 conference in August. Former Purdue doctoral student Ondrej Stava created the software application, which automatically strengthens objects either by increasing the thickness of key structural elements or by adding struts. The tool also uses a third option, reducing the stress on structural elements by hollowing out overweight elements.

"We not only make the objects structurally better, but we also make them much more inexpensive," Mech said. "We have demonstrated a weight and cost savings of 80 percent."

The new tool automatically identifies "grip positions" where a person is likely to grasp the object. A "lightweight structural analysis solver" analyzes the object using a mesh-based simulation. It requires less computing power than traditional finite-element modeling tools, which are used in high-precision work such as designing jet engine turbine blades.

New Tool Gives Structural Strength to 3-D Printed Works

Turing and Burroughs: a beatnik SF novel by Rudy Rucker

Rudy Rucker has launched a new novel, Turing & Burroughs, which he describes as a "beatnik SF novel." It's available direct from his site as an ebook, or from the Kindle store, or as a print-on-demand book.

What if Alan Turing, founder of the modern computer age, escaped assassination by the secret service to become the lover of Beat author William Burroughs? What if they mutated into giant shapeshifting slugs, fled the FBI, raised Burroughs’s wife from the dead, and tweaked the H-bombs of Los Alamos? A wild beatnik adventure, compulsively readable, hysterically funny, with insane warps and twists—and a bad attitude throughout.

Turing & Burroughs Out in Ebook and Paperback!

An accountable algorithm for running a secure random checkpoint

Ed Felten presents and argues for the idea of "accountable algorithms" for use in public life -- that is, algorithms designed so that the "output produced by a particular execution of the algorithm can be verified as correct after the fact by a skeptical member of the public."

He gives a great example of how to run a securely random TSA checkpoint where, at the end of each day, the public can open a sealed envelope and verify that the TSA was using a truly fair random selection method, and not just picking people they didn't like the look of:

Now we can create our accountable selection method. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits it. Now the TSA knows R but the public doesn’t. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value S. Now the TSA adds R+S and makes that sum the key K for the day.

Now, when you arrive at the checkpoint, you announce your name N, and the TSA uses the selection function to compute S(K, N). The TSA announces the result, and if it’s “yes,” then you get searched. You can’t anticipate whether you’ll be searched, because that depends on the key K, which depends on the TSA’s secret value R, which you don’t know.

At the end of the day, the TSA opens its commitment to R. Now you can verify that the TSA followed the algorithm correctly in deciding whether to search you. You can add the now-public R to the already-public S, to get the day’s (previously) secret key K. You can then evaluate the selection function S(K,N) with your name N–replicating the computation that the TSA did in deciding whether to search you. If the result you get matches the result the TSA announced earlier, then you know that the TSA did their job correctly. If it doesn’t match, you know the TSA cheated–and when you announce that they cheated, anybody can verify that your accusation is correct.

This method prevents the TSA from creating a non-random result. The reason the TSA cannot do this is that the key K is based on the result of die-rolling, which is definitely random. And the TSA cannot have chosen its secret value R in a way that neutralized the effect of the random die-rolls, because the TSA had to commit to its choice of R before the dice were rolled. So citizens know that if they were chosen, it was because of randomness and not any TSA bias.
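Here's one way the day's protocol could look in code. The commitment scheme (a SHA-256 hash over R and a nonce), the XOR combination of R and S, and the HMAC-based selection function are my illustrative choices, not prescribed by Felten's post:

```python
import hashlib, hmac, secrets

SEARCH_RATE = 0.05                               # search roughly 5% of travellers

def commit(R: bytes, nonce: bytes) -> str:
    """Morning: publish this digest; R and the nonce stay secret until day's end."""
    return hashlib.sha256(nonce + R).hexdigest()

def selected(K: bytes, name: str) -> bool:
    """Deterministic, key-dependent selection that anyone can recompute later."""
    digest = hmac.new(K, name.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < SEARCH_RATE

# --- Morning, before the checkpoint opens ---
R, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
published_commitment = commit(R, nonce)          # made public immediately
S = secrets.token_bytes(16)                      # stand-in for the public dice rolls
K = bytes(a ^ b for a, b in zip(R, S))           # day's key (XOR here; Felten says "add")

# --- During the day: each traveller announces a name N ---
print(selected(K, "Alice Traveller"))            # True means "search this person"

# --- End of day: the TSA reveals R and the nonce; anyone can re-check ---
assert commit(R, nonce) == published_commitment                  # the commitment really was to R
recomputed_K = bytes(a ^ b for a, b in zip(R, S))                # public recomputation of the key
assert selected(recomputed_K, "Alice Traveller") == selected(K, "Alice Traveller")
```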

Accountable Algorithms: An Example

Supercomputer built from Raspberry Pis and Lego


A team of computer scientists at the University of Southampton in the UK created a supercomputer out of Lego and 64 matchbox-sized Raspberry Pi Linux computers. The team included six-year-old James Cox, the son of project lead Professor Simon Cox, "who provided specialist support on Lego and system testing."

Here's a PDF with instructions for making your own Raspberry Pi/Lego supercomputer.

Professor Cox comments: “As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer.”

The racking was built using Lego with a design developed by Simon and James, who has also been testing the Raspberry Pi by programming it using free computer programming software Python and Scratch over the summer. The machine, named “Iridis-Pi” after the University’s Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes using Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1Tb of memory (16Gb SD cards for each Raspberry Pi). Professor Cox uses the free plug-in ‘Python Tools for Visual Studio’ to develop code for the Raspberry Pi.

Professor Cox adds: “The first test we ran – well obviously we calculated Pi on the Raspberry Pi using MPI, which is a well-known first test for any new supercomputer.”
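For flavour, the classic "calculate Pi with MPI" first test looks roughly like this Monte Carlo version, sketched with mpi4py. It's a generic textbook exercise, not the Iridis-Pi team's actual code:

```python
# Monte Carlo estimate of Pi distributed over MPI ranks with mpi4py.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

samples = 1_000_000                      # points thrown per node
random.seed(rank)                        # a different random stream on every Pi
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
           for _ in range(samples))      # count points landing inside the unit circle

total = comm.reduce(hits, op=MPI.SUM, root=0)   # gather all counts on rank 0
if rank == 0:
    print("pi ~", 4.0 * total / (samples * size))
```

Run with something like `mpiexec -n 64 python pi.py` and each node contributes its own million samples.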

Engineers Build Supercomputer Using Raspberry Pi, Lego [Parity News]

Southampton engineers a Raspberry Pi Supercomputer [Press release]

(Images: Simon J Cox 2012)

Magic: The Gathering is Turing complete


Alex Churchill has posted a way to implement a Turing complete computer within a game of Magic: The Gathering ("Turing complete" is a way of classifying a calculating engine that is capable of general-purpose computation). The profound and interesting thing about the recurrence of Turing completeness in many unexpected places -- such as page-layout descriptive engines -- is that it suggests that there's something foundational about the ability to do general computation. It also suggests that attempts to limit general computation will be complicated by the continued discovery of new potential computing engines. That is, even if you lock down all the PCs so that they only play restricted music formats and not Ogg, if you allow a sufficiently speedy and scriptable Magic: The Gathering program to exist, someone may implement the Ogg player using collectible card games.

A series of Ally tokens controlled by Alex represent the tape to the right of the current head: the creature one step to the right of the head is 1 toughness away from dying, the next one over is 2 toughness from dying, etc. A similar chain of Zombie tokens, also controlled by Alex, represent the tape to the left. The colour of each token represents the contents of that space on the tape.

The operation "move one step to the left" is represented in this machine by creating a new Ally token, growing all Allies by 1, and shrinking all Zombies by one. The details are as follows:

When the machine creates a new 2/2 Ally token under Alex's control, four things trigger: Bob's Noxious Ghoul, Cathy's Aether Flash, Denzil's Carnival of Souls, and Alex's Kazuul Warlord. They go on the stack in that order, because it's Bob's turn; so they resolve in reverse order. The Kazuul Warlord adds +1/+1 counters to all Alex's Allies, leaving them one step further away from dying, including making the new one 3/3. Then Carnival of Souls gives Denzil a white mana thanks to False Dawn (he doesn't lose life because of his Platinum Emperion). Then Aether Flash deals 2 damage to the new token, leaving it 1 toughness from dying as desired. And then the Noxious Ghoul, which has been hacked with Artificial Evolution, gives all non-Allies -1/-1, which kills the smallest Zombie. Depending on whether the smallest Zombie was red, green or blue, a different event will trigger. The machine has moved one step to the left.

If the new token had been a Zombie rather than an Ally, a different Kazuul Warlord and a different Noxious Ghoul would have triggered, as well as the same Aether Flash. So the same would have happened except it would be all the Zombies that got +1/+1 and all the Allies that got -1/-1. This would effectively take us one step to the right.
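For reference, the machinery being emulated with creature tokens is just this: a tape, a head, a state, and a transition table. Here's a minimal generic Turing-machine interpreter in Python (a toy example, not Churchill's actual encoding):

```python
# A tiny Turing-machine interpreter: the Magic construction encodes the same
# ingredients (tape, head, state, transition table) in creature tokens.

def run(tape, rules, state="A", head=0, blank=" ", max_steps=50):
    tape = dict(enumerate(tape))                 # sparse, unbounded tape
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1         # "move one step left/right"
    return "".join(tape[i] for i in sorted(tape))

# Example rules: flip bits left to right, then halt on a blank.
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("0", "R", "A"),
    ("A", " "): (" ", "R", "HALT"),
}
print(run("10110 ", rules))    # -> "01001 "
```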

Magic Turing Machine v4: Teysa / Chancellor of the Spires (via /.)

(Image: Magic the Gathering, a Creative Commons Attribution Share-Alike (2.0) image from 23601773@N02's photostream)

Alan Turing memorial Monopoly set


Last year, I wrote about the hand-drawn Monopoly board that Alan Turing and friends played with at Bletchley Park. Now it's an official set. Chris from Bletchley Park sez:

Bletchley Park is delighted to officially launch the Alan Turing Monopoly board, developed from a unique original board in the Bletchley Park Museum, hand-drawn by William Newman, son of Turing’s mentor, Max, over sixty years ago.

In this special edition of Monopoly, the squares around the board and revised Chance and Community Chest cards tell the story of Alan Turing’s life along with key elements of the original hand-drawn board, which the great mathematician played on with a young William in the early 1950s – and lost. The board has been developed by the Bletchley Park Trust, William Newman and Winning Moves, which creates new editions of Monopoly.

Read the rest

Adversarial mind-reading with compromised brain-computer interfaces

"On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces," a paper presented by UC Berkeley and U Geneva researchers at this summer's Usenix Security, explored the possibility of adversarial mind-reading attacks on gamers and other people using brain-computer interfaces, such as the Emotiv games controller.

The experimenters wanted to know if they could forcibly extract information from your brain by taking control of your system. In the experiment, they flashed images of random numbers and used the automatic brain response to them to make guesses as to which digits were in their subjects' ATM PINs. Another variant watched subjects' brain activity while flashing the logo of a bank, then guessed whether the subject used that bank.

I suppose that over time, an attacker who was able to control the stimulus and measure the response could glean a large amount of private information from a victim, without the victim ever knowing it.

Brain computer interfaces (BCI) are becoming increasingly popular in the gaming and entertainment industries. Consumer-grade BCI devices are available for a few hundred dollars and are used in a variety of applications, such as video games, hands-free keyboards, or as an assistant in relaxation training. There are application stores similar to the ones used for smart phones, where application developers have access to an API to collect data from the BCI devices.

The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored. We take a first step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information. We use inexpensive electroencephalography (EEG) based BCI devices to test the feasibility of simple, yet effective, attacks. The captured EEG signal could reveal the user’s private information about, e.g., bank cards, PIN numbers, area of living, the knowledge of the known persons. This is the first attempt to study the security implications of consumer-grade BCI devices. We show that the entropy of the private information is decreased on the average by approximately 15%-40% compared to random guessing attacks.

On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces

DRM-Free logo: like "certified organic" for DRM-free media

Defective by Design -- the Free Software Foundation's campaign against DRM -- has cooked up a new badge for technology, media and devices that are provided without DRM, a kind of "certified organic" logo that lets you know when you're getting stuff that doesn't try to use technology to limit your choices.

New DRM-Free Label (via /.)

The Coming Civil War Over General Purpose Computers

Last month, I gave a talk called "The Coming Civil War Over General Purpose Computing" at DEFCON, the Long Now, and Google. We're going to have a transcript with the slides on Monday, but in the meantime, here's a video of the Long Now version of the talk. Stewart Brand summarized it thus:

Doctorow framed the question this way: "Computers are everywhere. They are now something we put our whole bodies into---airplanes, cars---and something we put into our bodies---pacemakers, cochlear implants. They HAVE to be trustworthy."

Sometimes humans are not so trustworthy, and programs may override you: "I can’t let you do that, Dave." (Reference to the self-protective insane computer Hal in Kubrick’s film "2001." That time the human was more trustworthy than the computer.) Who decides who can override whom?

The core issues for Doctorow come down to Human Rights versus Property Rights, Lockdown versus Certainty, and Owners versus mere Users.

Cory Doctorow: Coming War Against Your Computer Freedom

In the future, we will shout at machines (even more than we do today)


New father Charlie Brooker has caught himself shouting at the machines in his life, given the matter careful consideration, and decided that it's OK -- more than OK, really. For Brooker, the future will involve lots of shouting at machines. Makes me wonder if there isn't something to be said for designing machines that understand why you're shouting at them.

I used to play vertical-scrolling shoot-em-ups in which a blizzard of angry pixels swirled around the screen like a synchronised galaxy impersonating a flock of starlings, accompanied by a melodic soundtrack of pops and whistles apparently performed by an orchestra of frenzied Bop-It machines. But at least then you could press pause. Now I find it hard to cope with seeing a banner ad slowly fading from red to green while The One Show's on in the background, which is why over the past few weeks I've ratcheted down my engagement with anything not made of wood. There's a baby to attend to, and his old man needs a lie down.

Because the alternative is to surround myself with technology designed specifically for shouting at. And that's the more uplifting feature of the horrible future I pictured for this baby I'm talking about, the baby I vowed to never mention in print because to do so would instantly mark me out as a prick: in the future, we'll have specially designed anguish-venting machines – unfeeling robots wearing bewildered faces for older people to scream into like adult babies, just to let out all the stress caused by constant exposure to yappering, feverish stimuli. Tomorrow will consist of flashing lights and off-the-shelf digital punchbags, consumed by a generation better equipped to deal with it than me, which won't matter because by then I will have withdrawn entirely from the digital world: an old man, enjoying his lie down.

It's OK to shout at machines – in fact, in the future some of us will find it necessary (via Making Light)

(Image: Shout, a Creative Commons Attribution Share-Alike (2.0) image from garryknight's photostream)

Automated baked-goods identification computer vision system

A Japanese point-of-sale system has the native cunning to recognize baked goods of its own accord, a surprisingly tricky computer vision problem:

Brain Corporation has developed a system that can individually identify all kinds of baked goods on a tray, in just one second. A trial has started at a Tokyo bakery store.

This technology was co-developed with the University of Hyogo. This is the world's first trial of such a system in actual work at a cash register.

Bakery goods POS visual recognition system on trial in Tokyo bakery (via DVICE)

Public image, self-image, and women in computer science

Pictured: Actual female programmers at Women 2.0 Startup Weekend, November 2011.

Xeni posted last week about the EU's rather ridiculous "Science: It's a Girl Thing!" video, which was aimed at recruiting girls to science careers and, instead, hit enough vacuous stereotypes of femininity that it ended up seeming like a parody of itself.

This seems like a nice moment to note that the Txchnologist website is currently posting articles in the theme of "Women in Science and Technology". One of those pieces is an interview with Margo Seltzer, an actual female scientist. Dr. Seltzer teaches computer science at Harvard University’s School of Engineering and Applied Sciences. Most science and technology professions have a hard time attracting and retaining women, and computer science is no exception. Only a quarter of employed computer scientists are women. Txchnologist asked Seltzer about her perspective on the problem, and what steps she thinks might help make computer science more female-friendly.

What's interesting about this interview, in light of the "It's a Girl Thing!" flap: Seltzer does think that image—the messages people get about what a computer engineer has to be like—makes a big difference in who decides they want to be a computer engineer. Which is basically the same idea "It's a Girl Thing!" was trying (poorly) to address. Unfortunately, the EU video ended up being all image and no substance, and worse, it added to the image problem by telling people what girls are supposed to be like. (By that video's definition, I am not a lady.)

Instead, Seltzer says, the problem is that computer scientists are portrayed in a negative way that doesn't fit who they really are—whether male or female. If we had a more well-rounded view of the wide variety of people that actually go into computer science, maybe more women could see themselves in that career.

MS: I think the biggest factor is that as a society we’ve done a really, really bad job of marketing what it means to be in software. If you ask somebody, “What does a computer programmer look like?” I think almost everyone in the world will give you the same description—it’s a nerdy guy with no social skills and all he ever wants to do is program. The reality of the situation is very different. But the image that we’ve constructed societally is really pretty dreadful.

You get articles about the problem and articles that discuss it, but you actually don’t get anyone portraying a different image very often. For a long time we’ve joked about the fact that we need an L.A.-Law-type show for computer programmers, where you have young, good looking, really fun, intelligent people who happen to be software engineers.

If you look globally, there are countries where that isn’t the image, and in fact, their numbers are dramatically better. I was recently speaking with some of our Oracle engineers from China and they pretty much have a fifty-fifty split of men and women. And they think it’s sort of odd that we don’t.

Read the rest of Margo Seltzer's interview. It's worth checking out the whole thing, in particular because she points out that this public image problem isn't the only problem. Even in the 21st century, many workplaces set policies that implicitly tell female employees, "You're not really welcome here." Maybe they're the ones who really need reminding that science can be a girl thing?

Image: Pitch Day - Women 2.0 Startup Weekend, a Creative Commons Attribution Share-Alike (2.0) image from adriarichards's photostream

Bruce Sterling on Alan Turing, gender, AI, and art criticism

Bruce Sterling gave a speech at the North American Summer School in Logic, Language, and Information (NASSLLI) on the eve of the Alan Turing Centenary, and delivered a provocative, witty and important talk on the Turing Test, gender and machine intelligence, Turing's life and death, and art criticism.

If you study his biography, the emotional vacuum in the guy’s life was quite frightening. His parents are absent on another continent, he’s in boarding schools, in academia, in the intelligence services, in the closet of the mid-20th-century gay life. Although Turing was a bright, physically strong guy capable of tremendous hard work, he never got much credit for his efforts during his lifetime.

How strange was Alan Turing? Was Alan Turing a weird, scary guy? Let’s try a thought experiment, because I’m a science fiction writer and we’re into those counterfactual approaches.

So let’s just suppose that Alan Turing is just the same personally: he’s a mathematician, an early computer scientist, a metaphysician, a war hero — but he’s German. He’s not British. Instead of being the Bletchley Park code breaker, he’s the German code maker. He’s Alan Turingstein, and he realizes the Enigma Machine has a flaw. So, he imagines, designs and builds a digital communication code system for the Nazis. He defeats the British code breakers. In fact, he’s so brilliant that he breaks some of the British codes instead. Therefore, the second World War lasts until the Americans drop their nuclear bomb on Europe.

I think you’ll agree this counter-history is plausible, because so many of Turing’s science problems were German — the famous “ending problem” of computability was German. The Goedel incompleteness theorem was German, or at least Austrian. The world’s first functional Turing-complete computer, the Konrad Zuse Z3, was operational in May 1941 and was supported by the Nazi government.

So then imagine Alan Turingstein, mathematics genius, computer pioneer, and Nazi code expert. After the war, he messes around in the German electronics industry in some inconclusive way, and then he commits suicide in some obscure morals scandal. What would we think of Alan Turingstein today, on his centenary? I doubt we’d be celebrating him, and secretly telling ourselves that we’re just like him.

Turing Centenary Speech (New Aesthetic)

(Image: Tsar Bomba mushroom cloud, a Creative Commons Attribution Share-Alike (2.0) image from andyz's photostream)

Universal Turing Machine in 100 punchcards

SE Peeze Binkhorst sez, "100 years ago today, Alan Turing was born. To celebrate, I wrote a Universal Turing Machine in 100 Punchcards. I've uploaded a video to explain a small part of the read head (the Jacquard). One needle is shown out of a total of 28. The needle and anything else in the animation is not part of the Turing Machine, but is part of a machine that reads and executes the program, i.e. a computer I am working on, which is in part explained in this schematic. As the turingloom website is about a program for a Turing Machine and not about a physical Turing Machine, I hope to be excused from the requirement of infinite tape."

Turing and pride in Manchester

Here's the Alan Turing statue in Manchester, decorated with pride for the centenary, taken by Josh R with Jonnie B.

Happy birthday Alan Turing (via Nelson)

Counterpoint: algorithms are not free speech

In the New York Times, Tim Wu advances a fairly nuanced argument about the risks of letting technology companies claim First Amendment protection for the product of their algorithms, something I discussed in a recent column. Tim worries that if an algorithm's product -- such as a page of search results -- is considered protected speech, then it will be more difficult to rein in anticompetitive or privacy-violating commercial activity:

The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all. (Where a human does make a specific choice about specific content, the question is different.)

Defenders of Google’s position have argued that since humans programmed the computers that are “speaking,” the computers have speech rights as if by digital inheritance. But the fact that a programmer has the First Amendment right to program pretty much anything he likes doesn’t mean his creation is thereby endowed with his constitutional rights. Doctor Frankenstein’s monster could walk and talk, but that didn’t qualify him to vote in the doctor’s place.

Computers make trillions of invisible decisions each day; the possibility that each decision could be protected speech should give us pause. To Google’s credit, while it has claimed First Amendment rights for its search results, it has never formally asserted that it has the constitutional right to ignore privacy or antitrust laws. As a nation we must hesitate before allowing the higher principles of the Bill of Rights to become little more than lowly tools of commercial advantage. To give computers the rights intended for humans is to elevate our machines above ourselves.

I think that this is a valuable addition to the debate, but I don't wholly agree. There is clearly a difference between choosing what to say and designing an algorithm that speaks on your behalf, but programmers can and do make expressive choices when they write code. A camera isn't a human eye, but rather, a machine that translates the eye and the brain behind it into a mechanical object, and yet photos are still entitled to protection. A programmer sits down at a powerful machine and makes a bunch of choices that prefigure its output, and can, in so doing, design algorithms that express political messages (for example, algorithms that automatically parse elected officials' public utterances and rank them for subjective measures like clarity and truthfulness), artistic choices (algorithms that use human judgment to perform guided iterations through aesthetic options to produce beauty) and other forms of speech that are normally afforded the highest level of First Amendment protections.

That is not to say that algorithms can't produce illegal speech -- anticompetitive speech, fraudulent speech -- but I think the right way to address this is to punish the bad speech, not to deny that it is speech altogether.

And while we're on the subject, why shouldn't Frankenstein's Monster get a vote all on its own -- not a proxy for the doctor, but in its own right?

Free Speech for Computers? (via /.)

(Image: Frankenstein Face Vector, a Creative Commons Attribution (2.0) image from vectorportal's photostream)