
Using gestures to interact with surfaces that don't have screens

A joint Disney Research and CMU team has produced a demo showing gesture controls on a variety of everyday, non-computer objects. The system, called Touché, uses capacitive coupling to infer how your hands are touching an object: it can tell which utensil you're eating with, how you're grasping a doorknob, or even whether you're touching one finger to another or clasping your hands together. It's a pretty exciting demo, and the user-interface possibilities are certainly provocative. Here's some commentary from Wired UK's Mark Brown:

Some of the proof-of-concept applications in the lab include a smart doorknob that knows whether it has been grasped, touched, or pinched; a chair that dims the lights when you recline into it; a table that knows if you're resting one hand, two hands, or your elbows on it; and a tablet that can be pinched from back to front to open an on-screen menu.

The technology can also be shoved in wristbands, so you can make sign-language-style gestures to control the phone in your pocket—two fingers on your palm to change a song, say, or a clap to stop the music. It can also go in liquids, to detect when fingers and hands are submerged in water.

"In our laboratory experiments, Touché demonstrated recognition rates approaching 100 percent," claims Ivan Poupyrev, senior research scientist at Disney Research in Pittsburgh. "That suggests it could immediately be used to create new and exciting ways for people to interact with objects and the world at large."

Disney researchers put gesture recognition in door knobs, chairs, fish tanks

Robotic rings turn your fingers into a face

Keio University's robotics group has demonstrated a set of remotely controlled facial elements designed to be worn as rings. They can be controlled directly by the wearer, or driven remotely by software portraying a character that inhabits your hand like a sock puppet or a Señor Wences routine.

"First of all, this device resembles a toy. So we want to make it more like a character, like when children or their parents play finger games. That would enable a new form of interactive play. We'd also like to incorporate this robot into the way children use their hands to communicate with each other."

Robotic rings for wearable robotic interaction (via Neatorama)

When human beings are asked to monitor computers, disaster ensues

Ashwin Parameswaran's "People Make Poor Monitors for Computers" is a fascinating look at (and indictment of) the way we design automation systems with human fallbacks. Our highly automated, highly reliable systems -- the avionics in planes, for example -- are designed to respond well to every circumstance their designers can imagine, and use human beings as the last line of defense, there to take control when all else fails. But human beings are neurologically wired to stop noticing things that stay the same for a long time. We suck at vigilance. So when complex, stable systems catastrophically fail, so do we. Parameswaran quotes several sources with examples from plane crashes, the financial meltdown, and other circumstances where human beings and computers accidentally conspired to do something stupider than either would have done on their own.

Although both Airbus and Boeing have adopted the fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing’s system enforces soft limits that can be overridden at the discretion of the pilot, Airbus’ fly-by-wire system has built-in hard limits that cannot be overridden completely at the pilot’s discretion.

As Simon Calder notes, pilots have raised concerns in the past about Airbus’ systems being “overly sophisticated” as opposed to Boeing’s “rudimentary but robust” system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus’ response to pilot demands for a manual override switch that allows the pilot to take complete control:

"If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent “feel” for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don’t need engineering test pilot skills to fly this airplane."

David Evans captures the essence of this philosophy: it aims at minimising the “potential for human error, to keep average pilots within the limits of their average training and skills”.

It is easy to criticise Airbus’ approach, but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing’s system may outperform. But if the pilot is a novice, Airbus’ system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can achieve intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to frequently commit “irrational” errors, and is therefore almost impossible to reverse.
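The soft-limit versus hard-limit distinction in the passage above is easy to state as code. Here's a minimal sketch in Python, assuming a simplified pitch-command limiter with made-up numbers and an explicit override flag; it is not either manufacturer's actual control law, just the shape of the trade-off.

```python
# Illustrative sketch of "soft" vs "hard" envelope limits in a fly-by-wire
# command path. The limit value and the override flag are assumptions for
# illustration, not Boeing's or Airbus' actual control laws.
from dataclasses import dataclass

MAX_PITCH_DEG = 30.0  # assumed envelope limit on commanded pitch

@dataclass
class PilotInput:
    pitch_cmd_deg: float    # what the pilot is asking for
    override: bool = False  # a deliberate "I know what I'm doing" action

def hard_limit(inp: PilotInput) -> float:
    """Hard limit: the envelope always wins; the override is ignored."""
    return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, inp.pitch_cmd_deg))

def soft_limit(inp: PilotInput) -> float:
    """Soft limit: resist exceeding the envelope, but defer to a deliberate override."""
    if inp.override:
        return inp.pitch_cmd_deg  # pilot authority is final
    return max(-MAX_PITCH_DEG, min(MAX_PITCH_DEG, inp.pitch_cmd_deg))

extreme = PilotInput(pitch_cmd_deg=45.0, override=True)
print(hard_limit(extreme))  # 30.0 -- clamped regardless of the pilot
print(soft_limit(extreme))  # 45.0 -- the pilot can push past the envelope
```

The trade-off described above falls out directly: the hard limiter needs no override procedure or training, while the soft limiter preserves expert authority and, with it, the possibility of expert error.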

People Make Poor Monitors for Computers (Thanks, Patrick!)