Human hair as a computer interface

UC Berkeley researcher and artist Eric Paulos and his students continue their explorations of "cosmetic computing" with a new prototype and paper about "Human Hair as Interactive Material." If you'd like to coif your own computational locks, they've posted a how-to guide on Instructables. From their research page:

Human hair is a cultural material, with a rich history displaying individuality, cultural expression and group identity. It is malleable in length, color and style, highly visible, and embedded in a range of personal and group interactions. As wearable technologies move ever closer to the body, and embodied interactions become more common and desirable, hair presents a unique and little-explored site for novel interactions. In this paper, we present an exploration and working prototype of hair as a site for novel interaction, leveraging its position as something both public and private, social and personal, malleable and permanent. We develop applications and interactions around this new material in HäirIÖ: a novel integration of hair-based technologies and braids that combine capacitive touch input and dynamic output through color and shape change. Finally, we evaluate this hair-based interactive technology with users, including the integration of HäirIÖ within the landscape of existing wearable and mobile technologies.
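If you want to tinker along, the interaction loop is simple at heart. Here's a minimal MicroPython sketch of the idea, assuming an ESP32-class board: a capacitive reading from a conductive strand gates a heating element tucked under thermochromic pigment (HäirIÖ's shape change uses shape-memory alloy driven the same way). The pin numbers and threshold are placeholders; the Instructables guide has the team's actual build.

```python
# A minimal MicroPython sketch of a HäirIÖ-style loop, assuming an ESP32:
# capacitive touch on a conductive strand as input, a heating element under
# thermochromic pigment as color-change output. Pin numbers and the touch
# threshold are placeholders, not values from the team's build.
from machine import Pin, TouchPad
import time

touch = TouchPad(Pin(14))    # conductive thread woven into the braid
heater = Pin(27, Pin.OUT)    # MOSFET gate driving the heating element
THRESHOLD = 300              # ESP32 touch readings drop when touched

while True:
    if touch.read() < THRESHOLD:    # braid is being touched
        heater.on()                 # heat shifts the thermochromic color
    else:
        heater.off()
    time.sleep_ms(50)
```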

For more, please listen to Mark Frauenfelder and me interviewing Eric about Cosmetic Computing in this episode of For Future Reference, a podcast from the Institute for the Future:

Clever app-controlled analog split-flap display for your home or office

Vestaboard is a clever app-controlled version of the old split-flap display that you'd see in train stations of yore. It's the same electromechanical analog display technology used in old flip alarm clocks, but with Vestaboard you change the text using your mobile device. Below is a demo of one of the display "bits" in motion and another video teaser of the full sign, measuring 37" x 21". You can pre-order one for $1800.
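Vestaboard hasn't published its internals, but the charm of split-flap control is how little logic it needs: the drum spins in one direction only, so showing a character means advancing (target - current) mod N flaps. A hedged Python sketch, with an assumed character set:

```python
# A hedged sketch of the core logic behind any split-flap module: the drum
# only spins one way, so showing a character means advancing
# (target - current) mod N flaps. Vestaboard's real character set and
# protocol are proprietary; the CHARSET below is an assumption.
CHARSET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,!?"

def flaps_to_advance(current: str, target: str) -> int:
    """Forward flips needed to go from `current` to `target`."""
    return (CHARSET.index(target) - CHARSET.index(current)) % len(CHARSET)

def plan_row(current_row: str, message: str) -> list[int]:
    """Per-module flip counts for one row of the display."""
    padded = message.upper().ljust(len(current_row))
    return [flaps_to_advance(c, t) for c, t in zip(current_row, padded)]

print(plan_row("HELLO ", "WORLD"))    # [15, 10, 6, 0, 30, 0]
```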

Pixelsynth: fun Web instrument translates images into electronic music

Pixelsynth is a lovely and compelling Web app by Olivia Jack that enables you to easily turn your own images into weird electronic music and tweak the tones (and graphics) in real time. PIXELSYNTH (via Waxy)
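Pixelsynth's exact mapping is Olivia Jack's own, but the basic trick of scanning an image into sound is easy to sketch. Here's a hedged Python example (assuming NumPy and Pillow, with a placeholder image path) that sweeps across the columns, treating vertical position as pitch and brightness as loudness, and writes the result to a WAV file:

```python
# Image-to-sound scanning in the spirit of Pixelsynth; the real app's
# mapping is its author's. Assumes NumPy and Pillow; "photo.png" is a
# placeholder path.
import numpy as np
from PIL import Image
import wave

SAMPLE_RATE = 22050
COLUMN_DUR = 0.05           # seconds of audio per image column
N_BANDS = 32                # vertical resolution -> number of oscillators

# Load as grayscale in [0, 1]; flip so row 0 is the bottom (lowest pitch).
img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32) / 255.0
img = img[::-1]

# Collapse the image height into N_BANDS rows of average brightness.
bands = np.array_split(img, N_BANDS, axis=0)
levels = np.stack([b.mean(axis=0) for b in bands])    # (N_BANDS, width)

freqs = 110.0 * 2 ** (np.arange(N_BANDS) / 6.0)       # sixth-octave ladder
t = np.arange(int(SAMPLE_RATE * COLUMN_DUR)) / SAMPLE_RATE
tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])

# Each column becomes a chord: brightness per band scales each oscillator.
chunks = [(levels[:, c:c + 1] * tones).sum(axis=0) for c in range(levels.shape[1])]
audio = np.concatenate(chunks)
audio /= max(1e-9, float(np.abs(audio).max()))        # normalize to [-1, 1]

with wave.open("pixelsynth_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes((audio * 32767).astype(np.int16).tobytes())
```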

Voice and gesture interface from 1979!

In 1979, MIT professor Christopher Schmandt and colleagues in the Architecture Machine Group (which later evolved into the famed MIT Media Lab) developed "Put That There," an interactive voice-and-gesture system. In this video, a researcher demonstrates the system while sitting comfortably in a stylish Eames Lounge Chair. From a 1982 paper about the project (PDF):

(Put That There) allows a user to build and modify a graphical database on a large format video display. The goal of the research is a simple, conversational interface to sophisticated computer interaction. Natural language and gestures are used, while speech output allows the system to query the user on ambiguous input.

This project starts from the assumption that speech recognition hardware will never be 100% accurate, and explores other techniques to increase the usefulness (i.e., the "effective accuracy") of such a system. These include: redundant input channels, syntactic and semantic analysis, and context-sensitive interpretation. In addition, we argue that recognition errors will be more tolerable if they are evident sooner through feedback and easily corrected by voice.
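The multimodal fusion at the heart of the system is easy to caricature: deictic words in the speech stream ("that," "there") are resolved against the pointing position sampled at the moment each word was spoken. The Python sketch below is purely illustrative, with invented names and data rather than anything from the 1979 implementation:

```python
# A toy sketch of the multimodal resolution idea in "Put That There":
# deictic words are resolved using the pointing position sampled when
# each word was spoken. All names and data here are illustrative.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    time: float          # seconds since utterance start

@dataclass
class PointerSample:
    time: float
    x: float
    y: float

def pointer_at(samples: list[PointerSample], t: float) -> tuple[float, float]:
    """Pointing position closest in time to a spoken word."""
    s = min(samples, key=lambda p: abs(p.time - t))
    return (s.x, s.y)

def resolve(words: list[Word], pointing: list[PointerSample]):
    """Turn 'put that there' into a (source, destination) command."""
    refs = [pointer_at(pointing, w.time) for w in words
            if w.text in ("that", "there", "this", "here")]
    if len(refs) >= 2:
        return {"verb": "move", "source": refs[0], "dest": refs[1]}
    return None   # ambiguous: the real system queried the user by voice

cmd = resolve(
    [Word("put", 0.0), Word("that", 0.4), Word("there", 1.1)],
    [PointerSample(0.4, 120, 80), PointerSample(1.1, 300, 220)],
)
print(cmd)   # {'verb': 'move', 'source': (120, 80), 'dest': (300, 220)}
```

Note the fallback: as the paper describes, when input is ambiguous the real system queried the user by voice rather than guessing.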

(Thanks, Dustin Hostetler!)

Chris Noessel: lessons of science fiction computer interfaces

What can Logan's Run, Star Wars: Attack of the Clones, and Shrek teach us about how not to design computers? Veteran interaction designer and author Chris Noessel explored this very question in his Boing Boing: Ingenuity presentation and book, Make It So: Interaction Design Lessons from Science Fiction.

The Beauty of Bones (and Skinput)

I've always been a fan of anything that uses the concept of bone conduction. A friend who worked as a field medic for public protests years ago told me that he'd often diagnose and locate bone fractures by striking a tuning fork and holding it against the limb in question--the sound would travel up and down the bone and cause a stronger 'sensation' (ouch!) wherever there was a break in continuity. Now, thanks to research being done at Carnegie Mellon and Microsoft, you can use the same basic principle to play Tetris!

The video has a more in-depth demonstration, but the idea rests on the fact that our bodies are pretty effective conductors of minute acoustic information: vibrations from something like a tap on the forearm or fingertips can be picked up by a bio-acoustic sensor positioned somewhere else along the arm. Because every part of the body is a specific combination of tissues with different densities, each location hypothetically has a signature resonance that can be tracked.
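The Skinput prototype fed features from an armband of vibration sensors into a machine-learning classifier. As a toy illustration of the "signature resonance" idea, here's a hedged Python sketch that reduces a tap signal to FFT band energies and classifies by nearest centroid, using entirely synthetic data:

```python
# A hedged, toy version of Skinput-style tap classification: the real
# system used an armband of sensors and a trained classifier; here a
# single signal's FFT band energies and a nearest-centroid rule stand
# in to show the "signature resonance" idea. All data is synthetic.
import numpy as np

N_BANDS = 8

def band_energies(signal: np.ndarray, n_bands: int = N_BANDS) -> np.ndarray:
    """Log energy in n_bands equal slices of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

def train(examples: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average feature vector (centroid) per tap location."""
    return {loc: np.mean([band_energies(s) for s in sigs], axis=0)
            for loc, sigs in examples.items()}

def classify(signal: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    feats = band_energies(signal)
    return min(centroids, key=lambda loc: np.linalg.norm(feats - centroids[loc]))

def fake_tap(freq: float, n: int = 1024) -> np.ndarray:
    """Synthetic tap: a decaying resonance at a location-specific frequency."""
    t = np.arange(n)
    return np.sin(2 * np.pi * freq * t / n) * np.exp(-t / 200.0)

centroids = train({"wrist": [fake_tap(40)], "forearm": [fake_tap(120)]})
print(classify(fake_tap(118), centroids))   # -> "forearm"
```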

While it's still in development, they're already pairing the technology with wearable pico projectors. I think it's really interesting for the future of AR, in terms of creating the ultimate ephemeral user interface--tablets are SO 2010.