Computer-human interaction that gives hands their due respect

Bret Victor's "Brief Rant on the Future of Interaction Design" is an eye-opening look at the poverty of the current options for computer-human interaction. Victor argues that our hands have enormous expressive range and sensitivity, but our devices accept only pokes and swipes from them, and only provide feedback in the form of a little bit of haptic buzzery. He persuasively argues that expanding the repertoire of I/O using hands will produce interfaces that are richer and better:

Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.

Go ahead and pick up a glass of water. Take a sip.

Notice how you know how much water is left, by how the weight shifts in response to you tipping it.

Almost every object in the world offers this sort of feedback. It's so taken for granted that we're usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.

There's a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called "work" for millions of years.

A Brief Rant on the Future of Interaction Design (via Making Light)


  1. Geez Bret, have some patience. Considering that the entire computer industry and culture as we know it is only a little older than I am, considering that the touchscreen technology in phones and tablets was only a sci-fi topic 10 years ago, considering that some devices must actually be made larger than necessary just so they can be usable by our hands, and considering that cycles of obsolescence are measured in months to years, I honestly think it's a bit disingenuous to get all worked up about how design is failing us.

  2. I for one, agree that belling the cat is an excellent solution to the problems we mice face! Now who will do it?

  3. When it comes to human-computer interaction I would rather see a more direct connection to the brain. For some activities I can see touch-based interfaces being fun and somewhat useful, but for the activities most people complete on a regular basis, a sensor reading directly from the brain or the nerves leading from it would be much faster and more convenient. Additionally, this kind of research has benefits for people who could desperately use any advances in this technology.

    Then again the thought police are a terrifying prospect.

  4. From the article:

    “With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?”

    As a friend reminded me, what if a finger’s all you have?

    1. Then the same system of people and groups that keeps you alive at the moment by operating everything else in the world that needs more than a single finger (food, shelter, etc.) can operate this for you too, and you’ll have your own special one-finger system for some things.

  5. Amen for pens! Thank goodness I have room on my desk for a Magic Trackpad, a Magic Mouse *and* a Wacom. Maybe I should get a SpaceNavigator, too.

    But thus far touchscreens have no competition in terms of flexibility (except, perhaps, distantly, traditional pointing devices). I can either have my desk look like a 1960s power plant control station or bring an iPad—and go places.

  6. It’s all very true…. though for my occupation I work with biohazards all day and am usually gloved in gore… I am much more excited by the future possibilities of simply dictating commands to my computer (whatever shape it comes in)

  7. I’m struggling to think of realistic, concrete examples to illustrate what he’s talking about.

    My own computing experience doesn’t seem to be particularly awash with glasses of water and sandwiches that I need to manipulate.

  8. A pressure-sensitive (Wacom-like) art pen and app for the iPad would be sweet. The pen would have to send pressure and angle data to the iPad. Should be possible, no?

    1. Bluetooth would probably be the way to go, but I’m not sure how much of the BT stack is implemented on the iPhone… I know that you can hook up a BT keyboard, so serial data like this should be doable.

      If all else fails, I *suppose* a dock-connector cable would work to get the pen data into the phone. There are plenty of products out there that use the dock connector for data collection. But a pen tethered to the phone with a cable seems a bit… kludgy.

  9. I read this yesterday over at the blue and got my snicker on about how DUMB phone designers are, and on the way to work this morning realized that smart phones most certainly DO require fine motor skills to manipulate. The “pictures under glass” UI is only one aspect of using the device. As far as an iPhone goes, the haptic design is so well understood by the designers of the phone that you completely miss how you hold the phone in various ways to accomplish different tasks, such as locking, volume control, shuffle, etc. This article only focuses on the “glass touch” part and says “neener neener” without even taking one hard look at how the device itself is held and handled by the user. I think it is a great piece of “read me!!!!” but falls apart quickly after that.

  10. I’m an interaction designer by trade, so I feel I can actually comment on this with some vague level of authority (novel).

    The thing with interaction design is that it’s mostly about completing tasks; people want to do X so they can get to Y.  A simple way of looking at it (and it is oversimplified) is that you need to get someone from A to B with as few interactions as possible.  Clicking your left mouse button on a metaphor for a button is quick and easy; cognitively and physically it works a treat.  It’s quick, easy, and the great thing about I/O is that you can achieve it with no hands, or even feet; it’s hugely accessible.

    Interfaces (generally) involve pushing buttons; just like in the real world; and that is an I/O task, no need for caressing knobs and feeling the weight of your bank balance – because you want to complete tasks and you want to complete them efficiently.

    In the realm of fun and entertainment then sure, this has some weight; but we’ve known this for decades, and that’s why good VR environments have tactile feedback and novel interfaces – this isn’t the future of interaction design, it’s part of its past and present too.  Even PS2 controllers had buttons and analogue sticks that were definitely not I/O.

    {EDIT} Roboton makes a great point above, too. When using any interactive device your hands are doing loads of crazy shit. I think the author is imagining someone using an iPhone by laying it down and prodding at it with a clenched fist and a single finger extended. An iPhone is very tactile and intuitive (not that I’d like to use one all day for work), but even with all these novel controls, swipes and gestures, when you want to go somewhere or action something, you likely still just want to jab at a big button.

    1. When you put it like this, I wonder if I/O stands for “input/output” in your sentence, or for “one/zero”?

  11. People don’t want pens or gesture input devices.

    A long, long time ago, like in the 1980’s, when the PC came out, its graphics were crappy. But people bought them anyway. Then a new graphics board came out with 640 x 480 pixels in 16 colours, and people bought them in large numbers…and demanded more. Then a new board came out with 800 x 600 pixels but in monochrome. And people bought it in large numbers and demanded more. Every time an improvement in graphics came out, people bought it in large numbers and demanded more.

    A long, long time ago, like in the 1980’s, when the PC came out, there were pen devices, data gloves, and voice recognition systems. And like the graphics of the time, they were crappy. But unlike the graphics, they were ignored.

    Conclusion: people don’t want pen, gesture, or voice input devices. If they did, they would already be available.

    1. Well they are available, they’re just not widely used.  Which is an even bigger indication that users want simplistic input, not a gruelling physical task.

      Also, another point I forgot to mention in my previous comment:  Interaction Designers don’t design input devices, engineers do.  Interaction Designers design interactions based on the input types being used.  Ultimately the future of interaction design is defined by whoever is making the device that people are interacting with.

    2. Nobody demanded 90% of the pointless increases in graphic complexity.  They were dumped on people from on high and you were told to like it or leave.  The PC industry was, above all, a fashion industry that randomly changed things to generate revenue, the same way hemlines went up and down.  Did we really need an entire WIN-NT upgrade cycle to get “drop shadows” on windows?  How much graphics power do you need to run a spreadsheet?  I suppose people “demanded” beige boxes too.

  12. Peeling a banana? Launch Michael Jackson’s Thriller in Windows Media Player.
    Cup a handful of walnut pieces? Hibernate.
    Clench a robe? Print document.
    Hold a stick? Copy/paste.

    Damn these stupid finger push peripherals!

  13. Yeah, I immediately thought of the haptic gloves in Ready Player One.  Interesting essay, but it does overlook the degree to which the depicted cards and tablets fit with our current comfort with paper and, more importantly, our desire to be able to pick up and put down such objects at will (something that doesn’t work with haptic gloves) or to carry them around easily (hard for a 3-D input device for a desktop machine)…

  14. Speaking of fingertips and feedback, the owner of those in the photograph should consider seeing a doctor if they haven’t already… that’s Wikipedia-grade finger clubbing.

  15. I’ve been thinking about this for some time, but from another angle.  This is a work in progress, but see the videos for an example of an old analogue technology, tied to new digital technology, reinterpreted back onto the original tech for a great combination of the two. It’s my prototype DJ setup: 2x12100s, a mixer, and Serato Scratch Live.

    first overview:

    Changing display for TT:

    TT projection on white vinyl(No interaction):

    A word on the software: the DJ program is Scratch Live, a digital vinyl system. This means it uses control vinyl records, the signal of which is converted to audio by a soundcard, letting the DJ play digital music as if it were on the record itself, with all the tricks that allows.  This is how I get the projection displaying the track information.

    The projection is handled with Quartz Composer (+ Mix Emergency for the second two).  I used the quad warp patch and screen capture.

    I’ve got a lot more ideas for the project, hopefully moving the laptop into the corner for a fully improved but still old-skool DJ setup I can actually play out.

  16. BTW – The pen shown is an example of non-design.  It’s just a simple stupid cylinder made that way because that’s the easiest shape to turn out on a lathe.  See the Fjader pen for a design that actually takes the human hand into consideration.

  17. Desktop computer interface “design” has been one of the greatest failures in human history.  For many decades, until the introduction of the iPod, it didn’t progress at all, and in several cases it actually went backwards, such as the horrendously bad “mouse” for a pointing device.  Take a look at the equipment and paradigms available to gamers for a sad, sad comparison.  The simple addition of a foot pedal, for those Ctl-Alt-Command-Shift-Delete commands that are oh-so-common, was beyond the comprehension of the non-designers involved.  These are the people that gave us the beige box.

    Even worse than the hardware design, which stayed completely static for decades, is the devolution of the software interfaces.  The steampunk boys knew that the human brain remembers radial displacements and not XY ones: hence dials and knobs and not sliders and arrays of buttons.  The vacuum tube radar guys knew that green text on a black background was the correct way to reduce eye-strain.  What we got, instead, was pull-down menus (usually with “Destroy All” next to “Save Work?”) because that was the laziest way possible to code it and black text on a glaring white background guaranteed to cause headaches and permanent eye-damage.

    The horrors of interfaces such as Word, which manages to violate just about every known ergonomics rule, are well documented elsewhere.  And let’s not even start on the “Press any key to continue; press any other key to stop” error messages.

    “While I sat, weak and weary,
      Pondering code, dark and eerie,
      Thus did the phosphors say and nothing more:
      ‘Abort?  Retry?  Ignore?'”

  18. “Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.”

    I tend to do that, right after I partake in some heinously good weed.

  19. Tools are objects that extend our will and more.

    Some see them merely as a means to an end.
    Others are able to relate to them to a different degree.

    A good tool makes life easier and more efficient.

    A good tool can make you more than you are.

    A good tool can connect you with the material world as well as the cosmos.

    Tools can be designed to satisfy personal preferences as well as perform a given task or tasks.
    Ergonomics and aesthetics.

    You can tell a lot about a person just by looking at their tools and the manner in which they use them and respect them.

    Pirsig understood.

    “I’ve noticed that people who have never worked with steel have trouble seeing this… that the motorcycle is primarily a mental phenomenon. They associate metal with given shapes… pipes, rods, girders, tools, parts… all of them fixed and inviolable, and think of it as primarily physical. But a person who does machining or foundry work or forge work or welding sees ‘steel’ as having no shape at all. Steel can be any shape you want if you are skilled enough, and any shape but the one you want if you are not.”

  20. I, too, work in interaction design. And I’ve done work off and on in gestural UIs. I put together a book on Virtual Reality back in the day and spent more hours than I care to admit working with a variety of glove- and vision-based gesture systems. Did my MS thesis on natural gestures.  Take all that for what it’s worth.

    For the most part, gestural UIs are crap. I agree, pictures under glass aren’t like real objects, but that’s probably OK. The video doesn’t really require most of the people in it to deal with physical objects.  Maybe people would be interested in some of the work that has been done in making real-world objects computationally aware – I can’t help but plug my friend and former colleague Hiroshi Ishii and his Tangible Media Group.

    But again that’s not working with “hands” per se – it’s about making the objects (the hammer, for example) into computationally interesting devices that people use naturally because it’s an object, not because we’re making UIs for hands.

Comments are closed.