Turn any hard surface into a touch interface with a contact mic

Bruno Zamborlin, a PhD candidate at IRCAM/Centre Pompidou and Goldsmiths, University of London, and Norbert Schnell, a researcher at Centre Pompidou, created this astounding demo that uses a contact microphone to turn any hard surface into a touch interface. The microphone picks up the vibrations from your touches; the software works out what kind of touch it was and what you touched with, and translates that into music. Don't miss the balloon demo at the end.

Through gesture recognition techniques we detect different kinds of finger touches and associate them with different sounds.

In the video we used two different audio synthesis techniques:

1- physics modelling, which consists in generating the sound by simulating physical laws;

2- concatenative synthesis (audio mosaicing), in which each frame of the contact-microphone signal is matched with its closest frame in a sound database.
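The post doesn't include any of Mogees' actual code, but the core idea of audio mosaicing — replacing each incoming frame with its nearest neighbour from a pre-recorded database — can be sketched roughly like this. The function names, frame size, and the spectral-distance measure are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def concatenative_resynth(live, database, frame=512):
    """Toy concatenative synthesis (audio mosaicing): each frame of the
    live contact-mic signal is replaced by the database frame whose
    magnitude spectrum is closest to it."""
    db_frames = [database[i:i + frame]
                 for i in range(0, len(database) - frame + 1, frame)]
    db_specs = [np.abs(np.fft.rfft(f)) for f in db_frames]
    out = []
    for i in range(0, len(live) - frame + 1, frame):
        spec = np.abs(np.fft.rfft(live[i:i + frame]))
        j = int(np.argmin([np.linalg.norm(spec - s) for s in db_specs]))
        out.append(db_frames[j])
    return np.concatenate(out)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096, endpoint=False)
database = np.sin(2 * np.pi * 220 * t)   # stand-in "instrument" recording
live = rng.standard_normal(1024) * 0.1   # stand-in noisy mic input
out = concatenative_resynth(live, database)
print(out.shape)                         # (1024,)
```

Every output sample comes from the database, so the result "sounds like" the database material while following the timing of the live input — which is the appeal of mosaicing.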

The system can recognise both finger touches and objects that emit a sound, such as the coin shown in the video.
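The recognition step the creators describe — deciding what kind of touch produced a vibration — could be approximated, purely as an illustration, by nearest-neighbour matching on simple spectral features. The feature choices here (energy and spectral centroid) and the template signals are my assumptions, not what Mogees actually uses:

```python
import numpy as np

def features(frame):
    """Crude spectral features for a short audio frame:
    overall energy and spectral centroid (brightness)."""
    spectrum = np.abs(np.fft.rfft(frame))
    energy = float(np.sum(spectrum ** 2))
    bins = np.arange(len(spectrum))
    centroid = float(np.sum(bins * spectrum) / (np.sum(spectrum) + 1e-9))
    return np.array([energy, centroid])

def classify(frame, templates):
    """Nearest-neighbour match against labelled template frames."""
    feats = features(frame)
    best = min(templates, key=lambda t: np.linalg.norm(features(t[1]) - feats))
    return best[0]

# Hypothetical templates: a dull "tap" (decaying low-frequency tone)
# and a bright "scratch" (decaying broadband noise).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
tap = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
scratch = rng.standard_normal(1024) * np.exp(-5 * t)
templates = [("tap", tap), ("scratch", scratch)]

# A new touch resembling a tap should match the "tap" template.
probe = np.sin(2 * np.pi * 55 * t) * np.exp(-5 * t)
print(classify(probe, templates))
```

A real system would use far richer features and a trained classifier, but the pipeline — frame, featurize, match, map to sound — is the same shape.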

Mogees (via Kottke)


  1. Does it do something more than simply translate audio levels to MIDI signals? That’s easy peasy, so I’m hoping it’s not simply that. But from the video, I’m not seeing evidence that actual proximity is being detected, just relative volume. And with a single sensor, I don’t see how it’s possible to get the kind of resolution we expect from a “touch interface”. I’m thinking gimmick, but am certainly open to being wrong.

    But with an explanation like “physics modelling, which consists in generating the sound by simulating physical laws”, I’m thinking they should drop the PhD and go into marketing.

    1. you potentially get more resolution than you do from “touch”, which is typically sampled in the hundreds of Hz range. the data from the mic is analog and likely sampled in the tens of thousands of Hz range. that leaves the resolution of the mic itself – even a cheap contact mic is at least as high resolution as any existing touch sensor. so finally, it’s all down to software … and no, it’s not easy-peasy, though it’s also not super hard. the gestural aspects are the hard bit, and the video doesn’t make it clear how much subtlety there really is.

      your second paragraph suggests a distinct lack of knowledge about sound synthesis.

      1. Your second paragraph suggests a distinct affinity for marketing speak and artist statement mumbo jumbo…

        Are you involved with this project? If so, I’d love to see a more realistic demo of what exactly is new here. Let’s see the software interface and how it’s doing something other than transforming audio levels to MIDI signals. Until then, I call shenanigans.

        1. i’m not involved in it. i am deeply involved in various audio software projects.

          “physical modelling synthesis” is a phrase that has been around for at least 15 years and is commonly used in the field. and yes, it works by running a simulation of a physical object subject to physical laws, with the best stuff simplifying as little as possible.

          but none of that has anything to do with the microphone or the analysis of the signals from the microphone, certainly not directly. however, physical modelling synthesis has a much more interesting set of parameters controlling the generation of sound, and so it’s a much more productive target for the data that you get from high-resolution, high-sample-rate sensing.

          most non-physical-modelling synths have relatively few control parameters, and most of them map to the sound production in ways that aren’t immediately intuitive. by contrast, a physical modelling synth has parameters like “skin tightness” or “coupling strength”, and if the controlling device can modify these directly, the result is often much more intuitively “musical” than if you simply send MIDI notes and CC data to a non-physical synth.

          you might get more of a sense of this sort of thing from this video:

          (this is now being turned into a product by madrona labs)

    1. the use of contact mics to trigger sound is not new, certainly. what is new in this project is the notion of analyzing the signal from the mic with enough sophistication to detect gestures rather than just trigger events.

  2. A further note on cheap electronic drums: while not the same thing, they amount to much the same parts – a surface, a couple of flat rigid objects, a contact mic, and a small amp.

  3. if it’s just triggering a sample, the resolution of the mic’s input doesn’t matter. similar to the way OpenCV works in a camera-based touch-screen application, i’m assuming the threshold can be adjusted, which essentially means ANY signal, no matter how garbled, will trigger the sounds we’re hearing. the obvious differences between a scratching sound and a tapping/clanging/thumping sound can be turned into “gestures”, so to speak, and in this case the same can be said for velocity. but i can’t imagine it’s truly gestural, which would imply that any old contact mic can sense minute differences in proximity and position and translate that arbitrary signal into data. if the software is robust enough, then anything is possible… so whatevs. have at it!

  4. Nobody played the Super Mario Bros’ theme on it so I’m not sure if I’m supposed to like it or not. somebody help!

  5. How is this any different from what John Cage and David Tudor were doing in the ’60s?
    I believe this technique was also used by Merce Cunningham in his choreographies.

    Seems like a modernised version of what’s already been around for 50+ years.
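On the physical-modelling point debated in the replies above: the Karplus-Strong plucked-string algorithm is the textbook minimal example of synthesis by simulating a physical system — a noise burst circulating in a delay line with loss, where a parameter like damping maps directly onto an audible "physical" quality. This is a generic sketch, unrelated to the Mogees code:

```python
import numpy as np

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Karplus-Strong plucked string: a noise burst fed through a delay
    line with an averaging (lowpass) filter and damping in the feedback
    loop. `damping` behaves like a physical parameter (string loss)."""
    n = int(sample_rate * duration_s)
    delay = int(sample_rate / freq_hz)      # delay-line length sets the pitch
    buf = np.random.default_rng(1).uniform(-1, 1, delay)  # the "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # average adjacent samples (lowpass) and apply loss (damping)
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

note = karplus_strong(220.0, 1.0)   # one second of an A3 "string"
print(len(note))
```

Raising `damping` toward 1.0 sustains the note longer, much as a real string with less loss rings longer — which is the "intuitively musical parameters" argument made in the thread.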
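For comparison, the "just a threshold trigger" hypothesis from comment 3 — the baseline Mogees would need to exceed — looks roughly like this: an amplitude threshold with a refractory period, reporting the nearby peak level as velocity. Names and parameters here are hypothetical, not the project's method:

```python
import numpy as np

def detect_triggers(signal, threshold, sample_rate=44100, refractory_s=0.05):
    """Naive onset trigger: fire whenever the signal crosses `threshold`,
    then ignore further crossings for a short refractory period so one
    tap doesn't fire repeatedly. Velocity is just the local peak level."""
    refractory = int(refractory_s * sample_rate)
    triggers = []
    i = 0
    while i < len(signal):
        if abs(signal[i]) >= threshold:
            window = signal[i:i + refractory]
            triggers.append((i, float(np.max(np.abs(window)))))  # (sample, velocity)
            i += refractory
        else:
            i += 1
    return triggers

# Synthetic recording: silence, a soft tap, silence, a loud tap.
sig = np.zeros(44100)
sig[10000] = 0.3
sig[30000] = 0.9
print(detect_triggers(sig, threshold=0.1))  # [(10000, 0.3), (30000, 0.9)]
```

Note that this produces only onset times and levels — exactly the event-plus-velocity data of a cheap drum trigger — with nothing gestural about it, which is why the thread's distinction between triggering and gesture analysis matters.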
