Detect your pulse with your webcam

Thearn released a free/open program for detecting and monitoring your pulse using your webcam. The code is on GitHub for you to download, play with, and modify. If this stuff takes your fancy, be sure to read "Eulerian Video Magnification for Revealing Subtle Changes in the World," an inspiring paper describing the techniques Thearn uses in his code:

This application uses OpenCV to find the location of the user's face, then isolates the forehead region. Data is collected from this location over time to estimate the user's heartbeat frequency. This is done by measuring the average optical intensity in the forehead location, in the subimage's green channel alone. Physiological data can be estimated this way thanks to the optical absorption characteristics of oxygenated hemoglobin.

With good lighting and minimal noise due to motion, a stable heartbeat should be isolated in about 15 seconds. Other physiological waveforms, such as Mayer waves, should also be visible in the raw data stream.

Once the user's pulse signal has been isolated, temporal phase variation associated with the detected heartbeat frequency is also computed. This allows the heartbeat frequency to be exaggerated in the post-process frame rendering, causing the highlighted forehead location to pulse in sync with the user's own heartbeat (in real time).

Support for pulse detection on multiple simultaneous people in a camera's image stream is definitely possible, but at the moment only the information from one face is extracted for cardiac analysis.

thearn / webcam-pulse-detector (via O'Reilly Radar)
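
The green-channel measurement described in the quote is straightforward to prototype. Here's a minimal sketch (not Thearn's actual code): it uses OpenCV's bundled Haar cascade to find a face, takes a fixed fraction of the face box as the forehead, averages the green channel each frame, and picks the dominant FFT peak in a plausible heart-rate band. The cascade choice, ROI proportions, buffer length, and band limits are all assumptions for illustration.

```python
# Sketch of the green-channel pulse estimate: face -> forehead ROI ->
# mean green intensity per frame -> dominant frequency in 45-180 BPM.
import time
import cv2
import numpy as np

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def forehead_roi(frame, face):
    """Take the upper-middle slice of the detected face box as the forehead."""
    x, y, w, h = face
    fx, fy = x + int(0.25 * w), y + int(0.05 * h)
    fw, fh = int(0.5 * w), int(0.2 * h)
    return frame[fy:fy + fh, fx:fx + fw]

def estimate_bpm(samples, times):
    """Dominant frequency of the green-channel trace, restricted to 45-180 BPM."""
    samples = np.asarray(samples) - np.mean(samples)
    fps = len(samples) / (times[-1] - times[0])
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fps)   # in Hz
    power = np.abs(np.fft.rfft(samples)) ** 2
    band = (freqs >= 0.75) & (freqs <= 3.0)               # 45-180 BPM
    return 60.0 * freqs[band][np.argmax(power[band])]

cap = cv2.VideoCapture(0)
greens, stamps = [], []
for _ in range(450):                                       # roughly 15 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        roi = forehead_roi(frame, faces[0])
        greens.append(roi[:, :, 1].mean())                 # green channel only (BGR)
        stamps.append(time.time())
cap.release()

if len(greens) > 30:
    print("Estimated pulse: %.0f BPM" % estimate_bpm(greens, stamps))
```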
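
The exaggeration step is harder to reproduce faithfully: the paper's Eulerian approach filters per-pixel temporal variation across a spatial decomposition of the frame. As a much cruder illustration of the idea, the sketch below band-passes the already-extracted green-channel trace around the detected rate and uses it as a uniform gain on the forehead pixels, so the region visibly brightens in sync with the estimated pulse. The filter design and gain here are guesses, not values from the paper or the repository.

```python
# Crude visualization of the detected pulse: band-pass the green trace around
# the estimated heart rate, then brighten the forehead ROI in proportion to it.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_pulse(greens, fps, bpm, half_width_bpm=10.0):
    """Zero-phase band-pass of the green-channel trace around the detected rate."""
    lo = max(0.1, (bpm - half_width_bpm) / 60.0)           # Hz
    hi = (bpm + half_width_bpm) / 60.0
    b, a = butter(2, [lo / (fps / 2.0), hi / (fps / 2.0)], btype="band")
    return filtfilt(b, a, np.asarray(greens) - np.mean(greens))

def exaggerate(frame_roi, pulse_value, gain=20.0):
    """Brighten the forehead ROI's green channel in proportion to the pulse signal."""
    out = frame_roi.astype(np.float32)
    out[:, :, 1] += gain * pulse_value
    return np.clip(out, 0, 255).astype(np.uint8)
```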


  1. Once I learn the trick to reading other people’s pulses from facial cues, my poker game will improve significantly.

  2. This is amazing work & as a signals processing guy I really like it. But there is one danger when looking at the videos.  According to the 3rd paragraph quoted above, they isolate the pulse signal and then exaggerate the colors — not the other way around. It’s tempting to see those exaggerated colors and think “wow, that’s so easy to extract the pulse from, it must be reliable” .. but it’s more a visualization & verification of data than raw source data. I suspect if you amplified the wrong signal you wouldn’t get such uniformity across the forehead … but I don’t really know – it would be interesting to see how and when this analysis breaks down. I’m looking forward to reading up on this tonight!

    1. I’m the programmer of the linked code (thearn); I actually had the exact same thoughts when I reviewed the original work from MIT. 

      Post-process amplification of phenomena detected from an analysis is pretty neat looking, but only as reliable as the analysis itself.

  3. I’m working with(/for) the original paper’s authors to develop an Android app that does the same thing, but also a lot more. 

    There are more advanced algorithms that haven’t been published yet which we hope to implement for even better and cleaner results.

    1. iOS has had an app with this out for a while, right? When I looked on Android, I only found apps that ask you to place your finger over the camera, and they generally worked very poorly. Is the app any harder to develop for Android, for any reason?

      1. I wouldn’t know anything about relative difficulty to code, having never coded on iOS. My app isn’t primarily meant for measuring pulse, though it is a feature. It’s meant to be a general purpose color and motion amplification app. As of now I have the basic color amplification algorithm (as presented in the linked paper) working, but the UX is currently garbage, and that’s something I have little experience with. Also, like I said, there are some more advanced algorithms that have been shown to have much better results (more robust, less noise, etc.) that haven’t been published yet that I plan to implement after I get the UX down.

  4. Back in 2009, I wrote my MSc dissertation on this technique, and developed it using the method in Takano (2007).
