Chemistry of the future: 3D models and augmented reality

In a very cool video from Chemical and Engineering News, Art Olson of the Scripps Research Institute explains how chemists in his lab can predict how well the drugs they develop will work.

Olson's lab prints 3D models of molecular structures, both targets—like the HIV protease enzyme in the video—and the drugs they've made to bind to those targets. The models are rigged up so that when Olson holds them in front of a webcam, they instantly interact with chemical analysis software his team has built. The result is a system that allows researchers to see, physically, how well the drugs fit their targets, and simultaneously test how well the two are likely to bind on a chemical level.

Thanks, Aaron Rowe!



  1. That’s very convenient and impressive – however, it’s not a dramatically different approach from what we were doing with CPK models on the desk in front of (vector!) displays back in the stone age (c. 1988)

    1. The difference is that now, the energy calculations are much more precise.  And the chicks.  Don’t forget about the chicks.

  2. Why is this a good idea? Or is it just a good promo for fundraising?

    If you have enough data to perform the 3-D printing, you have enough data to render the molecules in silico. Once rendered in silico, the computer can algorithmically do a much more exhaustive search than a human, and without the stubby fingers getting in the way. Nor is the analysis confined to the resolution of a webcam.

    Does this lab have any successes with this method that outperformed the obvious virtual approach?

    1. Of course you are correct. The same program that we use to compute the interaction energy of the manipulated tangible models is used for automated computational docking. Our open-source AutoDock code has been downloaded by over 30,000 laboratories worldwide and is used for “virtual screening” of very large chemical libraries (see, for example, our FightAIDS@Home project). The purpose of the tangible interactions is quite different. The tangible models have perceptual advantages over computer-generated imagery, and allow “playful” exploration for quick “what if” types of questions. We are exploring the augmented-reality tangibles as novel computer interfaces for both research applications and educational applications.

    2. I don’t know why, but this argument is giving me flashbacks to the program meant to figure out how to get the sofa up the stairs in Dirk Gently’s Holistic Detective Agency.

  3. It’s a little bit of both. The search space for docking two rigid bodies is unexplorably huge. How huge? This huge (I’m 90% sure that was a docking calculation).
    The systematic optimization algorithms work like dumb jigsaw-puzzle players: choose a random orientation, press to fit; if it doesn’t fit, rotate slightly and try again. Move it slightly, try again. If you let a human direct the process, however (even just by giving a good initial guess), the search has a much better chance of succeeding. This interface would be a reasonable way to do that.

    The algorithms rapidly get worse the more realistic you make the problem. If you let one molecule be even slightly flexible, as molecules are in real life, human intuition can easily outperform the existing algorithms (see the recent press surrounding Foldit).
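    The jigsaw-player search described above can be sketched in a few lines. This is a toy 2D illustration, not the lab’s actual software: the point sets, the nearest-neighbor “energy” score, and the perturbation sizes are all made up for the example. It does show the key idea, though — the same perturb-and-keep-if-better loop does far better when seeded with a human-supplied initial pose near the right answer.

    ```python
    import math
    import random

    def transform(points, angle, tx, ty):
        """Rigid-body move: rotate by `angle` radians about the origin, then translate."""
        c, s = math.cos(angle), math.sin(angle)
        return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

    def score(ligand, pocket):
        """Lower is better: sum of squared distances from each ligand point to its
        nearest pocket point (a crude stand-in for an interaction energy)."""
        return sum(min((lx - px) ** 2 + (ly - py) ** 2 for px, py in pocket)
                   for lx, ly in ligand)

    def local_search(ligand, pocket, pose, steps=2000, seed=0):
        """The 'dumb jigsaw player': perturb the pose slightly, keep the move if
        the fit improves, otherwise discard it and try again."""
        rng = random.Random(seed)
        best = pose
        best_s = score(transform(ligand, *best), pocket)
        for _ in range(steps):
            a, tx, ty = best
            trial = (a + rng.gauss(0, 0.05),
                     tx + rng.gauss(0, 0.1),
                     ty + rng.gauss(0, 0.1))
            s = score(transform(ligand, *trial), pocket)
            if s < best_s:
                best, best_s = trial, s
        return best, best_s

    # A "pocket" built so the ligand fits exactly at angle 0.5, shift (3, -2).
    ligand = [(0, 0), (1, 0), (1, 1), (2, 1)]
    pocket = transform(ligand, 0.5, 3.0, -2.0)

    # Cold start vs. a human-supplied guess near the true pose.
    _, cold = local_search(ligand, pocket, (0.0, 0.0, 0.0))
    _, warm = local_search(ligand, pocket, (0.4, 2.8, -1.8))
    print(f"cold-start score: {cold:.4f}")
    print(f"warm-start score: {warm:.4f}")
    ```

    With a warm start the search reliably refines down toward the true pose; from a cold start it can wander or stall in a local minimum, which is exactly the gap a tangible interface could fill.
    
    
    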

Comments are closed.