Chemistry of the future: 3D models and augmented reality


8 Responses to “Chemistry of the future: 3D models and augmented reality”

  1. herrnichte says:

    That’s very convenient and impressive – however, it’s not a dramatically different approach from what we were doing with CPK models on the desk in front of (vector!) displays back in the stone age (c. 1988).

  2. Art says:

    “3D models of augmented reality”-Starring Al Pacino.  HOO-WAAA!

  3. public bizmail says:

    Why is this a good idea? Or is it just a good promo for fundraising?

    If you have enough data to perform the 3-D printing, you have enough data to render the molecules in silico. Once rendered in silico, the computer can algorithmically do a much more exhaustive search than a human, and without stubby fingers getting in the way. Nor is the analysis confined to the resolution of a web cam.

    Does this lab have any successes with this method that outperformed the obvious virtual approach?

    • arthurolson says:

      Of course you are correct. The same program that we use to compute the interaction energy of the manipulated tangible models is used for automated computational docking. Our open-source AutoDock code has been downloaded by over 30,000 laboratories world-wide and is used for “virtual screening” of very large chemical libraries (see, for example, our FightAIDS@Home project). The purpose of the tangible interactions is quite different. The tangible models have perceptual advantages over computer-generated imagery, and allow “playful” exploration for quick “what if” type questions. We are exploring the augmented-reality tangibles as novel computer interfaces for both research and educational applications.

    • Restless says:

      I don’t know why, but this argument is giving me flashbacks to the program meant to figure out how to get the sofa up the stairs in Dirk Gently’s Holistic Detective Agency.

  4. Alan Chen says:

    It’s a little bit of both.  The search space for docking two rigid bodies is unexplorably huge.  How huge?  This huge (I’m 90% sure this was a docking calculation).
    The systematic optimization algorithms work like dumb jigsaw-puzzle players.  Choose a random orientation and press to fit; if it doesn’t fit, rotate slightly and try again.  Move it slightly, try again.  If you could allow a human to direct the process, however (even just by giving a good initial guess), the search would have a much better chance of succeeding.  This interface would be a reasonable way to do that.

    The algorithms rapidly become worse and worse the more realistic you make the problem. If you let one molecule be slightly flexible, as molecules are in real life, human intuition can easily outperform the existing algorithms (see the recent press surrounding FoldIt).
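    The “dumb jigsaw-puzzle player” loop described above can be sketched in a few lines. This is only a toy illustration: the quadratic `score` function below is a made-up stand-in for a real interaction-energy model (it is not AutoDock’s scoring function), and the pose is reduced to three numbers for clarity.

    ```python
    import random

    # Hypothetical "best pose": (rotation, x-translation, y-translation).
    # A real docking code would score full 3-D rigid-body poses instead.
    BEST = (1.0, 2.0, -1.0)

    def score(pose):
        """Toy interaction energy: lower is better, minimum 0 at BEST."""
        return sum((p - b) ** 2 for p, b in zip(pose, BEST))

    def dumb_jigsaw_search(initial_guess=None, steps=20000, step_size=0.05, seed=0):
        """Blind local search: start from a random (or human-supplied) pose,
        then repeatedly nudge it slightly and keep any nudge that fits better."""
        rng = random.Random(seed)
        if initial_guess is not None:
            pose = list(initial_guess)          # human-directed starting point
        else:
            pose = [rng.uniform(-10, 10) for _ in range(3)]  # random start
        best = score(pose)
        for _ in range(steps):
            trial = [p + rng.gauss(0, step_size) for p in pose]  # rotate/move slightly
            s = score(trial)
            if s < best:                        # keep the nudge only if it improves
                pose, best = trial, s
        return pose, best

    # With a good human initial guess, far fewer steps are needed to converge
    # than from a random start in the huge search space.
    pose, energy = dumb_jigsaw_search(initial_guess=[1.2, 1.8, -0.8], steps=2000)
    ```

    The point of the sketch is the asymmetry: the inner loop is trivially dumb, so the quality of the starting pose dominates, which is exactly where a human moving a tangible model could help.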

  5. am i the only one that thought this was an augmented reality ball of gummy bears?
