Brain scans reveal our mind movies?

UC Berkeley researchers used brain scans of the visual cortex and computational models to reconstruct what the individual is seeing. From UC Berkeley:
As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”

Scientists use brain imaging to reveal the movies in our mind

BB pal Jim Leftwich points out that the reconstructed video looks strikingly similar to how images from a science fiction "dream recorder" were represented in Wim Wenders' captivating 1991 film Until the End of the World. Here's a frame grab Jim made from his VHS tape of the movie.


  1. While it’s cool that reality has finally caught up with Wim Wenders’ brain-scan movies, I’d probably trade this for software as cool as detective Winter’s Nintendo-style people-tracking software or the Russian detective’s Bounty Bear program (which ran on “Vietnamese chips”).

    …just a minute… I’m searching… searching…

  2. Think of how this will affect relationship dynamics. Over the long term I can see benefit in this kind of transparency, but damn will it F some people’s ideals of “love” and “sanctity”. Whatever those things mean, anyway, they’re sure to change.

  3. Am I the only one that sees a catch-22 with this?

    If you can only scan for things that have been known to have been seen, then does that mean they are using the original source of video/picture to help recreate the neural image?

    In a way it’s like the chicken and the egg.  I can understand if they need to calibrate the hardware/software to your brain, but after a few dozen tests shouldn’t it work by itself?

    1. Yes, they say clearly they ARE using original pictures to recreate these images: a giant database of frames from YouTube.

      I don’t really know what’s going on here, but it’s not showing a picture of what’s inside the brain. It’s matching fMRI data with youtube frames.

      Anyone know wtf this is?

      1. Based on the Berkeley press release, I think the way it works is this. First, they showed a few movie trailers to a person while scanning their brain; the computer program was then fed both the visual information from the trailers and the information about the brain activity, and it tried to figure out the correlations between them. Based on that, the computer made predictions about what the brain activity might look like for a huge set of brief segments of YouTube videos in its archive (I think each segment was only 1 second long, although the wording wasn’t totally clear in the article); these YouTube clips weren’t actually shown to the person.

         After that, they showed the person a different set of movie trailers, but this time the computer had no access to the visual information from the trailers themselves; it was only given information about the person’s brain activity while watching them. For each brief segment of brain activity (1 second, if I read it correctly), the computer compared the real activity to the simulated brain activity it had predicted for all the YouTube clips. Based on this, for each short segment it picked the 100 YouTube clips whose simulated brain activity best matched what was actually going on in the person’s brain, and averaged those 100 clips together to get the final result.

      2. As best I can tell:

        1. People shown clip.
        2. Scan taken.
        3. Frame-by-frame scan compared to database of YouTube clips that DOES NOT include the clip shown prior to the scan.
        4. Tech uses best-match frames to attempt to reconstruct original clip.

        1. That’s a little off, as I said in an earlier comment it’s more like:

          1. People shown movie clips while brain scan is being taken
          2. Info about both clip and scan is fed into computer
          3. Computer figures out correlations between the two, and uses that to predict what brain activity might look like for database of youtube clips that weren’t actually shown to people
          4. People shown different movie clips while another brain scan is being taken
          5. Computer is fed info from second brain scan (not from second set of clips)
          6. Computer finds best match between second brain scan and predicted brain activity in youtube clips, averages together best-match youtube clips to get final result
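The matching-and-averaging procedure those numbered steps describe can be sketched in a few lines. This is a toy illustration only, not the researchers' code: the library size, voxel count, frame dimensions, and the "predicted activity" values below are random stand-ins for what the real study learned from fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's learned model and clip library.
n_library = 5000        # YouTube clips in the library (the real one was far larger)
n_voxels = 200          # voxels in the visual-cortex scan
frame_shape = (32, 32)  # toy grayscale "frame" representing each 1-second clip

# Step 3 above: predicted brain activity for every library clip, plus the clips themselves.
predicted_activity = rng.standard_normal((n_library, n_voxels))
library_frames = rng.random((n_library, *frame_shape))

def reconstruct(observed_activity, k=100):
    """Average the frames of the k library clips whose *predicted* activity
    best matches the observed scan (steps 5-6; Pearson correlation as the score)."""
    # z-score the observed voxel pattern and each predicted pattern
    z_obs = (observed_activity - observed_activity.mean()) / observed_activity.std()
    z_lib = (predicted_activity - predicted_activity.mean(axis=1, keepdims=True)) \
            / predicted_activity.std(axis=1, keepdims=True)
    scores = z_lib @ z_obs / n_voxels          # correlation per library clip
    best = np.argsort(scores)[-k:]             # indices of the k best matches
    return library_frames[best].mean(axis=0)   # averaged reconstruction

observed = rng.standard_normal(n_voxels)       # one second of (toy) scan data
frame = reconstruct(observed)
print(frame.shape)  # (32, 32)
```

The averaging is why the published reconstructions look so blurry and dreamlike: each output frame is a blend of 100 different library clips, not a single retrieved image.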

  4. I’ve been obsessed with this since being haunted by that Wim Wenders movie when I was a teen. I don’t see some of the Orwellian bad news in this that some of my pals do. And I think it’s a huge, amazing step towards the goal of interpreting brain activity, but this is pretty rudimentary: they’re matching the MRI info of image-calibrated brains to existing footage. It isn’t strict visual information, just impulses related to already-seen information. Which is still amazing, but I’m impatient for the real goal. I want to get to SEE my dreams again.

    1. Abbie, there are two versions of Until the End of the World by Wim Wenders: the regular cut on one DVD, and an epic several-hour version on three DVDs. Both are great. Both only exist as imports, and the long version is missing English subtitles for the moments when non-English is spoken.

      For many years, people hoped that the film company Criterion would license and release the long version on DVD in the United States. Then they hoped they’d release it on BluRay. Then they hoped other companies would. Then they gave up. Wim Wenders gets no respect in the US.

  5. I remember an old SF story….William Gibson maybe?…that involved the ability to record the music in people’s minds and thus the best “musicians” didn’t even have to be able to play an instrument.  There was a passage in there along the lines of, “imagine all the people in the history of the world that made wonderful music in their mind, but lacked the technology to extract it, and all the beautiful music the world will never hear because of it.”

    Or something like that.

    1. At Northwestern they *are* working on capturing the audio signal from inside the brainstem, by the way.  They can play back clips based on neural activity.

      It seems much more focused and direct than this stuff.  The big mystery is that they are finding as much top-down action as bottom-up, even in what would seem to be fairly passive audition.

  6. So… if neither Netflix nor Apple is renting a copy of Until the End of the World, where does one rent a dvd these days? Sounds like a legitimate case for finding a copy on the torrents. 

    1. Duncan Creamer wrote “So… if neither Netflix nor Apple is renting a copy of Until the End of the World, where does one rent a dvd these days?”

      How about a good library?

  7. Does anyone know if the clip above represents the entirety of the presented and reconstructed clips?  If this is unedited, it’s pretty spectacular.  If it’s edited I don’t see how this is much different from the researchers giving themselves a Rorschach test or watching the Wizard of Oz to the Dark Side of the Moon :)

  8. I’m tweaking on this story. I’m a visual artist and I can tell you that my creative process involves me recreating the fully realized visions of paintings that I dream. These mental images are very clear to me. If it is now possible to see those images before I paint them…well…holy eff.

    I have also just had a wave of terror wash over me thinking about the possibility of seeing our dreams, as one YouTube commenter has postulated about this technology. I can’t imagine how seeing the sometimes horrific/random images that we do in our sleep would be anything but psychologically scarring. Call me a luddite, but I don’t want to see my psyche, thanks very much.

    Apologies for the double-post, but I’m bowled over by this. Thanks David, for the link and the nightmares.

  9. I haven’t read the research in detail, but here is a quick description of the paper. Three people each viewed 2 hours of movies while in an fMRI scanner. The researchers then modeled the changes in the fMRI signal relative to the movies and tested the model by showing the subjects 9 minutes of new movies (not in the first 2 hours). The images on the right are the result of averaging the movies which produced the most similar fMRI signals.
    It’s an interesting idea, but their claim that this could be used for decoding dreams or hallucinations is very far-fetched.

  10. When cutting edge AI is more about blowing people to smithereens, does anyone really think this will be used for anything remotely humane before we get dream recorders?

  11. Retinal information projects onto visual cortex, from which this information is inferred (using fMRI).

    Dreams likely influence primary visual cortex, but are not similar to retinal projections. They have components from across many parts of the brain, and aren’t played like a movie onto visual cortex.

    A better way to record dreams would be to induce people to speak while dreaming.

    1. Well, all experiences have “components from across many parts of the brain”, and dreams have plenty of subjective elements that aren’t visual–but if we’re interested in the purely visual aspects of dreams, is the brain activity in the visual cortex noticeably different from that of a waking experience?

      1. Yes. Although they can’t be compared directly, all indications are that visualization results in very different patterns of activity in the visual cortex compared to visual stimuli.

        1. But when you say “visualization”, are you talking about waking visualization? When I try to visualize something in my imagination it’s not remotely as vivid and detailed as a real object in my visual perception, whereas dream objects seem to come pretty close (I sometimes try to investigate this in lucid dreams, the level of detail in things like tree branches really is pretty amazing). Are there studies that show specifically that activity in the visual cortex during dreams is very different from activity during waking perception?

          1. I don’t know of any neuroimaging with respect to dream imagery. Of course the difficulties of gathering data about brain function and the details of the perceived imagery are problematic, and will be problematic in any attempt to correlate brain activity with perception during dreaming. So it is not possible to either confirm or discount the possibility that activity during dream imagery is strongly correlated with visual cortex activity. Expert lucid dreamers are hard to come by, and harder to certify.

            Wikipedia’s “Cognitive neuroscience of dreams” entry has a good overview of what (little) is known about brain activity related to perception during dreaming.

            But many waking tasks involve visual imagery (e.g. mental rotation). There is some evidence that the vividness of imagery is different for different people for different tasks, and some evidence that “vividness” correlates with activation levels in visual cortex (but not necessarily with the same spatial or temporal organization as a similar visual stimulus).

            Here’s a relevant reference: “Cortical activation evoked by visual mental imagery as measured by fMRI”

  12. Until the End of the World is one of the absolute best freaking movies, import it now, and try to get the long version.  I bought the long version and haven’t watched it yet, I’m waiting for the absolute right 4 hour stretch!  :)  It’s just so good, and yes, oddly synchronistic in terms of the falling satellite.  Well good, dammit, if life is becoming more like Until the End of the World, I’m happy!  :)

  13. While cool, this is not really that amazing of a thing.  This is not reading memory of any sort. It is reading neural activity in the visual cortex, *while* an image is being viewed.  If you know much about the visual cortex, you may recall that there is a very well-defined mapping between the retina & the primary visual cortex.  I’d be amazed if this mapping is not what is driving the process in question.  

  14. Neat. Things I can do with this:

    1. Create pictures and diagrams for presentations.

    2. Create my own movie (starring me, of course).

    3. Show people what I’m talking about in video-chat (yes, live, real-time updates).

  15. So what if they’re only matching fMRI scans to YouTube frames currently? They’re building a library of what scans “look” like.  Once they have a large enough sample database, they can begin synthesizing the playback images to create an original image.

    Think about audio synthesis as an analogy: you probably could not discern a synthesized piano, guitar, or drum sound from a well-recorded multi-sampled version. There may be a few indicators at the edges (such as synthesizing some guitar techniques like palm muting or different kinds of sliding), but it’s convincing enough that most non-musicians won’t notice.
