Researchers revealed today that an A.I. can essentially "read minds" by analyzing fMRI brain scans and translating them into words that reflect a human subject's private thoughts. The University of Texas at Austin neuroscientists published their astonishing results in the journal Nature Neuroscience. They trained the A.I. on measurements of the subjects' brain activity recorded as they listened to narrative podcasts, and the A.I. learned to match those activity patterns with particular words and phrases from the podcast scripts (a toy sketch of that matching step follows the excerpt below). From the New York Times:
In the study, it was able to turn a person's imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen.
"This isn't just a language stimulus," said Alexander Huth, a neuroscientist at the university who helped lead the research. "We're getting at meaning, something about the idea of what's happening. And the fact that that's possible is very exciting." […]
This language-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.
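To make the "matching" step above concrete, here is a minimal toy sketch of one way such a decoder can work, not the authors' actual pipeline. It assumes, hypothetically, that each stretch of podcast speech is represented by a text-embedding vector and each fMRI scan by a voxel-activity vector; a ridge regression learns to predict one subject's voxel activity from the text, and decoding then means picking whichever candidate phrase's predicted brain response best matches what the scanner recorded. The sizes, the embedding representation, the `Ridge` model, and the `decode` helper are all illustrative stand-ins.

```python
# Toy sketch of a per-subject fMRI "decoder" -- illustrative only, not the
# study's method. Assumes phrases arrive as embedding vectors and scans as
# voxel-activity vectors; all data below is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples, n_embed, n_voxels = 500, 64, 1000  # made-up sizes, not the study's

# Synthetic stand-ins: phrase embeddings and the brain responses they evoke.
phrase_embeddings = rng.standard_normal((n_samples, n_embed))
true_mapping = rng.standard_normal((n_embed, n_voxels))
brain_responses = (phrase_embeddings @ true_mapping
                   + 0.1 * rng.standard_normal((n_samples, n_voxels)))

# "Encoding model": learn to predict this subject's voxel activity from text.
encoder = Ridge(alpha=1.0).fit(phrase_embeddings, brain_responses)

def decode(observed_activity, candidate_embeddings):
    """Return the index of the candidate phrase whose predicted brain
    response correlates best with the observed fMRI activity."""
    predicted = encoder.predict(candidate_embeddings)
    scores = [np.corrcoef(p, observed_activity)[0, 1] for p in predicted]
    return int(np.argmax(scores))

# Usage: decode a held-out scan against a handful of candidate phrases.
candidates = rng.standard_normal((10, n_embed))
observed = candidates[3] @ true_mapping  # scan evoked by candidate 3
print(decode(observed, candidates))      # recovers index 3 on this toy data
```

The per-subject limitation quoted above falls out naturally in a setup like this: the fitted encoder weights are specific to one person's voxel responses, so a decoder trained on one brain would score candidates poorly against another's scans.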