Adversarial mind-reading with compromised brain-computer interfaces

"On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces," a paper presented by UC Berkeley and University of Geneva researchers at this summer's USENIX Security symposium, explored the possibility of adversarial mind-reading attacks on gamers and other people using brain-computer interfaces, such as the Emotiv game controller.

The experimenters wanted to know whether they could forcibly extract information from your brain by taking control of your system. In the experiment, they flashed images of random numbers and used the brain's automatic response to guess which digits were in their subjects' ATM PINs. In another variant, they watched subjects' brain activity while flashing a bank's logo, then guessed whether the subject used that bank.
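The underlying trick (the paper relies on the well-known P300 "oddball" response, as a commenter notes below) is that a stimulus the subject recognizes evokes a characteristic positive deflection in the EEG roughly 300 ms after it appears, and averaging over many repeated flashes makes that deflection stand out from background noise. Here's a toy sketch of the ranking step using simulated EEG rather than real recordings; the sampling rate, amplitudes, noise level, and trial counts are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250       # sampling rate in Hz (invented for this sketch)
n_trials = 60  # number of flashes per candidate digit

def simulate_epochs(digit_is_target):
    """Simulated EEG epochs, one per flash. If the digit is the one the
    subject recognizes, each epoch carries a P300-like bump at ~300 ms."""
    t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch, time-locked to the flash
    epochs = rng.normal(0, 5.0, size=(n_trials, t.size))  # background noise (µV)
    if digit_is_target:
        p300 = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump at 300 ms
        epochs += p300
    return t, epochs

target_digit = 7  # the "secret" PIN digit in this simulation
scores = {}
for digit in range(10):
    t, epochs = simulate_epochs(digit == target_digit)
    avg = epochs.mean(axis=0)            # averaging suppresses the noise,
    window = (t >= 0.25) & (t <= 0.45)   # keeps the stimulus-locked response
    scores[digit] = avg[window].mean()   # mean amplitude in the P300 window

guess = max(scores, key=scores.get)
print(guess)  # prints 7 for this seed: the recognized digit wins
```

Averaging works because the background EEG is roughly zero-mean noise, so it shrinks like 1/√N across trials while the stimulus-locked response doesn't. With real recordings an attacker would also need artifact rejection and a trained classifier, but the principle is the same.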

I suppose that over time, an attacker who was able to control the stimulus and measure the response could glean a large amount of private information from a victim, without the victim ever knowing it.

Brain-computer interfaces (BCI) are becoming increasingly popular in the gaming and entertainment industries. Consumer-grade BCI devices are available for a few hundred dollars and are used in a variety of applications, such as video games, hands-free keyboards, and relaxation training. There are application stores similar to the ones used for smartphones, where application developers have access to an API to collect data from the BCI devices.

The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored. We take a first step in studying the security implications of such devices and demonstrate that this upcoming technology could be turned against users to reveal their private and secret information. We use inexpensive electroencephalography (EEG) based BCI devices to test the feasibility of simple, yet effective, attacks. The captured EEG signal could reveal the user’s private information about, e.g., bank cards, PIN numbers, area of living, the knowledge of the known persons. This is the first attempt to study the security implications of consumer-grade BCI devices. We show that the entropy of the private information is decreased on the average by approximately 15%–40% compared to random guessing attacks.

On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces
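To put that 15%–40% entropy-reduction figure in concrete terms, here's some back-of-the-envelope arithmetic (my extrapolation, not a calculation from the paper): a uniformly random 4-digit PIN carries about 13.3 bits of entropy, so reductions of that size would shrink an attacker's expected search space from 10,000 PINs to on the order of 2,500 or 250.

```python
import math

# A uniformly random 4-digit PIN carries log2(10^4) ≈ 13.3 bits of entropy.
h0 = 4 * math.log2(10)

# The paper reports entropy reductions of roughly 15%-40% versus random guessing.
for cut in (0.15, 0.40):
    h = h0 * (1 - cut)
    print(f"{cut:.0%} reduction: {h:.1f} bits remain, ~1 in {2 ** h:,.0f} PINs")
```

That's a long way from plucking the PIN straight out of your head, but it's a real head start for a guessing attack.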


        1. Redundancy in responses is the most reliable information one can know about someone’s mental state.

  1. Just wait for the malware!  Imma develop a BCI malware projector that induces Tourette Syndrome in a lexicon limited to consumer product brand names and sex acts in the host brain.

    Then I’ll charge companies to keep their brands out of the lexicon.


    “Fuck Snickers!” “C**t Staples!”

    1. Or you could go to work for Sony and write a not-a-rootkit to detect people thinking about piracy and have them sing adverts for other Sony products every time they listen to a stolen song on their iPod! Think of the possibilities!

      1. PS I have patented this idea and if you even think about violating it I’ll make you punch you in the nose.

      1. “We show that the entropy of the private information is decreased on the average by approximately 15%–40% compared to random guessing attacks.”

        It’s known-feasible (if these brain interface widgets were incapable of extracting any information, they wouldn’t actually do anything…); but the power of the technique is fairly low compared to even common and legal methods of questioning. Also, because the consumer devices are severely cost-sensitive, a state user could likely get much better results, even without breaking out the bone drill, just by moving up to classier sensors.

        The really interesting thing would be if such interface hardware were to become extremely common and widely accepted. Just as ‘social’ changed the game on privacy by making it trivial to collect information that was never strictly private but used to be confined to the target’s social circle, this sort of inferential attack could get quite interesting indeed if every malicious flash applet and adware bot is flashing test stimuli at you…

  2. Pictured: an interrogator uses an early prototype of the device in question to determine that the young man at his door is collecting donations for the Coast Guard Youth Auxiliary.

  3. HA, good luck stealing my ATM password. I can’t remember it anyway!
    That’s why I have it written on the back of my hand (my grocery list is on the other one).

  4. does this mean malicious users are more capable of getting usable information through a BCI than application/game programmers?

  5. Can we just admit that waterboarding is only to provide powerful sadists with sexual gratification?

  6. As somebody whose job is to develop BCIs, I find this article (more precisely, its interpretations by blogs) hugely misleading. They use a process that has been known for decades (P300) with hardware (the Emotiv) that is anything but suited to this paradigm, since its electrodes are in unsuitable positions. P300 works well with people who have trained for it over extended periods of time, and it is very easy to thwart its results (for example, by clenching your teeth).

    1. In a world where BCI manufacturers all take care not to put their electrodes in “unsuitable” positions and where all consumers educate themselves about security risks and commit to good security practices, this might all be irrelevant. 

      Everything we know about both manufacturers and consumers suggests that this will not be the case.

      If BCI devices become very common, it would only take a small subset of manufacturers and a small subset of users practicing poor security to open a billion-dollar hole.
