Sally from New Scientist writes, "The first few months after I moved to London from New York were like a Henry James novel of social horrors. Cultural differences that had seemed so subtle from across the Atlantic were staggering up close. I couldn't go five minutes without committing some disastrous social gaffe. Why did my conversations always end in mortified silence? What was I doing wrong? Why wouldn't anyone help me? Naturally, as a tech journalist, I started looking for technologies that would solve my problem. The good news is that these do exist – augmented reality applications are coming that can help you decipher the emotional cues of the people you're talking to. The bad news is that when this stuff gets rolled out in the workplace – which it now is – unintentional oversharing might make the problem worse. This technology will be a primer in the law of unintended consequences."
When Picard and el Kaliouby were calibrating their prototype, they were surprised to find that the average person correctly interpreted only 54 per cent of Baron-Cohen's expressions on real, non-acted faces. This suggested to them that most people – not just those with autism – could use some help sensing the mood of people they are talking to. "People are just not that good at it," says Picard. The software, by contrast, correctly identifies 64 per cent of the expressions…
Picard says the software amplifies the cues we already volunteer; it does not extract information that a person is unwilling to share. It is certainly not a foolproof lie detector. When I interviewed Picard, I deliberately tried to look confused, and to some extent it worked. Still, it's hard to fool the machine for long. As soon as I became engaged in the conversation, my concentration broke and my true feelings revealed themselves again.