We humans are social animals, so we take our behavioral cues from people around us. A new bit of research suggests we'll take those cues from robots, too.
In their new paper, "The Robot Made Me Do It" (paywalled here), a group of scientists describe an experiment in which subjects were asked to push the spacebar on a keyboard to inflate a balloon on-screen. Every push of the spacebar earned them one penny, but each push also carried a random chance of popping the balloon and wiping out that balloon's earnings. Subjects could stop at any point and cash in their winnings.
So basically, it's a game of risk tolerance: with each button-push, you weigh the growing payoff against the chance of losing it all.
Then they introduced a little robot into some of the games. One group of participants played the game alone. Another group played with a "Pepper" robot standing next to them in silence. A third group played with the robot providing game instructions and egging them on, asking things like "Why did you stop pumping?"
The results showed that the group encouraged by the robot took more risks, blowing up their balloons significantly more often than the other groups did — but also earning more money overall. There was no significant difference between the behaviour of participants accompanied by the silent robot and those with no robot at all.
Dr Hanoch said: "We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants' direct experiences and instincts."
This has implications, the researchers note, for our relationships with all the chatty forms of AI that talk to us every day:
Dr Hanoch concluded, "With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community."
"On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard to reach populations, such as addicts."
(Photo of the Pepper robot experiment via the University of Southampton)