This self-walking exoskeleton infers your destination and takes you there

While researchers have made great strides (sorry) in robotic exoskeletons that can help people with mobility challenges, the wearer still needs to control the device. "Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode," explains University of Waterloo Ph.D. researcher Brokoslaw Laschowski. Now, he and his colleagues are developing ExoNet, a system of wearable cameras and deep learning technology that infers where the wearer wants to go and determines what steps the exoskeleton should take to get them there. From IEEE Spectrum:

Steven Cherry (IEEE Spectrum): A press release compares your system to autonomous cars, but I would like to think it's more like the Jean-Paul Sartre example, where instead of making micro-decisions, the only decisions a person wearing a robotic exoskeleton has to make are at the level of perception and intention. How do you think of it?

Brokoslaw Laschowski: Yeah, I think that's a fair comparison. Right now, we rely on the user to communicate their intent to these robotic devices. It is my contention that there is a certain level of cognitive demand and inconvenience associated with that. So by developing autonomous systems that can sense and decide for themselves, hopefully we can lessen that cognitive burden so that the device is essentially controlling itself. So in some ways, it's similar to the idea of an autonomous vehicle, but not quite[…]

Steven Cherry: Well, I personally would be able to use one of these devices only if the system can discern my intent when I change my mind, you know, like, "oh, I forgot my keys" and go back in the house, which I personally do about, you know, maybe 10 times a day.

Brokoslaw Laschowski: Right now, there is a bit of a difference between recognizing the environment and recognizing the user's intent. They're related, but they're not necessarily the same thing. So, Steven, you can imagine your eyes as you're walking. They are able to sense a car as you're walking towards it. There is … If you're standing outside, you can imagine that as you get closer and closer to that car, one might infer that you want to get in the car. But not necessarily. Just because you see something, it doesn't necessarily mean that you want to go and pursue that thing.

This is kind of the same case in walking, and this comes back to your opening statement: as somebody is approaching a staircase, for example, our system is able to sense and classify those staircases, but that doesn't necessarily mean that the user then wants to climb those stairs. But there is a possibility, and as you get closer to that staircase, the probability of climbing the stairs increases. The good thing is that we want to use what's known as multi-sensor data fusion, where we combine the predictions from the camera system with the sensors that are on board. The fusion of these sensors will give us a more complete understanding of what the user is currently doing and what they might want to do in the next step.
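To make the multi-sensor fusion idea concrete, here is a minimal sketch of how camera-based environment classification might be combined with onboard sensor predictions into a single belief over locomotion modes. The mode names, the log-linear pooling rule, the distance-based weighting, and the `fuse_predictions` helper are all illustrative assumptions for this sketch, not details of Laschowski's actual ExoNet system.

```python
# Illustrative sketch only: combining camera-based environment classification
# with onboard (e.g., IMU) predictions into one belief over locomotion modes.
# Mode names, weights, and the distance heuristic are assumptions, not ExoNet.

import numpy as np

MODES = ["level-ground walking", "stair ascent", "stair descent"]


def fuse_predictions(camera_probs, imu_probs, distance_to_stairs_m):
    """Fuse two probability vectors over MODES via log-linear pooling.

    The camera's vote is weighted more heavily as the detected staircase
    gets closer, mirroring the idea that the probability of climbing
    increases as the user approaches the stairs.
    """
    camera_probs = np.asarray(camera_probs, dtype=float)
    imu_probs = np.asarray(imu_probs, dtype=float)

    # Hypothetical weighting: camera weight rises toward 1 as distance -> 0.
    camera_weight = 1.0 / (1.0 + max(distance_to_stairs_m, 0.0))

    # Weighted geometric combination of the two predictions, renormalized.
    fused = (camera_probs ** camera_weight) * (imu_probs ** (1.0 - camera_weight))
    return fused / fused.sum()


if __name__ == "__main__":
    # Camera sees a staircase ahead; the IMU still reports steady level walking.
    camera_probs = [0.25, 0.70, 0.05]
    imu_probs = [0.80, 0.15, 0.05]

    for distance in (5.0, 2.0, 0.5):  # metres to the staircase
        belief = fuse_predictions(camera_probs, imu_probs, distance)
        top_mode = MODES[int(np.argmax(belief))]
        readable = {m: round(float(p), 2) for m, p in zip(MODES, belief)}
        print(f"{distance:4.1f} m -> {readable} (most likely: {top_mode})")
```

The distance term only encodes the intuition from the interview: as the wearer gets closer to the staircase, the camera's evidence counts for more, so the fused probability of stair ascent rises even while the motion sensors still report level walking.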

Listen to the full interview at IEEE Spectrum.

Image: University of Waterloo/Mobile Research Group