In the world of prosthetics, we’re still at the stage where a person has to instruct the prosthetic to first do one thing, then another, then another. As University of Waterloo Ph.D. researcher Brokoslaw Laschowski puts it, “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”
But Laschowski and his fellow researchers have been developing a device that uses wearable cameras and deep learning to infer what task the exoskeleton wearer is engaged in, perhaps walking down a flight of stairs or along a street, and then carries them through it, a bit like programming a destination into a self-driving car.
A @RadioSpectrum1 conversation. Available on Spotify and @IEEESpectrum.