A glove equipped with a network of sensors that, paired with machine learning, can be used to identify individual objects, estimate their weights and characterize the tactile feedback generated while manipulating objects is reported in a paper published this week in Nature. This strategy may aid the future design of prosthetics, robotic tools and human-robot interactions.
Humans can grasp and feel objects while simultaneously applying the correct amount of force. Such sensory feedback is challenging to engineer in robots. In recent years, computer-vision-based grasping strategies have progressed with the help of emerging machine learning tools. However, platforms that rely on tactile information are lacking.
Subramanian Sundaram and colleagues designed a simple, low-cost (US$10), scalable tactile glove that covers the full hand with 548 sensors addressed by 64 conducting-thread electrodes. The sensor array consists of a force-sensitive film addressed by a network of conducting threads. Each point of overlap between the electrodes and the film is sensitive to perpendicular forces, which are read out as changes in the electrical resistance through the film. The authors recorded a large-scale dataset of tactile maps by wearing the glove while manipulating objects with a single hand. The dataset captures the spatial correlations and correspondences between finger regions that represent the tactile signatures of the human grasp.
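The readout principle of such a crosspoint array can be sketched in a few lines. The grid size, the toy sensor and the resistance-to-pressure mapping below are illustrative assumptions, not details from the paper; the only idea taken from the text is that the resistance of the force-sensitive film at each electrode overlap drops as normal force is applied.

```python
# Hypothetical sketch of scanning a crosspoint resistive sensor array.
# ROWS, COLS and toy_sensor are made up for illustration; the real
# glove spreads 548 sensing points over 64 conducting-thread electrodes.
ROWS, COLS = 4, 4

def scan_frame(read_resistance):
    """Return one tactile frame as a 2D list of pressure-like values.

    read_resistance(r, c) -> resistance in ohms at the crosspoint of
    row electrode r and column electrode c; lower resistance in the
    force-sensitive film corresponds to higher applied pressure.
    """
    frame = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            resistance = read_resistance(r, c)
            # Use conductance (1/R) as the signal: it rises
            # monotonically with applied perpendicular force.
            row.append(1.0 / resistance)
        frame.append(row)
    return frame

# Toy sensor: one pressed crosspoint (low resistance) on an idle array.
def toy_sensor(r, c):
    return 1e3 if (r, c) == (1, 2) else 1e6  # ohms

frame = scan_frame(toy_sensor)
peak = max(max(row) for row in frame)
```

Scanning row by row in this way turns the whole array into a pressure image, which is what makes image-oriented learning methods applicable downstream.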
Using the glove, the authors recorded tactile videos while interacting with a set of 26 objects with a single hand for more than five hours. They then trained a deep learning network on the recorded tactile maps and found that it could identify the various objects from the way they were held.
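The classification step can be illustrated with a deliberately minimal stand-in: the authors trained a deep convolutional network, but the same idea, mapping a flattened tactile frame to an object label, can be shown with a nearest-centroid classifier on synthetic data. Every value and object name below is invented for the example.

```python
# Minimal stand-in for tactile object recognition: nearest-centroid
# classification of flattened tactile frames. Synthetic data only;
# the paper's actual model is a deep convolutional network.

def centroid(frames):
    """Element-wise mean of a list of equal-length flattened frames."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def train(labelled_frames):
    """labelled_frames: dict of object name -> list of flattened frames."""
    return {name: centroid(fs) for name, fs in labelled_frames.items()}

def predict(model, frame):
    """Return the object whose centroid is closest to this frame."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda name: sqdist(model[name], frame))

# Invented grasp signatures: a "mug" loads the first two sensor
# regions, a "pen" loads the last two.
data = {
    "mug": [[0.9, 0.8, 0.1, 0.0], [1.0, 0.7, 0.2, 0.1]],
    "pen": [[0.1, 0.0, 0.9, 1.0], [0.2, 0.1, 0.8, 0.9]],
}
model = train(data)
guess = predict(model, [0.8, 0.9, 0.0, 0.1])  # a mug-like grasp
```

A convolutional network improves on this by exploiting the spatial layout of the sensor grid rather than treating each sensor independently, which is why the 2D tactile-map representation matters.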