
Research: sensitive robots

by Pieter Werner

Several research teams have recently published findings on giving robots a sense of touch. A Korean research team has developed an electronic skin that senses touch much as human skin does. And a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) has introduced a soft haptic sensor that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. Both groups work toward the same goal: robots that can feel their environment as accurately as humans and animals do.

In a paper published in Nature Machine Intelligence, the team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are.

The thumb-shaped sensor is made of a soft shell built around a lightweight, stiff skeleton. The skeleton holds up the structure much as bones stabilize soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, giving it an opaque greyish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera that records colorful images illuminated by a ring of LEDs.

Neural network

When an object touches the sensor’s shell, the color pattern seen inside the sensor changes. The camera records images many times per second and feeds this data to a deep neural network. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out exactly where the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what the scientists call a force map: a force vector for every point on the three-dimensional fingertip.
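
To make the pipeline concrete, here is a minimal sketch of how a vision-based force estimator of this kind might be structured. It is an illustration only, not the published Insight architecture: the network shape, the image size, and the three-channel force-map output are all assumptions.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Minimal sketch of a vision-to-force-map model (hypothetical, not
    the actual Insight network): one interior camera frame goes in, a
    per-location 3D force vector comes out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Three output channels: the (fx, fy, fz) components of the force
        # estimated at each spatial location on the fingertip surface.
        self.head = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, frame):              # frame: (batch, 3, H, W)
        return self.head(self.encoder(frame))

net = ForceMapNet()
frame = torch.rand(1, 3, 64, 64)           # one LED-lit interior image
force_map = net(frame)                     # (1, 3, 64, 64): force vector per pixel
```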

But how does such a sensor learn? Huanbo Sun, one of the MPI-IS researchers, designed a testbed to generate the training data the machine-learning model needs to learn the correlation between changes in raw image pixels and the applied forces. The testbed probes the sensor all around its surface and records the true contact force vector together with the camera image from inside the sensor. In this way, about 200,000 measurements were generated. Collecting the data took nearly three weeks; training the machine-learning model took another day. Surviving this long experiment under so many different contact forces helped prove the robustness of Insight’s mechanical design, and tests with a larger probe showed how well the sensing system generalizes.
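
A training loop over such probe data could look roughly like the sketch below. The dataset size, the small stand-in model, the loss function, and the optimizer settings are assumptions chosen for illustration; only the pairing of interior images with ground-truth forces comes from the article.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the ~200,000 recorded pairs of
# (interior camera image, ground-truth force map from the probe).
images = torch.rand(1000, 3, 64, 64)
targets = torch.rand(1000, 3, 64, 64)
loader = DataLoader(TensorDataset(images, targets), batch_size=32, shuffle=True)

# A tiny assumed model; in practice this would be the force-map network.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()               # penalize per-location force errors

for epoch in range(3):
    for frame, target in loader:
        optimizer.zero_grad()
        loss = loss_fn(net(frame), target)
        loss.backward()
        optimizer.step()
```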

Skin

Another significant advance comes from a POSTECH research team led by Professor Unyong Jeong and Ph.D. candidate Taeyeong Kim (Department of Materials Science and Engineering), which, in collaboration with Professor Sung-Phil Kim and Ph.D. candidate Jaehun Kim (Department of Biomedical Engineering) at UNIST, has developed an electronic skin that can sense touch just as humans do.

Conventional electronic skins could only process tactile information by sequentially measuring the electric signals coming from the vast number of pixels configured in the sensor. Because densely packed pixels take a long time to measure one by one, it has been difficult to create an electronic skin with high spatial resolution that responds immediately to stimuli.
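
A back-of-the-envelope calculation shows why sequential readout becomes a bottleneck; the array size and per-pixel measurement time below are invented illustrative figures, not values from the study.

```python
# Illustrative only: sequential readout time grows with pixel count.
pixels_per_side = 100                  # hypothetical 100 x 100 sensor array
t_per_pixel = 100e-6                   # assumed 100 microseconds per measurement
frame_time = pixels_per_side ** 2 * t_per_pixel
print(f"one full scan takes {frame_time:.2f} s")   # 1.00 s per frame
# Doubling the resolution quadruples the scan time, which is why densely
# packed pixels cannot respond immediately when read out one at a time.
```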

Signal pattern 

The skin’s sensory receptors generate a spectrum of spike signals, in the form of electric potentials, in response to an external stimulus, and the brain recognizes the stimulus by analyzing the signal pattern. Taking a hint from this signal generation and recognition mechanism of the skin’s sensory system, the researchers developed an artificial sensory receptor that generates spike signals on its own, and created an electronic skin that sends all signals simultaneously so they can be analyzed in real time.
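
The general principle behind such receptors is often described as rate coding: a stronger stimulus produces a denser spike train. The toy snippet below illustrates only that general idea; the actual spike shape and generation circuit of the artificial receptor are not detailed in the article.

```python
import numpy as np

def spike_train(stimulus, duration=0.1, fs=10_000, max_rate=200.0):
    """Toy rate coding (illustrative): stimulus strength in [0, 1] sets
    the firing probability, so pressing harder yields more spikes."""
    n = int(duration * fs)
    p = stimulus * max_rate / fs       # per-sample spike probability
    return (np.random.rand(n) < p).astype(int)

light_touch = spike_train(0.2)
firm_press = spike_train(0.9)
print(light_touch.sum(), "spikes vs.", firm_press.sum(), "spikes")
```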

Since a biosignal lacks location information, it is difficult to recognize a dynamic external stimulus at high spatial resolution. To overcome this limitation, the researchers showed for the first time that the artificial spike signals can be made to carry position information. Equipped with this functionality, the e-skin can extract spatial information, such as position and motion trace, and temporal information, such as speed and dynamic contact area. Because all the artificial receptors in an e-skin transmit their signals through a single pair of measuring electrodes, the electrode structure is simpler than in conventional e-skins. Applying the technology to an actual robot, the researchers confirmed that the artificial skin reacts to external stimuli as humans do.
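
One way to picture position-encoded signals sharing a single electrode pair is frequency labeling: each receptor is tagged with its own characteristic frequency, every signal is summed onto one line, and the spectrum reveals which positions were touched. The sketch below uses this analogy purely for illustration; the sampling rate, frequencies, and decoding threshold are assumptions, and the actual POSTECH encoding may differ.

```python
import numpy as np

fs = 10_000                            # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)          # 100 ms observation window

# Hypothetical: each receptor position carries its own signature frequency.
position_freq = {"pos_0": 500, "pos_1": 900, "pos_2": 1400}

# A stimulus touches pos_0 and pos_2; both receptors emit onto the same
# shared pair of electrodes, so the line carries the superposition.
line_signal = (np.sin(2 * np.pi * position_freq["pos_0"] * t)
               + np.sin(2 * np.pi * position_freq["pos_2"] * t))

# Decoding: peaks in the spectrum identify the stimulated positions.
spectrum = np.abs(np.fft.rfft(line_signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for pos, f in position_freq.items():
    touched = spectrum[np.argmin(np.abs(freqs - f))] > len(t) / 4
    print(pos, "touched" if touched else "idle")
```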

Illustration: (a) Scheme of the artificial receptors generating the position-encoded spike spectrum, which contains the position information, based on spike-signal generation in the human somatosensory system; (b) the newly developed e-skin, worn on a finger; (c) a robotic arm equipped with the e-skin instantaneously detects and grasps a spherical metal ball. Credit: POSTECH

Photo: A thumb-shaped sensor with a camera hidden inside is trained to infer haptic contact information. Credit: Max Planck Institute for Intelligent Systems

 
