
Robot Can Grab Objects from Human Hands

by Pieter Werner

Researchers have developed a new algorithm that allows robots to grasp objects directly from human hands. The algorithm, named Robot Grasping Reinforcement Learning (RGRL), addresses the growing need for robots to interact seamlessly and safely with people in shared environments.

As artificial intelligence, sensor technology, and robot control systems have evolved, there has been a noticeable shift from machine-centered to human-centered collaboration, especially in complex scenarios. Traditional industrial and service robots are designed to avoid human contact for safety reasons. However, this approach limits their ability to perform tasks that require direct interaction with humans, such as grasping objects from a person’s hand.

RGRL aims to solve the problem of robots safely grasping objects from a user’s hand. This task requires a high degree of precision and safety considerations to prevent harm to the human user. Unlike the typical task of grasping stationary objects, retrieving items from a person’s hand involves complex recognition, positioning, and real-time adjustments based on the user’s actions.

How RGRL Works

Deep reinforcement learning forms the backbone of RGRL, allowing the robot to learn grasping strategies through trial and error while avoiding contact with the user's hand. A well-known challenge in reinforcement learning is its low sample efficiency and slow convergence: training typically requires thousands of trial-and-error iterations, which is costly and time-consuming on a real robot.
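To make that trial-and-error loop concrete, here is a minimal, purely illustrative Python sketch of an agent repeatedly acting in an environment and collecting rewards. The toy environment and the naive policy below are stand-ins of our own, not the RGRL implementation.

```python
import random

# Illustrative only: a toy stand-in for the reinforcement learning
# trial-and-error loop. Not the RGRL environment or policy.

class ToyGraspEnv:
    """1-D toy task: move a gripper toward a target position."""
    def reset(self):
        self.gripper, self.target = 0.0, random.uniform(-1.0, 1.0)
        return self.target - self.gripper      # observation: signed distance

    def step(self, action):
        self.gripper += action
        dist = abs(self.target - self.gripper)
        reward = -dist                          # closer is better
        done = dist < 0.05                      # "grasp" succeeds when close enough
        return self.target - self.gripper, reward, done

env = ToyGraspEnv()
for episode in range(3):                        # real training needs thousands of episodes
    obs, total = env.reset(), 0.0
    for _ in range(50):
        action = max(-0.1, min(0.1, obs))       # naive proportional policy
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    print(f"episode {episode}: return {total:.2f}")
```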

To overcome these challenges, RGRL takes a simulation-first approach. The algorithm first trains the robot to grasp an object from a user's hand in a virtual environment, where tens of thousands of learning episodes can be run quickly and safely. Domain randomization, which varies simulation parameters from episode to episode, is used to bridge the gap between simulated and real scenarios. In addition, a multi-objective reward function accelerates the convergence of the reinforcement learning algorithm.
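The sketch below shows what these two ideas can look like in code. The parameter ranges and reward weights are illustrative assumptions, not values from the RGRL work.

```python
import random

# A minimal sketch of domain randomization and a multi-objective reward.
# All ranges and weights below are assumptions for illustration.

def randomize_domain(rng: random.Random) -> dict:
    """Domain randomization: vary simulation parameters each episode so a
    policy trained in simulation transfers better to the real world."""
    return {
        "object_mass":  rng.uniform(0.05, 0.5),   # kg, assumed range
        "hand_height":  rng.uniform(0.8, 1.4),    # m, assumed range
        "hand_jitter":  rng.uniform(0.0, 0.02),   # m of per-step hand motion
        "camera_noise": rng.uniform(0.0, 0.01),   # assumed sensor noise level
    }

def multi_objective_reward(dist_to_object: float,
                           dist_to_hand: float,
                           grasp_success: bool) -> float:
    """Multi-objective reward: dense shaping terms speed up convergence
    compared with a single sparse success signal."""
    r = -1.0 * dist_to_object        # pull the gripper toward the object
    if dist_to_hand < 0.05:          # penalize getting too close to the hand
        r -= 10.0
    if grasp_success:                # sparse bonus for a completed grasp
        r += 100.0
    return r

rng = random.Random(0)
print(randomize_domain(rng))
print(multi_objective_reward(0.2, 0.3, False))
```

Combining a dense distance term with a safety penalty and a sparse success bonus gives the learner a useful gradient long before the first successful grasp, which is what makes convergence faster than with a success-only reward.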

Practical Application

RGRL's practicality lies in its low computational and hardware requirements. Because domain randomization is used during training, no manual data labeling is needed. The algorithm requires only a 3D model of the target object to be imported into the simulation software, after which it learns an appropriate motion path automatically. The only additional hardware is a Leap Motion sensor for hand tracking.
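The workflow can be pictured as three steps: import the 3D model, train in simulation, and execute while tracking the user's hand. The sketch below uses hypothetical stand-in functions (load_mesh, simulate_training, read_hand_pose, execute_path); in particular, read_hand_pose stands in for whatever hand-tracking SDK the sensor provides and is not a real Leap Motion API call.

```python
# A hedged sketch of the deployment flow. Every name here is a
# hypothetical stand-in, not the RGRL code or the Leap Motion SDK.

def load_mesh(path: str) -> str:
    """Stand-in for importing the object's 3D model into the simulator."""
    return f"mesh:{path}"

def simulate_training(mesh: str) -> list[tuple[float, float, float]]:
    """Stand-in for the simulated RL phase; returns a learned motion path."""
    return [(0.0, 0.0, 0.5), (0.1, 0.0, 0.3), (0.15, 0.0, 0.25)]

def read_hand_pose() -> tuple[float, float, float]:
    """Stand-in for a hand-tracking sensor reading (e.g. palm position)."""
    return (0.15, 0.0, 0.25)

def execute_path(path, hand):
    """Stand-in for sending the path to the robot, offset to the live hand."""
    for waypoint in path:
        print("move to", tuple(round(w + h, 3) for w, h in zip(waypoint, hand)))

mesh = load_mesh("mug.stl")           # the only input the workflow requires
path = simulate_training(mesh)        # offline: learn in simulation
execute_path(path, read_hand_pose())  # online: adjust to the tracked hand
```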

The algorithm has been evaluated in both simulated and real-world scenarios, demonstrating its effectiveness and potential for broad application in various industries. This development not only enhances the capabilities of robots in terms of safety and efficiency but also opens new avenues for human-robot interaction in everyday life.
