New RGB-Based Robotic Grasping Method for Transparent and Reflective Objects

by Pieter Werner

Researchers at Tokyo University of Science have developed a robotic grasping method designed to improve the handling of objects that are difficult for conventional 3D measurement systems to detect, including transparent plastics, glass and reflective metal items.

The method, called HEAPGrasp, uses images from a single hand-eye RGB camera rather than relying on depth sensors, which can struggle with transparent and highly reflective surfaces. The system combines semantic segmentation with a shape-from-silhouette reconstruction process to estimate the shape and position of objects from multiple viewpoints, allowing a robot to plan and execute grasps based on image-derived contours.

The work was led by Associate Professor Shogo Arai and Ginga Kennis of the Department of Mechanical and Aerospace Engineering at the university. According to the researchers, the approach is intended for material-handling applications in sectors such as manufacturing, logistics and food service, where robots are increasingly used to move parts, packages, ingredients and dishes.

In the system, objects are first separated from the background in RGB images through semantic segmentation. The researchers used DeepLabv3+ with ResNet-50 for this stage. The extracted silhouettes are then processed using shape from silhouette, a reconstruction technique that estimates a 3D volume by intersecting silhouette-based projections captured from different angles. Because the method depends on image outlines rather than depth readings, it is less affected by optical properties such as transparency and reflectivity.
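The article does not include implementation details beyond these component names, but the two-stage idea can be illustrated with a minimal sketch. The snippet below assumes a set of RGB views with known camera intrinsics and poses. It uses torchvision's DeepLabv3 model with a ResNet-50 backbone as a stand-in for the DeepLabv3+ segmenter named in the study (torchvision does not ship the v3+ variant), then carves a voxel grid by keeping only the voxels whose projections fall inside every silhouette. The class index, threshold, grid resolution and workspace bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np
import torch
import torchvision

# --- 1. Silhouette extraction (placeholder for the paper's DeepLabv3+ / ResNet-50 stage) ---
# Requires torchvision >= 0.13 for the `weights` argument; input is assumed to be a
# normalized float tensor of shape (3, H, W).
seg_model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

def silhouette(image_tensor, target_class=1, threshold=0.5):
    """Return a binary foreground mask (H, W) for one RGB view."""
    with torch.no_grad():
        logits = seg_model(image_tensor.unsqueeze(0))["out"][0]   # (num_classes, H, W)
    probs = logits.softmax(dim=0)
    return (probs[target_class] > threshold).cpu().numpy()

# --- 2. Shape from silhouette (voxel carving across viewpoints) ---
def carve(masks, intrinsics, extrinsics, bounds, resolution=64):
    """
    masks:      list of binary silhouettes, one per viewpoint
    intrinsics: list of 3x3 camera matrices K
    extrinsics: list of 3x4 world-to-camera matrices [R | t]
    bounds:     ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the workspace
    Returns a boolean voxel grid that is True only where every silhouette
    agrees the object could be.
    """
    axes = [np.linspace(lo, hi, resolution) for lo, hi in bounds]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous points
    occupied = np.ones(len(pts), dtype=bool)

    for mask, K, Rt in zip(masks, intrinsics, extrinsics):
        cam = Rt @ pts.T                                   # 3 x N points in camera frame
        in_front = cam[2] > 1e-6
        proj = K @ cam
        u = np.round(proj[0] / np.clip(proj[2], 1e-6, None)).astype(int)
        v = np.round(proj[1] / np.clip(proj[2], 1e-6, None)).astype(int)
        h, w = mask.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside = np.zeros(len(pts), dtype=bool)
        inside[valid] = mask[v[valid], u[valid]]
        occupied &= inside          # a voxel survives only if it lands inside every silhouette

    return occupied.reshape(resolution, resolution, resolution)
```

Because the carving step only asks whether a pixel lies inside an outline, it never reads a depth value, which is why transparent or mirror-like surfaces that defeat depth sensors do not break the reconstruction.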

The researchers also introduced a deep learning-based next-pose planning system to reduce the number of camera movements required during measurement. While additional viewpoints can improve reconstruction accuracy, they also add time and computational cost. The planning system is intended to identify camera trajectories that improve measurement efficiency while limiting unnecessary motion.
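The article does not describe the planner's architecture, but its role can be sketched with a simple greedy stand-in: score a set of candidate camera poses by how much of the still-ambiguous volume each would let the system carve away, and move only if the best candidate promises enough new information. The candidate generation, scoring proxy and stopping rule below are illustrative assumptions, not the learned next-pose planner evaluated in the study.

```python
import numpy as np

def expected_carving_gain(occupied_voxels, candidate_pose, project_fn):
    """
    Rough proxy for how informative a candidate viewpoint is: count the
    still-occupied voxels that would be visible from that pose, since only
    visible voxels can be carved away by its silhouette.
    `project_fn(points, pose)` is an assumed helper returning a boolean
    visibility flag per voxel.
    """
    visible = project_fn(occupied_voxels, candidate_pose)
    return int(visible.sum())

def plan_next_pose(occupied_voxels, candidate_poses, project_fn, min_gain=50):
    """Greedy next-view selection: return None when no candidate is worth the extra motion."""
    gains = [expected_carving_gain(occupied_voxels, p, project_fn)
             for p in candidate_poses]
    best = int(np.argmax(gains))
    if gains[best] < min_gain:
        return None          # reconstruction judged good enough; stop moving the camera
    return candidate_poses[best]
```

A learned planner replaces this hand-written score with a network that predicts useful poses directly, which is how the reported system shortens trajectories instead of sweeping the camera through a fixed set of viewpoints.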

The team evaluated the method on a real robotic system using 20 scenes containing five objects each. The test scenes included combinations of transparent, opaque and specular objects. According to the reported results, the system achieved a 96% grasping success rate across objects with different optical properties. The researchers said the approach also reduced camera trajectory length by 52% and execution time by 19% compared with a baseline method in which the camera moves around the scene for 3D measurement.

The study was published online in IEEE Robotics and Automation Letters and is scheduled to be presented at the 2026 IEEE International Conference on Robotics and Automation. The researchers said the method could be integrated into existing robotic systems, with the aim of improving autonomous handling in environments where conventional sensing methods face limitations.

Image credit: Associate Professor Shogo Arai from Tokyo University of Science, Japan
