Researchers at Universidad Carlos III de Madrid (UC3M) have developed a methodology that enables a robot to learn autonomous arm movements by combining observational learning with intercommunication between its limbs. The system was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2025 and is designed to improve the ability of service robots to perform assistive tasks in domestic environments.
The research focuses on coordinating two robotic arms to operate simultaneously, a challenge in robotics due to the need for synchronized movement and collision avoidance. The UC3M team implemented the approach on ADAM (Autonomous Domestic Ambidextrous Manipulator), a robot developed to assist elderly individuals in homes or care facilities. The platform is capable of carrying out tasks such as setting and clearing a table, organizing kitchen items, delivering water or medication at scheduled times, and handing over clothing.
According to Alicia Mora, a researcher in the Mobile Robots Group at the UC3M Robotics Lab, the robot is intended to support users with routine activities that may otherwise require assistance. Ramón Barber, director of the Mobile Robots Group and professor in the university’s Department of Systems Engineering and Automation, said the primary objective is to provide help with everyday actions that can represent meaningful support for individuals with limited mobility.
The methodology presented by researchers Adrián Prados and Gonzalo Espinoza proposes training each robotic arm independently through imitation learning, allowing the robot to observe and replicate human demonstrations. After individual training, the arms coordinate using a mathematical framework known as Gaussian Belief Propagation. This system enables real-time information exchange between the limbs, allowing them to adjust movements dynamically and avoid collisions with each other or with obstacles without interrupting operation.
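The core idea of Gaussian Belief Propagation is that each arm holds a Gaussian belief about its next waypoint and exchanges messages with the other arm through shared constraint factors. The sketch below is a minimal scalar illustration of that mechanism, not UC3M's implementation: each arm's imitation-learned trajectory supplies a Gaussian prior for one waypoint, and a hypothetical soft "separation" factor (x2 − x1 = d) nudges both arms to keep a clearance d. Beliefs are kept in information form (eta, lam), with mean = eta/lam.

```python
def separation_gbp(mu1, var1, mu2, var2, d, var_f):
    """One Gaussian Belief Propagation sweep for two arm waypoints (scalar toy).

    Hypothetical setup: each arm i has a prior N(mu_i, var_i) on its next
    waypoint x_i from its learned trajectory; a soft pairwise factor
    x2 - x1 = d with variance var_f enforces a clearance between the arms.
    Works in information form: belief = N^-1(eta, lam), mean = eta / lam.
    """
    lam1, lam2, lam_f = 1 / var1, 1 / var2, 1 / var_f   # precisions
    eta1, eta2 = mu1 * lam1, mu2 * lam2                 # information vectors

    # The factor over (x1, x2) in information form is
    #   Lambda_f = lam_f * [[1, -1], [-1, 1]],  eta_f = lam_f * [-d, d].
    # Message factor -> x2: absorb x1's prior, then marginalise x1 out.
    Laa = lam_f + lam1
    m2_lam = lam_f - lam_f**2 / Laa
    m2_eta = lam_f * d + (lam_f / Laa) * (eta1 - lam_f * d)

    # Message factor -> x1 (symmetric, with the sign of d flipped).
    Lbb = lam_f + lam2
    m1_lam = lam_f - lam_f**2 / Lbb
    m1_eta = -lam_f * d + (lam_f / Lbb) * (eta2 + lam_f * d)

    # Posterior belief = prior information + incoming message.
    x1 = (eta1 + m1_eta) / (lam1 + m1_lam)
    x2 = (eta2 + m2_eta) / (lam2 + m2_lam)
    return x1, x2


# Arms planned to pass 5 cm apart; the factor asks for 15 cm of clearance,
# so both waypoints shift to open the gap without either arm stopping.
x1, x2 = separation_gbp(0.20, 0.01, 0.25, 0.01, d=0.15, var_f=0.01)
```

On this two-variable graph a single message sweep recovers the exact posterior; in a full dual-arm system the graph has many waypoints and loops, and the same messages are simply iterated until the beliefs settle, which is what allows the adjustment to run continuously without interrupting operation.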
Imitation learning allows a robot to acquire tasks by observing human actions rather than relying on manually programmed instructions. However, direct replication of recorded movements can limit adaptability if environmental conditions change. The UC3M approach addresses this by enabling movement trajectories to adjust smoothly when object positions vary. For example, if a bottle is relocated, the robot can modify its path while maintaining essential constraints such as keeping the container upright.
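The bottle example can be sketched as a simple trajectory warp. The code below is an illustrative stand-in for the adaptation step, not the UC3M method: a demonstrated end-effector path is blended toward the object's new position with a smoothly increasing weight, so the start of the motion is preserved, the end lands on the relocated object, and the demonstrated orientation (e.g. keeping the container upright) is left untouched.

```python
import numpy as np


def adapt_trajectory(demo, new_goal):
    """Warp a demonstrated path toward a relocated goal (illustrative sketch).

    demo: (T, 3) array of demonstrated end-effector positions.
    new_goal: 3-vector, the object's new position.
    Returns a (T, 3) path whose shape follows the demonstration but whose
    endpoint coincides with new_goal. Orientation constraints from the
    demonstration (e.g. 'bottle upright') are assumed to be handled
    separately and are not modified here.
    """
    demo = np.asarray(demo, dtype=float)
    offset = np.asarray(new_goal, dtype=float) - demo[-1]  # goal displacement
    s = np.linspace(0.0, 1.0, len(demo))                   # path progress
    w = 3 * s**2 - 2 * s**3   # smoothstep: 0 at start, 1 at the goal
    return demo + w[:, None] * offset


# A straight 1 m reach, with the bottle moved 20 cm sideways mid-task:
demo = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 50)
adapted = adapt_trajectory(demo, [1.0, 0.2, 0.0])
```

Because the blend weight starts at zero, motion already underway is not disturbed, and because it ends at one, the path terminates exactly at the bottle's new location.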
The robot’s operation is structured in three stages: perception, reasoning, and action. During perception, sensors collect environmental data. ADAM uses two-dimensional and three-dimensional laser sensors to measure distances and detect obstacles, along with RGB cameras equipped with depth perception to generate three-dimensional models of its surroundings. In the reasoning stage, the system processes this data to identify relevant information. The action phase involves executing movements, including coordinating both arms or navigating its base.
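The three stages can be pictured as one control tick. The skeleton below uses hypothetical names and thresholds purely for illustration: perception has already condensed the laser and RGB-D data into a small observation, reasoning selects the relevant condition, and the action stage returns the command to execute.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """Perception output: sensor data reduced to task-relevant facts."""
    obstacle_distance_m: float   # nearest obstacle, from the laser scans
    target_visible: bool         # object detected by the RGB-D cameras


def control_step(obs: Observation) -> str:
    """One perception -> reasoning -> action tick (illustrative sketch)."""
    if obs.obstacle_distance_m < 0.3:    # reasoning: safety takes priority
        return "stop_base"
    if obs.target_visible:               # action: manipulate with both arms
        return "coordinate_arms"
    return "navigate_base"               # action: reposition to search
```

In a real system each returned command would dispatch to the dual-arm coordination or base-navigation subsystems described above, and the loop would run continuously.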
Researchers are also working to enhance the robot’s contextual understanding. Alberto Méndez, a member of the Mobile Robots Group, is developing systems that incorporate generative models and artificial intelligence to allow the robot to adapt its behavior according to specific situations rather than relying solely on predefined knowledge bases.
ADAM currently serves as an experimental platform with an estimated cost of between €80,000 and €100,000. The research team indicates that further technological development could reduce costs over time, with projections suggesting that similar robots could become more accessible for domestic use within 10 to 15 years.
The project is positioned within broader efforts to address demographic changes associated with population aging. The researchers state that assistive robotics may contribute to supporting elderly individuals as the proportion of older adults increases and the availability of caregivers declines.
Photo credit: UC3M
