Yaskawa Electric and SoftBank will collaborate on the social implementation of what the companies describe as Physical AI. The initiative will combine Yaskawa’s AI-based robotics technologies with SoftBank’s AI-RAN and MEC platforms to develop robots capable of operating in environments with frequent human activity. The companies plan to present the first results of the collaboration at the International Robot Exhibition in Tokyo.
According to both companies, the collaboration is intended to address automation needs in settings where robots must respond to changing conditions, interruptions and tasks that cannot be predetermined. These environments include offices, hospitals, schools and retail facilities. The companies state that Japan’s tightening labor market and the increasing complexity of business operations are contributing to greater demand for automation that can function in these locations.
Yaskawa will contribute its motion control and industrial robotics technologies, including its autonomous robot platform MOTOMAN NEXT, which incorporates AI for on-device decision-making. SoftBank will provide its AI-RAN technology, which integrates AI with radio access networks, and its MEC systems for low-latency processing of data from cameras, sensors and external building systems. By combining these technologies, the companies intend to construct a system that integrates and analyzes environmental data in real time and issues operating instructions to robots from outside the robot itself.
As the first phase of the partnership, the companies have developed a use case for an office-oriented Physical AI robot. This system links Yaskawa’s robots with SoftBank’s MEC-based AI and a virtual building management system. According to the companies, the setup allows the robot to assess building conditions and perform multiple tasks, including retrieving specific items and responding to unexpected events. The system is structured around a building management platform, a MEC-based AI system that generates task instructions, and an on-robot AI that converts these instructions into physical actions. SoftBank developed the MEC-based vision-language model that issues task-level instructions, while Yaskawa developed the on-robot vision-language action model that determines the robot’s movements.
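The division of labor described above can be pictured as a simple pipeline: a building management feed supplies environmental data, a MEC-side model turns it into a task-level instruction, and an on-robot model turns that instruction into physical actions. The sketch below is purely illustrative and not the companies' actual design; all names, data fields and mapping rules are hypothetical, and the real systems use vision-language and vision-language-action models rather than hand-written rules.

```python
# Illustrative sketch of the three-layer flow described in the article.
# Everything here is a hypothetical stand-in: the real MEC-side component
# is a vision-language model and the on-robot component is a
# vision-language-action model, not rule-based functions.

from dataclasses import dataclass


@dataclass
class BuildingState:
    """Simplified stand-in for data from cameras, sensors and building systems."""
    zone: str
    event: str  # e.g. "item_request" or "spill_detected"


def mec_task_planner(state: BuildingState) -> str:
    """MEC-side step: map building conditions to a task-level instruction."""
    if state.event == "item_request":
        return f"fetch item from {state.zone}"
    if state.event == "spill_detected":
        return f"inspect spill in {state.zone}"
    return "standby"


def on_robot_policy(instruction: str) -> list[str]:
    """On-robot step: convert a task-level instruction into motion primitives."""
    if instruction.startswith("fetch"):
        zone = instruction.rsplit(" ", 1)[-1]
        return [f"navigate:{zone}", "grasp:item", "navigate:base", "release:item"]
    if instruction.startswith("inspect"):
        zone = instruction.rsplit(" ", 1)[-1]
        return [f"navigate:{zone}", "scan:area", "report:status"]
    return ["idle"]


if __name__ == "__main__":
    state = BuildingState(zone="meeting-room-3", event="item_request")
    instruction = mec_task_planner(state)
    print(instruction)            # task-level instruction from the MEC layer
    print(on_robot_policy(instruction))  # motion primitives on the robot
```

The point of the structure is the one the article describes: heavier, environment-wide reasoning runs off-robot on MEC infrastructure with low-latency access to building data, while the robot itself only has to translate compact instructions into motion.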