Researchers Simplify Robot Control

by Pieter Werner

Researchers from the Massachusetts Institute of Technology (MIT) and Stanford University have developed a new machine learning technique that significantly simplifies the control of robots, such as drones and autonomous vehicles, leading to improved performance in dynamic environments where conditions can change rapidly.

For instance, with this technique, an autonomous vehicle could learn to handle slippery road conditions, a robot could maneuver different objects in space, or a drone could precisely follow a downhill skier, even amidst strong winds.

Navid Azizan, an assistant professor at MIT, explains that the technique goes beyond merely learning the dynamics of the system. “We’re also learning about the underlying structures that make for effective control. This allows us to create controllers that perform much better in the real world.”

The novel aspect of this technique is that it can extract an effective controller directly from the learned model, unlike other machine learning methods, which require a controller to be derived or learned separately. This makes the approach not only simpler but also faster, as it requires less data than other methods.
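To make the idea of a control-oriented model more concrete, here is a minimal sketch in Python of one common structure of this kind: a control-affine model, x_dot = f(x) + B(x) u, fitted from sampled transitions by least squares on random features. This is only an illustration under that assumption; the feature map, dimensions, and function names are invented for the example and do not reproduce the researchers' actual learned parameterization.

```python
# Illustrative sketch: fit a control-affine model x_dot ≈ f(x) + B(x) u from data.
# All names and dimensions here are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

n_x, n_u, n_feat = 2, 1, 64            # state dim, control dim, feature count
W = rng.normal(size=(n_feat, n_x))     # random projection for the feature map
b = rng.uniform(0.0, 2.0 * np.pi, n_feat)

def features(x):
    """Random Fourier features of the state."""
    return np.cos(W @ x + b)

def fit_control_affine(X, U, Xdot):
    """Least-squares fit of x_dot ≈ f(x) + B(x) u from sampled transitions.

    The regressors are the state features plus the state features scaled by each
    control channel, so the learned weights split into an f-part and a B-part.
    """
    Phi = np.stack([
        np.concatenate([features(x)] + [features(x) * ui for ui in u])
        for x, u in zip(X, U)
    ])
    Theta, *_ = np.linalg.lstsq(Phi, Xdot, rcond=None)
    Theta_f = Theta[:n_feat]                             # weights for f(x)
    Theta_B = Theta[n_feat:].reshape(n_u, n_feat, n_x)   # weights for B(x)
    f = lambda x: features(x) @ Theta_f
    B = lambda x: np.stack([features(x) @ Theta_B[j] for j in range(n_u)], axis=1)
    return f, B

# Toy training data from a damped, torque-controlled pendulum, just to exercise the fit.
X = rng.uniform(-1.0, 1.0, size=(200, n_x))
U = rng.uniform(-1.0, 1.0, size=(200, n_u))
Xdot = np.stack([np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])
                 for x, u in zip(X, U)])

f_hat, B_hat = fit_control_affine(X, U, Xdot)
x0, u0 = np.array([0.3, 0.0]), np.array([0.5])
print(f_hat(x0) + B_hat(x0) @ u0)   # learned model's prediction of x_dot at (x0, u0)
```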

Spencer M. Richards, a graduate student at Stanford University and the lead author of the research, explains that their approach is inspired by how roboticists use physics to derive simpler models for robots.

The researchers used machine learning to learn a dynamics model that is structured in a way that is useful for controlling the system. A controller can then be extracted straight from this model, eliminating the need for a separately learned controller.
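As an illustration of how a controller can be read off such a model, the sketch below applies textbook feedback linearization to a toy, fully actuated control-affine system. The dynamics f and B and the gain K are assumed stand-ins for a learned model; this generic construction is not the researchers' specific controller design.

```python
# Illustrative sketch: once dynamics are in control-affine form x_dot = f(x) + B(x) u,
# a tracking controller can be read directly off the model. Toy system for illustration.
import numpy as np

def f(x):
    """Drift term of a toy, fully actuated system, standing in for a learned model."""
    return np.array([-0.5 * x[1], 0.3 * np.sin(x[0])])

def B(x):
    """Control matrix of the toy system (square and invertible, so cancellation is exact)."""
    return np.array([[1.0, 0.0],
                     [0.2, 1.0]])

K = np.diag([2.0, 2.0])   # tracking gain, hand-tuned for this illustration

def controller(x, x_des, xdot_des):
    """Control read directly from the model: u = B(x)^{-1} (xdot_des - f(x) + K (x_des - x))."""
    return np.linalg.solve(B(x), xdot_des - f(x) + K @ (x_des - x))

# Roll out the closed loop toward a fixed setpoint with Euler integration.
x, x_des, dt = np.array([1.0, -1.0]), np.zeros(2), 0.01
for _ in range(600):
    u = controller(x, x_des, np.zeros(2))
    x = x + dt * (f(x) + B(x) @ u)
print(np.round(x, 3))   # ends close to the setpoint [0, 0]
```

Because the learned model appears inside the control law itself, improving the model directly improves tracking, which mirrors the article's point that the controller comes straight from the learned dynamics rather than from a separate learning step.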

In tests, their controller closely followed desired trajectories and outperformed all comparable methods. The method also proved data-efficient, reaching high performance with only a limited amount of training data. This could be especially useful in situations where a drone or robot needs to learn quickly in rapidly changing conditions.

This approach is broadly applicable and can be used for many types of dynamic systems, from robot arms to spacecraft operating in low-gravity environments. In the future, the researchers hope to develop models that can identify specific information about a dynamic system, leading to even better performing controllers.

This research, supported by the NASA University Leadership Initiative and the Natural Sciences and Engineering Research Council of Canada, will be presented at the International Conference on Machine Learning (ICML).
