
Humanoids in Orbit: ESA Explores the Future of Robot-Assisted Space Missions

by Marco van der Hoeven

At the Humanoids Summit in London, Thomas Krueger, Team Lead of the Human Robot Interaction Lab at the European Space Agency (ESA), discussed the realities and prospects of humanoid robotics in space. While much of the public imagination may lean toward fully autonomous robot explorers, Krueger’s presentation emphasized the nuanced, often underestimated role of teleoperation and hybrid autonomy in near- and deep-space missions.

Krueger opened with a brief history of robots in space, beginning with the Soviet Lunokhod, a remote-controlled rover deployed to the Moon in 1970. Since then, robotic systems like the Canadarm on the International Space Station (ISS), NASA’s Mars rovers, and even humanoid torso experiments like Robonaut have demonstrated both the potential and the constraints of robotic systems beyond Earth.

However, what links these systems is not full autonomy but varying degrees of remote operation. And in the case of deep space, distance—and with it, latency—becomes the critical constraint.

Latency

As Krueger explained, latency is the defining challenge for real-time robot control in space. While operating a robot on the ISS via direct Earth communication introduces a manageable delay of under a second, controlling one on Mars means dealing with round-trip latencies of up to 40 minutes, depending on planetary alignment.
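Those figures follow directly from light-travel time. A back-of-the-envelope sketch (the distances below are approximate and vary continuously with orbital positions; the helper function is purely illustrative):

```python
# Round-trip signal delay at the speed of light for some
# illustrative Earth-to-target distances (in km).
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_minutes(distance_km: float) -> float:
    """One command-and-response cycle: out to the robot and back."""
    return 2 * distance_km / C_KM_S / 60

# The ISS orbits roughly 400 km up, so light-travel delay is
# negligible; the sub-second latency astronauts actually see comes
# from the relay infrastructure, not from physics.
print(f"ISS:        {round_trip_minutes(400):.4f} min")

# Mars at closest approach (~54.6 million km) versus near maximum
# separation (~401 million km).
print(f"Mars (min): {round_trip_minutes(54.6e6):.1f} min")
print(f"Mars (max): {round_trip_minutes(401e6):.1f} min")
```

The spread runs from a few minutes at closest approach to the better part of an hour near maximum separation, which is why any control loop that needs a human reaction within seconds is physically ruled out for Mars.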

This technological bottleneck renders direct teleoperation impractical in many space scenarios. As a result, ESA and its partners are experimenting with a layered approach: direct teleoperation where latency is low, supervised autonomy where feasible, and full autonomy only where absolutely necessary—or possible.
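That layered approach can be read as a latency-driven policy. A minimal sketch, with thresholds invented here for illustration rather than taken from ESA:

```python
def select_control_mode(round_trip_latency_s: float) -> str:
    """Pick a control regime from round-trip latency.

    Illustrative thresholds only -- a real mission would also weigh
    bandwidth, task criticality, and how capable the onboard
    autonomy actually is.
    """
    if round_trip_latency_s < 1.0:
        return "direct teleoperation"   # e.g. ground <-> low Earth orbit
    if round_trip_latency_s < 60.0:
        return "supervised autonomy"    # operator approves each step
    return "full autonomy"              # e.g. Mars surface operations

print(select_control_mode(0.5))    # sub-second link
print(select_control_mode(2400))   # Mars-scale delay
```

The point of the sketch is that the mode is chosen per link, not per mission: the same robot could be directly teleoperated from an orbiting crew vehicle while running supervised or full autonomy toward Earth.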

Testing the Chain from Orbit to Surface

To explore this hybrid model, ESA has developed several demonstrators. One of the most notable is a setup where astronauts use haptic controllers aboard a simulated spacecraft to operate a ground-based humanoid robot. The project tested how astronauts perceive haptic feedback in microgravity and how latency impacts performance. Krueger noted that astronauts tend to lose sensitivity in space, which complicates precision tasks unless the system is finely tuned.

The team also conducted a high-profile test on Mount Etna, simulating a lunar or Martian geology mission. While the astronaut was actually in a nearby hotel rather than on the ISS, the experiment successfully demonstrated that remote human-robot collaboration can work under field conditions, even with some latency.

Building Towards Autonomy

While full autonomy remains the end goal, Krueger made a case for teleoperation not as a fallback, but as a vital bridge. “Autonomy will fail or be incomplete,” he noted, “and a bit of teleoperation in the mix could get the job done.” This hybrid model is particularly attractive in scenarios where infrastructure, bandwidth, or safety concerns make human presence impossible.

Moreover, the same teleoperation datasets can serve as training inputs for machine learning systems—creating a feedback loop where human guidance improves robotic independence over time.

Humanoids

A recurring question during the session—and in the broader humanoid robotics community—is whether humanoid form factors are necessary or even advantageous in space. Krueger offered a pragmatic perspective: humanoid features are useful for “station keeping,” where robots must manipulate human-designed interfaces and tools. In such environments, arms and hands designed to mimic human movement may reduce the need to redesign infrastructure.

However, for off-world construction or mining—tasks that don’t rely on existing human-centric designs—non-humanoid systems may be more efficient. “If you want to build a canal, you can have ten humanoids with a shovel, but it would be smarter to use an autonomous excavator,” he remarked, emphasizing the need for “smart trade-offs” rather than ideology-driven design.

Modular Control, Global Data Spaces

ESA’s technical architecture reflects this modular, flexible approach. Instead of relying on traditional robotics frameworks such as ROS, the agency builds directly on the Data Distribution Service (DDS), the publish/subscribe middleware standard that also underpins ROS 2, to manage sensor integration and communications across the robot and its operators—whether they’re on Earth, in orbit, or eventually on the Moon or Mars.
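DDS is a full OMG middleware standard, but its core appeal is simple: publishers and subscribers share only a topic name, never a direct reference to each other. A toy plain-Python stand-in for that decoupling (no real DDS library involved; all names are invented here):

```python
from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Toy stand-in for DDS-style publish/subscribe. Components are
    coupled only through topic names, so an operator console on Earth
    and one in orbit can consume the same data stream unchanged."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: Any) -> None:
        # Deliver the sample to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(sample)

# A console receives joint-state samples from a robot without
# knowing anything about the robot's location or link quality.
bus = TopicBus()
received = []
bus.subscribe("robot/joint_states", received.append)
bus.publish("robot/joint_states", {"elbow": 0.42})
print(received)
```

What real DDS adds on top of this pattern, and what makes it attractive over lossy, high-latency space links, is automatic peer discovery plus per-topic quality-of-service policies such as reliability, deadlines, and history depth.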

Krueger also addressed the practical hardware limitations of space robotics. Due to radiation exposure, advanced chips and GPUs often used in AI are not suitable for space deployment. One workaround could be hosting processing-intensive tasks on a radiation-shielded data center within an orbiting spacecraft, effectively turning the craft into a localized AI command hub.

The discussion concluded with a look at the shifting business models in space robotics. While early use cases are almost exclusively driven by public sector goals—science, safety, and exploration—the growing role of private space stations and commercial lunar ventures may open up more demand-driven applications. Krueger compared this to the evolution of computing in the 1960s, transitioning from government-led projects to commercial and consumer adoption. “There’s a future,” he said, “where robotics in space goes the same way.”
