In his CES 2026 keynote, NVIDIA CEO Jensen Huang argued that robotics is moving into a new phase as AI expands from software that lives on screens to systems that can act in the real world. He repeatedly framed robots, autonomous vehicles, and industrial automation as part of the same transition: “physical AI,” built on simulation, synthetic data, and edge computing.
Huang described physical AI as a step beyond language and vision models: systems that can handle the “common sense” rules of the world—how objects move, collide, fall, or persist when out of view. “The question is how do you take something that is intelligent inside a computer … to something that can interact with the world,” he said, adding that robots need to learn concepts like object permanence, causality, friction, gravity, and inertia.
A robotics stack built around training, inference, and simulation
A central theme of Huang’s robotics message was that building capable machines requires more than a single model. He described a three-part computing structure for physical AI: training systems to build models, inference systems to run them on machines, and simulation systems to test behavior safely and repeatedly.
“This basic system requires three computers,” Huang said. One is used “for training the AI models.” A second is used “to inference the models,” which he called “essentially a robotics computer that runs in a car or runs in a robot.” The third is “designed for simulation,” which he described as foundational to NVIDIA’s approach: “Simulation is at the heart of almost everything NVIDIA does.”
In that framework, Huang positioned NVIDIA Omniverse as the company’s physically based digital-twin environment, with Isaac Sim and Isaac Lab as its robotics simulation tools. After a demonstration segment featuring robots on stage, he summed up the role of simulation bluntly: “That’s how you learn to be a robot.”
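The workflow behind that line is recognizable from reinforcement-learning practice: step a physics simulation, collect trajectories, and hand them to a trainer, at a scale real-world trials cannot match. The sketch below is a toy, single-environment illustration of that loop; it is not Isaac Sim or Isaac Lab code, and every name in it is hypothetical.

```python
# Toy illustration of the train-in-simulation pattern described above.
# The environment and rollout loop are invented stand-ins, not NVIDIA
# APIs; real pipelines run thousands of environments in parallel on GPUs.
import numpy as np

class ToyPushEnv:
    """Hypothetical 1-D block-pushing task with friction and inertia."""
    def __init__(self, friction=0.1, target=5.0):
        self.friction, self.target = friction, target
        self.reset()

    def reset(self):
        self.pos, self.vel = 0.0, 0.0
        return np.array([self.pos, self.vel])

    def step(self, force, dt=0.05):
        # Simple Newtonian update with Coulomb-like friction.
        accel = force - self.friction * np.sign(self.vel)
        self.vel += accel * dt
        self.pos += self.vel * dt
        obs = np.array([self.pos, self.vel])
        reward = -abs(self.pos - self.target)   # closer to target is better
        done = abs(self.pos - self.target) < 0.05
        return obs, reward, done

def rollout(env, policy, horizon=200):
    """Collect one simulated episode; this is the data a trainer consumes."""
    obs, trajectory = env.reset(), []
    for _ in range(horizon):
        action = policy(obs)
        obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        if done:
            break
    return trajectory

# Placeholder policy; in a real stack this is the model being trained.
random_policy = lambda obs: np.random.uniform(-1.0, 1.0)
data = rollout(ToyPushEnv(), random_policy)
print(f"collected {len(data)} simulated transitions")
```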
“Compute into data”: why NVIDIA is betting on synthetic training worlds
Huang emphasized that real-world data collection is a bottleneck for robotics because it is slow, costly, and fails to cover the range of situations machines must handle. His answer is synthetic data generation grounded in physics, created in simulation and scaled using generative models.
“The physical world is diverse and unpredictable. Collecting real world training data is slow and costly and it’s never enough,” a narrated segment in the keynote stated. Huang’s pitch was that NVIDIA’s world model, Cosmos, is designed to address that gap by generating physically plausible scenarios at scale: “Cosmos turns compute into data.”
He described Cosmos as a “world foundation model” intended for physical AI, able to align language, images, 3D, and action, and to support tasks such as reasoning and trajectory prediction. “The ChatGPT moment for physical AI is nearly here,” the keynote narration said, before pointing to synthetic data as the practical route to reach it.
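Huang’s “compute into data” line maps onto a familiar technique, domain randomization: sample varied physical parameters in simulation and render labeled scenarios at scale. The sketch below illustrates only that idea; it does not use Cosmos’s actual interface, and every parameter range in it is invented.

```python
# Domain randomization sketch: spend compute to generate varied, labeled
# physics scenarios. This stands in for the role Huang assigns to Cosmos;
# it is not Cosmos's API, and all ranges below are illustrative.
import random

def random_scene():
    """Sample one physically varied scenario configuration."""
    return {
        "gravity":   random.uniform(9.6, 10.0),   # m/s^2, mild variation
        "friction":  random.uniform(0.05, 0.9),   # surface roughness
        "mass":      random.uniform(0.2, 5.0),    # object mass in kg
        "lighting":  random.choice(["dim", "office", "daylight"]),
        "occlusion": random.random() < 0.3,       # object partly hidden?
    }

def synthesize(n):
    """Generate n scenario configs; a simulator would render each into
    observations (images, depth, poses) plus ground-truth labels."""
    return [random_scene() for _ in range(n)]

dataset = synthesize(10_000)
print(dataset[0])
```

The economics are the point of the sketch: each sample costs GPU time rather than a physical trial, which is what “turns compute into data” means in practice.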
From cars to humanoids: one approach for many robot types
Huang used autonomous vehicles as a flagship example of physical AI, presenting them as a major robotics market and an entry point to broader machine autonomy. Later, he generalized the approach beyond cars, saying the same simulation-and-synthetic-data workflow applies across robotics categories.
“This basic technique … applies to every form of robotic systems,” he said, listing examples that ranged from “an articulator, a manipulator” to “a mobile robot” and “a fully humanoid robot.”
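One way to read that claim is that the pipeline stays fixed while an embodiment configuration swaps in. The sketch below is a hypothetical illustration of that structure; the types, fields, and robot specs are invented for clarity, not drawn from NVIDIA tooling.

```python
# Hypothetical illustration of one pipeline, many embodiments: the
# simulate -> synthesize -> train -> deploy loop stays fixed while an
# embodiment config changes. Names and fields are invented.
from dataclasses import dataclass

@dataclass
class Embodiment:
    name: str
    dof: int                  # degrees of freedom to actuate
    sensors: tuple[str, ...]  # observation sources

ARM      = Embodiment("manipulator",  dof=7,  sensors=("rgb", "joint_state"))
ROVER    = Embodiment("mobile_robot", dof=2,  sensors=("rgb", "lidar", "imu"))
HUMANOID = Embodiment("humanoid",     dof=28, sensors=("rgb", "imu", "joint_state"))

def train_for(robot: Embodiment):
    # Same stages for every robot type; only the configuration differs.
    print(f"{robot.name}: simulate {robot.dof}-DoF model, "
          f"synthesize data for {robot.sensors}, train, deploy to edge")

for robot in (ARM, ROVER, HUMANOID):
    train_for(robot)
```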
He also referenced NVIDIA’s humanoid robotics efforts directly, naming GR00T as a robotics model focused on “articulation, mobility, locomotion.” In a broader list of robotics activity around NVIDIA’s ecosystem, he pointed to a wide variety of robot makers and categories on the CES floor, including industrial and service robots, delivery robots, surgical robots, and collaborative manipulators.
Factories as robots, and robotics as an industrial interface
Huang also connected robotics to industrial digitization, describing a future where factories and production lines are designed, simulated, and optimized digitally before they are built. In that context, he cast manufacturing systems themselves as robotic-scale machines.
“We have to build the plants, the factories that manufacture you … And these manufacturing plants are going to be essentially gigantic robots,” he said during a segment introducing deeper work with industrial software and digital-twin tooling.
The keynote’s Siemens partnership segment reinforced that point, linking physical AI to automation needs driven by labor constraints. “As the global labor shortage worsens, we need automation powered by physical AI and robotics more than ever,” the narrated video said.
The implication: robotics becomes a platform, not a product category
Across the robotics sections of the keynote, Huang’s argument was consistent: the next stage of robotics depends on treating physical AI as a full platform stack—simulation, data generation, training, and edge inference—rather than isolated robots built case by case. In Huang’s words, it is not only about the machines themselves, but the infrastructure that gets them there: “It’s not just about the robots in the end … It’s about getting there.”
