CES 2026 opens in Las Vegas today with robotics announcements that largely orbit the same idea: the next wave of robots will be defined less by isolated demos and more by the infrastructure that lets machines perceive, decide, and operate continuously in real environments. Across multiple announcements on day one, the emphasis shifts toward full-stack “physical AI,” depth perception as a prerequisite for autonomy, and platforms meant to scale from consumer devices to industrial robots and full-size humanoids.
AgiBot brings a full humanoid portfolio to its U.S. CES debut
Chinese robotics company AgiBot used CES to mark what it calls its official entry into the U.S. market, presenting a portfolio that spans full-size humanoids, smaller humanoids, industrial systems, and a quadruped line. The company says it has shipped 5,000 robots to date and positions this as evidence it is beyond prototype and pilot stages.
At CES, AgiBot is showing several product families. Its A2 Series is described as full-size humanoids focused on multimodal interaction and autonomous navigation for guided presentations and showroom-type environments. The X2 Series is framed as a half-size humanoid platform aimed at entertainment, research, and education, with an emphasis on expressive movement and humanlike walking. For industrial environments, the company highlights the G2 Series, described as industrial-grade robots combining interactive intelligence with force-controlled manipulation. AgiBot also lists the D1 Series quadrupeds for inspection and operations in complex environments, as well as “OmniHand,” a dexterous manipulation system intended for its embodied platforms.
AgiBot’s underlying framing is an internal architecture it calls “one robotic body, three intelligences,” tying together interaction, manipulation, and locomotion. The company says multiple humanoids will perform coordinated live demonstrations at its booth in the North Hall.
Qualcomm expands its robotics push with a humanoid-and-AMR compute roadmap
On the enabling-technology side, Qualcomm used CES to introduce what it describes as a next-generation, full-stack robotics architecture combining hardware, software, and AI components, targeting everything from household robots to industrial autonomous mobile robots and full-size humanoids. The key product announcement is the Qualcomm Dragonwing IQ10 Series, positioned as a premium-tier robotics processor aimed at advanced AMRs and humanoids.
Qualcomm also points to ecosystem-building as part of the strategy, listing partners that include Advantech, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion, among others. The company says it is collaborating with Figure on next-generation compute architecture as Figure scales its humanoid platforms.
On the show floor, Qualcomm says it will display VinMotion’s Motion 2 humanoid powered by the Dragonwing IQ9 Series, alongside other demos and development tooling. It also references teleoperation tooling and an “AI data flywheel” approach for collecting data, training, and deploying skills across different robot form factors.
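Qualcomm has not published details of its data flywheel, but the concept it describes, collecting demonstration data, retraining, and redeploying skills across form factors, can be illustrated with a toy model. Everything below (class names, the Episode structure, the versioning scheme) is an invented sketch, not Qualcomm code.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One teleoperated demonstration: observations paired with operator actions."""
    robot_type: str
    observations: list
    actions: list

@dataclass
class SkillFlywheel:
    """Toy collect -> train -> deploy loop; purely illustrative of the concept."""
    episodes: list = field(default_factory=list)
    policy_version: int = 0

    def collect(self, episode: Episode) -> None:
        # Teleoperation data from any robot lands in one shared pool.
        self.episodes.append(episode)

    def train(self) -> int:
        # Stand-in for retraining: bump the policy version when new data exists.
        if self.episodes:
            self.policy_version += 1
        return self.policy_version

    def deploy(self, robot_type: str) -> str:
        # The same trained policy can be pushed to any form factor.
        return f"{robot_type}:policy-v{self.policy_version}"
```

The flywheel effect comes from the loop: each deployed robot generates more episodes, which feed the next training round across every form factor sharing the pool.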
RealSense outlines 2026 robotics trends with perception as the core constraint
RealSense, known for depth cameras and perception hardware, used CES to publish a five-trend view of where robotics is headed in 2026. The company’s central claim is that visual perception will be the foundation that determines whether autonomy can scale across AMRs, humanoids, and inspection systems operating in unstructured human environments.
The five trends it highlights are: perception as the foundation of physical AI; robots shifting from scripts to missions, including vision-language-action models that execute goals rather than pre-programmed steps; humanoids gaining momentum with viability tied to reliable low-latency vision; autonomy scaling through interoperable ecosystems; and “invisible” automation, where systems operate continuously and blend into operations rather than standing out as special-purpose deployments.
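The claim that perception gates autonomy can be made concrete with the simplest consumer of a depth frame: a stop-distance safety check. The sketch below is illustrative only, not RealSense SDK code; the synthetic frame values, the millimeter encoding, and the 0.5 m threshold are assumptions.

```python
import numpy as np

def nearest_obstacle_m(depth_raw: np.ndarray, scale: float = 0.001) -> float:
    """Distance in meters to the closest valid pixel in a depth frame.

    depth_raw: H x W array of raw depth values; 0 means 'no reading' (dropout).
    scale: meters per raw unit (0.001 for millimeter-encoded frames).
    """
    valid = depth_raw[depth_raw > 0]
    if valid.size == 0:
        return float("inf")  # nothing in range, or total sensor dropout
    return float(valid.min()) * scale

def should_stop(depth_raw: np.ndarray, stop_distance_m: float = 0.5) -> bool:
    """Safety gate: halt motion if any obstacle is closer than stop_distance_m."""
    return nearest_obstacle_m(depth_raw) < stop_distance_m

# Synthetic 4x4 frame: mostly 2 m away, one pixel at 0.3 m, one dropout (0).
frame = np.full((4, 4), 2000, dtype=np.uint16)
frame[1, 2] = 300   # obstacle at 0.3 m
frame[0, 0] = 0     # missing reading, ignored
```

Even this trivial gate shows why latency and reliability matter: a stale or dropout-heavy frame makes the stop decision wrong in exactly the situations where it is needed.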
RealSense links these trends to deployments and demos it says are visible at CES, referencing companies such as Unitree, LimX Dynamics, Mobile Industrial Robots (MiR), and Intel Foundry with Boston Dynamics.
Aptiv pitches edge AI as a bridge from vehicles to robots
Aptiv’s CES message centers on “intelligent edge” computing—processing data locally rather than relying on centralized cloud systems—and the idea that technology developed for automotive autonomy can be extended into other safety- and reliability-critical domains, including robotics.
For robotics specifically, Aptiv says its CES pavilion includes demonstrations tied to robotics and aerospace applications. It describes an AI-powered collaborative robot and a next-generation autonomous mobile robot for scalable material handling that integrate Aptiv’s PULSE sensor and compute solutions. The company also references its LINC software platform as part of the embedded, real-time stack behind these kinds of applications.
Realbotix and FUTR connect AI agents to a humanoid-style physical interface
Realbotix announced a pilot partnership with FUTR aimed at bringing AI agents into a physical, interactive form. The companies describe an integration of FUTR’s AI agent platform with Realbotix robotics, with the goal of letting users interact with a personal AI agent through a human-like interface using voice, expression, and movement rather than a purely screen-based experience.
The concept is tied to FUTR’s positioning around privacy-first data management and token-enabled payments, with an initial robotic AI agent pilot expected in the first half of 2026. The partners say they plan to evaluate results after the pilot and consider broader commercial opportunities, including FUTR-branded Realbotix robots and deeper technical integration via APIs.
Agora targets the “AI companion” category with a device kit and real-time interaction stack
Alongside industrial and humanoid narratives, CES day one also included an ecosystem pitch for consumer-facing “AI companions.” Agora presented itself as infrastructure for physical AI devices that need real-time voice, responsiveness, and always-on connectivity. It promotes an out-of-the-box foundation that combines embedded hardware building blocks and core systems for functions such as voice, hearing, vision, and real-time awareness.
Agora highlights its “Convo AI Device Kit R2,” built around a conversational AI engine designed to handle timing, interruptions, and responsiveness, along with an open standard it calls AOSL, described as an open-sourced interface layer intended to reduce fragmentation across chips and operating systems for embedded deployments. It also lists multiple devices that attendees can experience at CES, including reading companions and expressive desktop or companion robots, positioned around interaction rather than task automation.
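The turn-taking problem such an engine has to solve, deciding when the device should stop speaking because the user barged in, can be sketched as a tiny state machine. This is an invented model, not Agora's API; the state names and the frame-count threshold are assumptions.

```python
from enum import Enum, auto

class TurnState(Enum):
    LISTENING = auto()
    SPEAKING = auto()

class BargeInController:
    """Minimal turn-taking state machine with barge-in (interruption) handling.

    The agent speaks until the user produces sustained speech; a shorter burst
    (below min_interrupt_frames) is treated as a backchannel and ignored.
    """
    def __init__(self, min_interrupt_frames: int = 3):
        self.state = TurnState.LISTENING
        self.min_interrupt_frames = min_interrupt_frames
        self._user_speech_run = 0

    def start_response(self) -> None:
        """The agent begins speaking its reply."""
        self.state = TurnState.SPEAKING
        self._user_speech_run = 0

    def on_audio_frame(self, user_speaking: bool) -> bool:
        """Feed one voice-activity result per audio frame. Returns True when
        the agent should cut off its own speech (barge-in confirmed)."""
        if self.state is not TurnState.SPEAKING:
            return False
        self._user_speech_run = self._user_speech_run + 1 if user_speaking else 0
        if self._user_speech_run >= self.min_interrupt_frames:
            self.state = TurnState.LISTENING  # yield the turn to the user
            return True
        return False
```

The design choice worth noting is the sustained-speech threshold: without it, every cough or “mm-hm” would cut the agent off, which is the kind of responsiveness tuning the announcement alludes to.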
The day-one pattern: platforms first, robots second
Taken together, CES 2026’s first-day robotics announcements lean heavily toward the layers that sit underneath robot behavior: compute roadmaps, perception stacks, edge processing, and software integration paths that turn a single robot into a repeatable product line. The show-floor robot remains the visible artifact, but the messaging focuses on what companies claim will make robotics dependable enough to deploy at scale—whether the target is a warehouse AMR, a factory cobot, a humanoid platform, or a consumer AI companion.
