AGIBOT’s booth demo focuses on three tiers of humanoid robotics: the X2 for expressive motion and HRI, the G2 for industrial manipulation, and the A2 for front-desk-style service work, all framed around embodied AI rather than scripted animatronics. The core idea is a reusable perception–planning–control stack that scales from entertainment gestures to contact-aware manipulation, with an emphasis on fast content authoring for both demos and deployment. https://www.agibot.com/products/X2
On the X2, the headline is compact whole-body motion with roughly 25–30 degrees of freedom, enough for head and waist articulation, coordinated arm swings, and stable footwork while it runs choreographed routines. In the interview, new dances are described as being learned from a performer video, converted into a BVH-style motion file on an internal platform, then retargeted to the robot’s kinematic chain via inverse kinematics, timing alignment, and balance constraints so the routine executes repeatably on hardware.
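To make that video-to-dance pipeline concrete, here is a rough Python sketch of the retargeting step as I understand it. The joint map, limits, and rates are placeholder values I made up, not AGIBOT’s actual interface, and a real system would layer full-body IK and balance constraints on top.

```python
# Hypothetical sketch: map BVH-style capture frames to robot joint targets,
# clamp to joint limits, and resample from video frame rate to control rate.
# All names and numbers here are illustrative placeholders.
import numpy as np

# Capture-skeleton joint -> robot joint (made-up names, not the X2's real model).
BVH_TO_ROBOT = {"Spine": "waist_yaw", "Head": "neck_pitch",
                "LeftArm": "l_shoulder_pitch", "RightArm": "r_shoulder_pitch"}

# Per-joint limits in radians (placeholder values, not real X2 specs).
JOINT_LIMITS = {"waist_yaw": (-0.8, 0.8), "neck_pitch": (-0.5, 0.5),
                "l_shoulder_pitch": (-2.0, 2.0), "r_shoulder_pitch": (-2.0, 2.0)}

def retarget_frame(bvh_angles: dict) -> dict:
    """Map one captured frame to robot joint targets, clamped to limits."""
    targets = {}
    for bvh_joint, angle in bvh_angles.items():
        robot_joint = BVH_TO_ROBOT.get(bvh_joint)
        if robot_joint is None:
            continue  # joints the robot doesn't have are simply dropped
        lo, hi = JOINT_LIMITS[robot_joint]
        targets[robot_joint] = float(np.clip(angle, lo, hi))
    return targets

def resample_and_retarget(frames: list, src_fps: float, ctrl_hz: float) -> list:
    """Time-align capture frames (e.g. 30 fps video) to the control rate."""
    n_out = int(len(frames) / src_fps * ctrl_hz)
    return [retarget_frame(frames[min(int(i / ctrl_hz * src_fps), len(frames) - 1)])
            for i in range(n_out)]

# Two captured "frames" of a wave: the head nods while the right arm raises.
capture = [{"Head": 0.1, "RightArm": -1.2}, {"Head": 0.0, "RightArm": -2.5}]
trajectory = resample_and_retarget(capture, src_fps=30.0, ctrl_hz=200.0)
print(len(trajectory), trajectory[-1])   # the -2.5 rad request gets clamped to the limit
```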
The G2 segment shifts from showmanship to production constraints: precise joints, force/impedance control, and vision-driven action selection for tasks like wiping a glass panel, pick-and-place, and packing. The team says it is already deployed in China, starting as a 10-unit MVP with plans to expand to about 120 units in a single factory, which matches the broader push toward wheeled humanoid manipulators optimized for uptime, payload handling, and safer close-proximity work.
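For contact tasks like panel wiping, the key idea behind force/impedance control is that the arm tracks a virtual spring-damper around the desired contact pose instead of rigidly tracking position, so contact forces stay bounded. Below is a minimal 1-D toy sketch of that principle; the gains, panel stiffness, and mass are invented numbers, not G2 parameters.

```python
# Toy 1-D impedance-control sketch (normal direction of a wiping contact).
# Gains, mass, and panel stiffness are invented numbers, not G2 specs.
K = 300.0    # virtual stiffness   [N/m]
D = 40.0     # virtual damping     [N*s/m]
dt = 0.002   # 500 Hz control loop
mass = 2.0   # effective tool mass [kg]

def impedance_force(x, x_dot, x_des, x_des_dot=0.0):
    """Spring-damper force pulling the tool toward the desired contact depth."""
    return K * (x_des - x) + D * (x_des_dot - x_dot)

# The tool starts 2 cm off the panel (x < 0) and is commanded 3 mm "into" it,
# so it settles with a small, bounded contact force instead of ramming through.
x, x_dot = -0.02, 0.0
x_des = 0.003
for _ in range(2000):                      # simulate 4 s with semi-implicit Euler
    f_cmd = impedance_force(x, x_dot, x_des)
    f_contact = -5000.0 * max(x, 0.0)      # stiff panel pushes back when penetrated
    x_dot += (f_cmd + f_contact) / mass * dt
    x += x_dot * dt
print(f"steady-state contact force ≈ {5000.0 * max(x, 0.0):.2f} N")
```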
A2 is presented as an interactive service robot (nicknamed Luka in the demo) aimed at proactive greeting in hotels, company lobbies, and event reception, mixing motion libraries with conversational AI. The architecture described is hybrid: local locomotion and some vision processing run on-device (a 16-core CPU plus a Jetson-class GPU), while higher-level language/vision models can be cloud-backed when connectivity allows, which is a practical fit for a noisy show floor at CES Las Vegas 2026.
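The on-device/cloud split is the part any show-floor deployment has to get right, so here is a hedged sketch of how such a fallback router could look; the host, intents, and canned phrases are placeholders, not anything AGIBOT described in detail.

```python
# Hedged sketch of an edge/cloud split: use the cloud model when the network
# is reachable, otherwise fall back to small on-device responses.
# Host, intents, and phrases are placeholders, not AGIBOT's actual stack.
import socket

CLOUD_HOST, CLOUD_PORT, TIMEOUT_S = "llm.example.com", 443, 0.5

LOCAL_INTENTS = {  # tiny on-device fallback table
    "greet": "Welcome! How can I help you today?",
    "directions": "The front desk is straight ahead on your right.",
}

def cloud_reachable() -> bool:
    """Cheap connectivity probe before committing to a cloud round trip."""
    try:
        with socket.create_connection((CLOUD_HOST, CLOUD_PORT), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def respond(intent: str, utterance: str) -> str:
    if cloud_reachable():
        # Placeholder: a real system would call its hosted LLM/VLM here.
        return f"[cloud reply to: {utterance!r}]"
    # Degrade gracefully to canned, locally stored responses.
    return LOCAL_INTENTS.get(intent, "Sorry, I didn't catch that.")

print(respond("greet", "Hi Luka!"))
```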
Across all three robots, the interesting engineering thread is how motion libraries, multimodal perception, and safety behaviors (collision detection, compliant arms, recovery balance) are being packaged into reusable skills rather than one-off demos. If AGIBOT keeps tightening the loop between data capture, imitation/RL training, and real-world task validation, the roadmap hinted at here (more dexterous manipulation, reliable handoffs like room-card delivery, and broader business scenarios) reads as credible.
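As a thought experiment on what a "reusable skill" might mean in practice, here is a purely illustrative wrapper (entirely my own sketch, not AGIBOT code) that bundles a motion primitive with its perception gate and a safety/recovery behavior, so the same harness can run a dance beat or a pick.

```python
# Purely illustrative "reusable skill" shape: one object bundles a motion
# primitive with its perception gate and a safety/recovery behavior.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    precondition: Callable[[], bool]   # e.g. target visible, grasp pose valid
    execute: Callable[[], bool]        # motion primitive; returns True on success
    recover: Callable[[], None]        # compliant stop / rebalance on failure

def run_skill(skill: Skill) -> bool:
    if not skill.precondition():
        return False                   # perception gate before any motion starts
    try:
        ok = skill.execute()
    except RuntimeError:               # e.g. collision or force-limit trip
        ok = False
    if not ok:
        skill.recover()                # same safety path for dance or pick skills
    return ok

wave = Skill("wave", lambda: True, lambda: True, lambda: None)
print(run_skill(wave))                 # True
```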
I’m publishing 100+ videos from CES 2026, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁
Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY



