Hengbot (heard as “HBO Innovation” in the interview) is showing Sirius, a trainable AI robot dog aimed at consumer human-robot interaction: a compact quadruped that pairs a camera “face” display with on-device perception and behavior control, and feels closer to a configurable pet than to a fixed animatronic. The pitch is less one scripted demo and more a platform where personalities, motion “tricks,” and UI can be tuned over time, with a developer angle via exposed APIs. https://hengbot.com/pages/hengbot-sirius-ai-dog-robot
In the booth demo you see multiple control paths (gesture triggers, voice commands, web control over the local network, and a gamepad/joystick for teleoperation), plus an autonomous mode meant to react to nearby motion. Hardware cues in the build include a forward camera, a small feedback screen, capacitive touch on the head “hat,” and expressive ear/pose behaviors that simulate attention, sleep, and “annoyed” states when the dog is repeatedly poked, all wrapped into a light, indoor-friendly form.
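With that many input channels active at once, some arbitration has to decide which one drives the robot at any moment. Here is a minimal, purely hypothetical sketch of priority-based arbitration (teleop beats voice, voice beats autonomy); the source names, `PRIORITY` list, and `arbitrate` function are my illustration, not Hengbot's actual control stack.

```python
# Hypothetical control-path arbitration for a robot with several
# concurrent command sources. Direct teleop should always win over
# autonomous behavior; anything else falls back to idle.

PRIORITY = ["gamepad", "voice", "gesture", "autonomous"]

def arbitrate(active_inputs: dict) -> str:
    """Return the command from the highest-priority source with input."""
    for source in PRIORITY:
        cmd = active_inputs.get(source)
        if cmd:
            return cmd
    return "idle"

# The gamepad command overrides the autonomous mode's motion tracking.
print(arbitrate({"autonomous": "track_motion", "gamepad": "walk_forward"}))
```

The same pattern extends naturally: adding a new input modality only means inserting it at the right spot in the priority list.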
From the wider product positioning, Sirius is framed as an edge-AI companion with multimodal interaction and a large prebuilt motion library (useful for animation, HRI research, and “creator” workflows), rather than as a pure robotics lab platform. That emphasis shows up at CES Las Vegas 2026, where the conversation shifts toward stability, repeatable recovery after falls, and software iteration: trying to make the robot read as lifelike behavior, not just plastic motion, in a tight feedback loop.
The limitations are also part of the story: about 1–1.5 hours of runtime on the current demo unit, and perception features that are still being upgraded (e.g., edge/table detection so the dog doesn't step off a surface). Viewers will recognize the classic quadruped-stack tradeoffs here: balance control, foot placement, contact sensing, and vision-based scene understanding, where small improvements in state estimation and policy tuning can change the whole “pet” illusion in a room.
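To make the state-estimation point concrete, here is a textbook complementary filter for body pitch, the simplest usable attitude estimator for a small legged robot. This is a generic illustration, not anything from Sirius's firmware; the function name and the `alpha` blend factor are my own.

```python
import math

def complementary_pitch(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro rate (rad/s) with an accelerometer tilt estimate (rad).

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free. Blending them gives a stable pitch estimate.
    """
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt from gravity vector
    gyro_pitch = prev_pitch + gyro_rate * dt     # integrated angular rate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# A level, motionless body: the estimate should stay at zero.
pitch = 0.0
for _ in range(100):
    pitch = complementary_pitch(pitch, gyro_rate=0.0,
                                accel_x=0.0, accel_z=9.81, dt=0.01)
```

Real quadruped stacks go well beyond this (Kalman filtering, leg-odometry fusion, contact estimation), but tuning even one blend factor like `alpha` already changes how "alive" recovery motions feel.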
Kickstarter fulfillment is described in batches, with the team collecting early-backer feedback before a broader retail push, and there’s clear demand for a self-charging dock so the dog can roam and return to power without human help. The most interesting long-term thread is customization: if the API access matures, Sirius could become a programmable embodied agent where behavior, voice, and “character” are modular—useful for education demos, elder-companionship experiments, or just road-testing what people actually want from a home robot.
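What a "modular character" could look like in code: a small, entirely hypothetical sketch where a personality is just a mapping from event triggers to behaviors. None of these names come from Hengbot's published API; they only illustrate why swappable characters are attractive for education and HRI experiments.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical interface: Hengbot has not published this API.

@dataclass
class Personality:
    """A swappable 'character': maps event triggers to behavior callbacks."""
    name: str
    behaviors: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def on(self, trigger: str, action: Callable[[], str]) -> None:
        self.behaviors[trigger] = action

    def react(self, trigger: str) -> str:
        # Unknown triggers fall back to an idle animation.
        return self.behaviors.get(trigger, lambda: "idle")()

grumpy = Personality("grumpy")
grumpy.on("head_poke", lambda: "ears_back")   # the 'annoyed' state from the demo
grumpy.on("voice_hello", lambda: "tail_wag")

print(grumpy.react("head_poke"))
print(grumpy.react("unknown_event"))
```

Because the character is data rather than hard-coded logic, swapping `grumpy` for a different `Personality` instance changes how the same hardware responds to the same pokes and voice commands.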
I’m publishing 100+ videos from CES 2026, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁
Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY



