Advantech uses this demo to frame humanoid robotics as a perception and compute problem: the robot needs to fuse multiple camera feeds and depth data into scene understanding fast enough to act in real time. At the center is the AFE-A702, a Jetson Thor-based robotic control system built for high-bandwidth sensor input, AI inference, and robot I/O, so the conversation is less about a single robot and more about the full edge-AI pipeline needed to make one practical. https://www.advantech.com/emt/products/8d5aadd0-1ef5-4704-a9a1-504718fb3b41/afe-a702/mod_13487539-d213-4c8f-a027-4be489e0fe1a
The live view makes that idea concrete. Four GMSL cameras and depth sensing are used to build a machine view of the scene, while segmentation separates people, objects, and other obstacles into semantic classes. That is the core requirement for humanoids, AMRs, and service robots: not just video capture, but low-latency perception, sensor fusion, and AI models that can support navigation, obstacle avoidance, workspace awareness, and interaction in dynamic environments at the edge.
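To make the idea of fusing segmentation with depth concrete, here is a minimal, illustrative Python sketch (not Advantech's code; the class IDs, image sizes, and values are assumptions): given a per-pixel class map and an aligned depth image, it reports the nearest distance to anything labeled as a person or obstacle, which is the kind of signal a navigation or obstacle-avoidance layer would consume.

import numpy as np

# Assumed class IDs from a segmentation model (illustrative only).
PERSON, OBSTACLE = 1, 2

def nearest_by_class(seg_mask: np.ndarray, depth_m: np.ndarray) -> dict:
    """Return the closest valid depth reading (meters) for each class of interest.

    seg_mask: (H, W) integer class map from the segmentation model.
    depth_m:  (H, W) depth image in meters, aligned to the same camera; 0 = invalid.
    """
    result = {}
    for name, cls in (("person", PERSON), ("obstacle", OBSTACLE)):
        d = depth_m[(seg_mask == cls) & (depth_m > 0.0)]
        result[name] = float(d.min()) if d.size else None
    return result

# Synthetic data standing in for one synchronized camera + depth frame.
seg = np.zeros((480, 640), dtype=np.uint8)
seg[200:300, 300:400] = PERSON
depth = np.full((480, 640), 5.0, dtype=np.float32)
depth[200:300, 300:400] = 1.8
print(nearest_by_class(seg, depth))   # {'person': 1.8, 'obstacle': None}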
What makes the platform interesting is the surrounding ecosystem rather than the raw compute figure alone. Advantech is positioning the AFE-A702 inside a broader robotics stack with integrated camera drivers, support for LiDAR, IMU, and other sensors, JetPack 7, ROS 2-oriented tooling through Advantech Robotic Suite, and links to Isaac ROS, simulation, and deployment workflows. Filmed at Embedded World 2026 in Nuremberg, the demo fits the current shift toward production-ready robotics platforms that reduce custom integration work and shorten the path from prototype to field.
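On the ROS 2 side, the kind of time alignment a multi-camera pipeline needs is commonly done with message_filters. The sketch below is an assumption-heavy illustration rather than Advantech's actual interface (topic names like /gmsl/camera0/image_raw and /gmsl/camera0/depth are placeholders): it pairs one RGB stream with its depth stream by timestamp so a perception callback always sees a consistent frame pair.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import Subscriber, ApproximateTimeSynchronizer

class RgbDepthSync(Node):
    """Pair one camera's RGB and depth frames by timestamp before running perception."""

    def __init__(self):
        super().__init__("rgb_depth_sync")
        # Topic names are placeholders; a real driver stack defines its own.
        rgb = Subscriber(self, Image, "/gmsl/camera0/image_raw")
        depth = Subscriber(self, Image, "/gmsl/camera0/depth")
        # Allow up to 20 ms of timestamp skew between the two streams.
        self.sync = ApproximateTimeSynchronizer([rgb, depth], queue_size=10, slop=0.02)
        self.sync.registerCallback(self.on_pair)

    def on_pair(self, rgb_msg: Image, depth_msg: Image):
        # This is where segmentation and fusion would run on the aligned pair.
        self.get_logger().info(
            f"synced pair at t={rgb_msg.header.stamp.sec}.{rgb_msg.header.stamp.nanosec}"
        )

def main():
    rclpy.init()
    rclpy.spin(RgbDepthSync())
    rclpy.shutdown()

if __name__ == "__main__":
    main()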
The technical message is clear: humanoid and mobile robotics now depend on scalable multi-sensor vision more than on isolated controller boards. Advantech's recent updates around validated GMSL camera integration and Jetson Thor-class robotics controllers suggest it wants to cover the whole route from thermal and mechanical design-in to perception software and AI inference. In that context, this demo is really about how a robot can turn synchronized camera and depth streams into a usable understanding of people, obstacles, and free space fast enough to act on in a real deployment.
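As one last illustration of what "usable understanding of free space" can mean, this sketch (again an assumption, not the demo's pipeline) collapses a depth image plus an obstacle mask into a per-column clearance profile, roughly the information a local planner needs to decide where the robot can move.

import numpy as np

def clearance_profile(depth_m: np.ndarray, obstacle_mask: np.ndarray, max_range: float = 5.0) -> np.ndarray:
    """For each image column, distance (m) to the nearest obstacle pixel, capped at max_range.

    depth_m:       (H, W) depth image in meters, 0 = invalid.
    obstacle_mask: (H, W) boolean mask of pixels the segmentation labels as people/obstacles.
    """
    d = np.where(obstacle_mask & (depth_m > 0.0), depth_m, np.inf)
    profile = d.min(axis=0)                 # nearest obstacle per column
    return np.minimum(profile, max_range)   # treat "nothing seen" as max_range

# Synthetic frame: one obstacle 1.2 m away occupying the right third of the view.
depth = np.full((480, 640), 4.0, dtype=np.float32)
mask = np.zeros((480, 640), dtype=bool)
depth[:, 426:] = 1.2
mask[:, 426:] = True
profile = clearance_profile(depth, mask)
print(profile[:5], profile[-5:])   # ~[5. 5. 5. 5. 5.] vs [1.2 1.2 1.2 1.2 1.2]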
All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga



