Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup pairs an off-the-shelf dexterous hand with an RZ/V2H board: a camera tracks the operator's hand, the board runs gesture inference locally, and the recognized pose is mapped onto the hand's motor axes so the robot mirrors the operator in real time. It is a practical example of embedded vision, gesture recognition, motor control, and low-latency human-machine interaction coming together on one platform.
What makes the RZ/V2H part interesting here is not just raw AI throughput, but the system balance behind it. Renesas positions it for robotics and vision AI with multicore processing, DRP-AI acceleration, image-processing capability, and support for multiple camera streams, which fits workloads such as hand tracking, perception fusion, and coordinated motion. In this context the demo is less about a robotic hand alone and more about how sensor input, inference, and actuator control can be collapsed into a compact edge robotics design.
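The core of that closed loop is the mapping from a per-frame inference result to actuator commands. As a rough illustration only, the sketch below converts hypothetical per-finger "curl" values (0 = extended, 1 = curled, as a hand-tracking model might output) into hobby-servo pulse widths; the landmark format, finger names, and 1000–2000 µs servo range are assumptions, not Renesas's actual pipeline or API.

```python
def curl_to_pulse_us(curl, open_us=1000, closed_us=2000):
    """Map a normalized finger-curl value in [0, 1] to a servo pulse
    width in microseconds (typical 1000-2000 us hobby-servo range)."""
    curl = max(0.0, min(1.0, curl))  # clamp noisy inference output
    return open_us + curl * (closed_us - open_us)

def frame_to_commands(curls):
    """Convert one inference result (five per-finger curl values) into
    a per-axis command dict for the hand's five finger motors."""
    fingers = ["thumb", "index", "middle", "ring", "pinky"]
    return {f: round(curl_to_pulse_us(c)) for f, c in zip(fingers, curls)}

# Example: index finger half-curled, last three fingers closed.
commands = frame_to_commands([0.0, 0.5, 1.0, 1.0, 1.0])
```

In a real system this mapping would also need per-joint calibration, rate limiting, and safety clamps, but the shape of the loop (camera frame, local inference, small per-axis command vector) is what lets the whole thing stay on one low-latency edge device.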
The second demo shifts toward collaborative robotics and tool assistance. Here, a robotic arm based on the RZ/V2N platform accepts both voice commands and hand gestures, using a ROS 2 architecture to identify a requested tool, move to the right position, and present it to the operator. That makes the story broader than vision AI: it becomes a multimodal interface problem involving speech, gesture, robot middleware, task flow, and safe human-robot collaboration on the edge.
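The interesting part of such a setup is the fusion step, where two input modalities are combined into one task. In the demo this logic would live in ROS 2 nodes exchanging topic messages; the plain-Python sketch below only illustrates the intent-fusion idea, and the tool names, gesture labels, and poses are invented for the example.

```python
# Illustrative tool table: name -> (x, y) pick pose in meters (assumed values).
TOOL_POSES = {"screwdriver": (0.30, 0.10), "wrench": (0.30, -0.10)}

def parse_voice(utterance):
    """Return the requested tool name if the utterance mentions one."""
    for tool in TOOL_POSES:
        if tool in utterance.lower():
            return tool
    return None

def fuse(voice_utterance, gesture_label):
    """Combine the two modalities: voice names the tool, and a 'point'
    or 'open_palm' gesture confirms the hand-over request."""
    tool = parse_voice(voice_utterance)
    if tool and gesture_label in ("point", "open_palm"):
        return {"action": "fetch", "tool": tool, "target": TOOL_POSES[tool]}
    return None  # incomplete or ambiguous request: do nothing

task = fuse("Please hand me the wrench", "point")
```

Requiring agreement between modalities before the arm moves is one simple way to get the "safe collaboration" property: a stray word or a stray gesture alone never triggers motion.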
MXT’s role adds another useful layer, because this is not only a silicon story but also an ecosystem story. As a Renesas preferred partner, MXT has worked with Renesas across modules, evaluation kits, and custom boards, and the board shown here is described as a Raspberry Pi form factor design that can work with existing expansion hardware. That matters for faster prototyping, easier integration, and lower friction when developers want to move from proof of concept to a more product-like robotics platform.
Seen from Embedded World 2026 in Nuremberg, these demos reflect where industrial and service robotics are heading: more cameras, more AI models, more joints, more natural interfaces, and tighter integration between Linux, ROS 2, vision pipelines, and motor control. The most useful takeaway is not hype around humanoids, but the way Renesas is stacking practical building blocks for gesture-controlled manipulators, voice-driven cobots, and embedded robot perception where latency, power, and system cost still matter.



