NXP Edge AI: Zephyr RTOS + eIQ Neutron NPU, moving ML from cloud to the edge

Posted by – December 18, 2025
Category: Exclusive videos

NXP’s Mike Preser talks about the shift from cloud-only AI to edge inference, where ML runs directly on embedded silicon for lower latency, lower bandwidth, and tighter power budgets, especially in vision, audio, and sensor-fusion workloads that can’t always stream raw data upstream. More info: https://www.nxp.com/design/design-center/development-boards-and-designs/ARA-2-2M-MODULE


A concrete example is pairing NXP platforms with Kinara’s Ara-2 neural accelerator, which targets up to 40 eTOPS for on-device inference. In practice that means pushing transformer and CNN workloads through quantized INT8/INT4 pipelines, keeping memory traffic and thermals under control while still enabling LLM, VLM, and multimodal perception close to the camera or gateway device.
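
For context on what a quantized INT8 pipeline actually does to the numbers, here is a minimal C sketch of the standard affine (scale + zero-point) quantization scheme; the scale and zero-point values are made up for illustration and are not taken from the video or from any NXP/Kinara toolchain.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Affine (asymmetric) INT8 quantization: q = round(x / scale) + zero_point */
static int8_t quantize_int8(float x, float scale, int zero_point)
{
    long q = lroundf(x / scale) + zero_point;
    if (q < -128) q = -128;   /* clamp to the INT8 range */
    if (q > 127)  q = 127;
    return (int8_t)q;
}

/* Dequantization back to float: x ~= scale * (q - zero_point) */
static float dequantize_int8(int8_t q, float scale, int zero_point)
{
    return scale * (float)(q - zero_point);
}

int main(void)
{
    /* Example calibration parameters (illustrative only). */
    const float scale = 0.05f;
    const int zero_point = 3;

    float activation = 1.27f;
    int8_t q = quantize_int8(activation, scale, zero_point);
    printf("float %.3f -> int8 %d -> float %.3f\n",
           activation, q, dequantize_int8(q, scale, zero_point));
    return 0;
}

The point of the scheme is that weights and activations travel as 1-byte integers (half a byte for INT4), which is what keeps memory traffic and thermals down on the accelerator.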

Rather than “edge vs cloud,” the pattern described is a split pipeline: train and evaluate in the cloud, then compile, calibrate, and deploy optimized models on-device, with selective telemetry back to the cloud for monitoring and continuous improvement. NXP leans heavily on software here: toolchains, SDKs, and ML runtimes, plus strong engagement with open-source ecosystems like Zephyr RTOS to keep the developer stack portable.
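
As a rough illustration of the on-device half of that split pipeline, here is a minimal Zephyr-style sketch in C. The run_inference() and send_telemetry() functions are hypothetical stubs standing in for whatever eIQ runtime call and telemetry path a real project would wire up; only the Zephyr kernel calls (printk, k_msleep) are real APIs.

#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>
#include <string.h>

#define FRAME_BYTES      1024
#define REPORT_THRESHOLD 80   /* report when confidence > 80% */

/* Hypothetical stub: a real build would call the deployed, pre-compiled
 * model through the vendor ML runtime here. Returns confidence in percent. */
static int run_inference(const int8_t *frame, size_t len)
{
    ARG_UNUSED(frame);
    ARG_UNUSED(len);
    return 90; /* pretend the model is confident */
}

/* Hypothetical stub: selective telemetry back to the cloud (a small event
 * summary over whatever transport the product uses), never the raw frame. */
static void send_telemetry(const char *event, int confidence)
{
    printk("telemetry: %s (%d%%)\n", event, confidence);
}

int main(void)
{
    static int8_t frame[FRAME_BYTES];

    while (1) {
        /* 1. Acquire a sensor/camera frame (stubbed out here). */
        memset(frame, 0, sizeof(frame));

        /* 2. Run the quantized model locally: low latency, no raw upload. */
        int confidence = run_inference(frame, sizeof(frame));

        /* 3. Send only selective telemetry upstream for monitoring and
         *    continuous improvement of the cloud-side training loop. */
        if (confidence > REPORT_THRESHOLD) {
            send_telemetry("detection", confidence);
        }

        k_msleep(100); /* pace the loop; real code would block on a frame-ready event */
    }
    return 0;
}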

The broader context is also supply and geography: NXP’s U.S. operations are anchored in Austin, while European manufacturing capacity is expanding through the ESMC joint venture in Dresden alongside TSMC, Bosch, and Infineon. Filmed on the Embedded World North America 2025 show floor, the takeaway is that edge AI is now less about “can it run” and more about repeatable deployment: performance-per-watt, software maturity, and a clean path from prototype to product.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=HUrm5GOJtTA