Edge Impulse (now part of Qualcomm Technologies) walks through an edge generative-AI demo built on an Advantech AIR-055 industrial box powered by a Qualcomm Dragonwing IQ-9075 (IQ9 series) rated at up to 100 TOPS. The setup uses a small “parking lot” scene to show how computer-vision signals can be turned into useful language outputs locally, without pushing raw video to a server. https://www.edgeimpulse.com/
—
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices, integrated into displays, set-top boxes, laptops, audio-video receivers, and other product types. Because of this global usage, manufacturers, resellers, integrators, and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance, which means sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification, which supports 96 Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60, along with more high-quality options such as uncompressed full-chroma formats like 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.
—
The core pattern is a two-stage pipeline: a live camera feed is ingested via Qualcomm’s video/vision stack (IM SDK), then a YOLO-style object detector is trained and deployed quickly in Edge Impulse to identify vehicles and extract structured facts (class, bounding box, confidence, and any extra attributes you train). That compact, machine-readable payload is then passed to an on-device LLM so you can ask constrained questions like color, make/model, or visible damage, while keeping outputs more predictable at the edge.
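Here is a minimal Python sketch of that handoff. The detector output format and the helper names (detections_to_payload, build_prompt) are illustrative stand-ins, not actual Edge Impulse or Qualcomm IM SDK APIs:

import json

def detections_to_payload(detections, frame_id):
    # Compress per-frame detector hits into a compact, machine-readable summary.
    return {
        "frame": frame_id,
        "objects": [
            {"class": d["label"], "bbox": d["bbox"], "confidence": round(d["score"], 3)}
            for d in detections
            if d["score"] >= 0.5  # drop low-confidence hits before the LLM sees them
        ],
    }

def build_prompt(payload, question):
    # Constrain the on-device LLM to reason only over the structured facts.
    return (
        "You are given structured detections from a parking-lot camera.\n"
        f"Detections: {json.dumps(payload)}\n"
        f"Answer using only these facts: {question}"
    )

# Stand-in detector output; a real deployment would pull this from the
# deployed Edge Impulse model running on the live camera feed.
sample = [{"label": "vehicle", "bbox": [120, 80, 340, 210], "score": 0.91}]
print(build_prompt(detections_to_payload(sample, frame_id=42),
                   "How many vehicles are visible, and where?"))

The point of the JSON stage is that the LLM never sees pixels, only a small, predictable payload, which keeps both latency and output variance down.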
A big theme is why on-device matters in real deployments: bandwidth and latency ceilings, intermittent connectivity, and privacy constraints that make “ship everything to the cloud” fragile or expensive. The AIR-055 targets production form factors with local Linux-style development and debugging, and the IQ9 platform is positioned to run billion-parameter-class models at practical response times using heterogeneous acceleration (CPU/GPU plus the Hexagon NPU) when the workload fits.
The demo generalizes beyond cars: the same vision-to-LLM handoff maps to PPE detection, safety and incident triage, vehicle crash detection, inspection workflows, and retail or public-safety analytics. The key technical takeaway is model cascading: let a fast detector or VLM front end compress pixels into a clean semantic summary, then let the LLM reason over that summary to produce audit-friendly results at the edge.
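One way to express that cascade, again with hypothetical placeholders (run_detector and ask_llm stand in for the deployed Edge Impulse model and the on-device LLM runtime):

from typing import Callable

def cascade(frame, run_detector: Callable, ask_llm: Callable,
            question: str, min_conf: float = 0.6):
    # Stage 1: a cheap detector acts as a gate over every frame.
    detections = [d for d in run_detector(frame) if d["score"] >= min_conf]
    if not detections:
        return None  # nothing relevant: skip the LLM entirely and save compute
    # Stage 2: the LLM only ever reasons over the compact semantic summary.
    summary = "; ".join(f"{d['label']} ({d['score']:.2f})" for d in detections)
    return ask_llm(f"Scene contains: {summary}. {question}")

# Stand-in callables so the sketch runs end to end:
print(cascade(
    frame=None,
    run_detector=lambda f: [{"label": "vehicle", "score": 0.88}],
    ask_llm=lambda prompt: f"(LLM response for: {prompt})",
    question="Is there visible damage?",
))

The gating step is what makes the pattern cheap at the edge: the expensive generative model only runs on frames the fast detector has already flagged as interesting.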
I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued over the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁
Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY