Synaptics has introduced the SR100, part of its new SR series of AI microcontrollers designed for vision- and audio-based inferencing. The SR100 integrates an Arm Cortex-M55 core and an Ethos-U55 neural processing unit (NPU), alongside a Cortex-M4 core and Synaptics’ proprietary NPU. This combination enables efficient on-device AI processing, making the chip well suited to applications that require intelligent perception, such as video doorbells, security cameras, and smart audio recognition. More details can be found at https://www.synaptics.com
—
Synaptics is my Embedded World 2025 video coverage sponsor; check out my Synaptics videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhAbQoe9YN4c84SqXxIY3fQ
—
A key innovation in the SR100 is its power-efficient architecture, which operates in multiple “gears” depending on the computational load. In ultra-low-power mode, the device can sample images at predefined frame rates and store them in a buffer. A dedicated change-detection circuit analyzes consecutive frames to identify movement, allowing the system to selectively wake up for further processing. This hierarchical approach optimizes battery life while maintaining responsiveness, making it particularly useful for battery-powered AI applications.
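To make the idea concrete, here is a minimal software sketch of frame-differencing change detection in C. On the SR100 this step is handled by the dedicated circuit, and the frame size and thresholds below are illustrative assumptions rather than Synaptics’ implementation.

```c
/* Minimal sketch of frame-differencing change detection.
 * Assumes small grayscale frames; thresholds are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define FRAME_W        96
#define FRAME_H        96
#define PIXEL_DELTA    16   /* per-pixel brightness change that counts (0-255) */
#define CHANGED_PIXELS 300  /* how many changed pixels count as "movement"     */

/* Compare the newest buffered frame against the previous one and report
 * whether enough pixels changed to justify waking the higher-power cores. */
static bool frame_changed(const uint8_t *prev, const uint8_t *curr)
{
    uint32_t changed = 0;

    for (uint32_t i = 0; i < FRAME_W * FRAME_H; i++) {
        if (abs((int)curr[i] - (int)prev[i]) > PIXEL_DELTA) {
            changed++;
        }
    }
    return changed > CHANGED_PIXELS;
}
```

Only when something like frame_changed() fires would the system shift into a higher “gear” and run a real model, which is the whole point of the hierarchical approach.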
For vision-based applications, the SR100 supports multiple camera interfaces, including MIPI CSI, parallel, and SPI inputs. The device can execute inference tasks such as object detection, facial recognition, barcode scanning, and scene segmentation. Developers can configure these capabilities through programmable state machines that define wake-up triggers based on visual input. This flexibility allows devices to remain in a near-idle state until an event of interest is detected, at which point higher-power processing can be activated.
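As a rough illustration of that configuration model, the sketch below expresses a wake-up flow as a tiny state machine in C. The three “gear” states and the next_state() function are hypothetical and only stand in for whatever the actual Synaptics configuration interface provides.

```c
/* Hypothetical wake-up state machine with three power "gears".
 * Names and structure are illustrative, not the Synaptics SDK. */
#include <stdbool.h>

typedef enum {
    STATE_WATCH,    /* ultra-low-power image sampling                  */
    STATE_CONFIRM,  /* change detected: run a lightweight check        */
    STATE_INFER     /* wake Cortex-M55 + Ethos-U55 for the full model  */
} app_state_t;

static app_state_t next_state(app_state_t s, bool motion, bool person)
{
    switch (s) {
    case STATE_WATCH:   return motion ? STATE_CONFIRM : STATE_WATCH;
    case STATE_CONFIRM: return person ? STATE_INFER   : STATE_WATCH;
    case STATE_INFER:   return STATE_WATCH;  /* image captured, back to sleep */
    }
    return STATE_WATCH;
}
```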
Audio AI is another key capability, with support for up to four Pulse Density Modulation (PDM) microphones. The SR100 can process voice activity detection at the hardware level, enabling efficient always-listening modes for wake-word detection or event-triggered audio recording. Because this processing happens at the edge, without cloud dependency, privacy-sensitive applications can leverage AI without transmitting raw data externally.
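Here is a minimal software sketch of that gating idea, assuming a simple energy-based VAD and 20 ms PCM frames at 16 kHz. On the SR100 the VAD runs in hardware, and run_wake_word_model() below is a hypothetical stub, not a Synaptics API.

```c
/* Energy-based voice activity detection gating a wake-word model.
 * Thresholds are illustrative; the real VAD is done in hardware. */
#include <stdbool.h>
#include <stdint.h>

#define FRAME_SAMPLES    320        /* 20 ms of audio at 16 kHz  */
#define ENERGY_THRESHOLD 500000LL   /* illustrative speech level */

static bool voice_activity(const int16_t *pcm, int n)
{
    int64_t energy = 0;
    for (int i = 0; i < n; i++) {
        energy += (int64_t)pcm[i] * pcm[i];
    }
    return (energy / n) > ENERGY_THRESHOLD;
}

/* Hypothetical placeholder for the real wake-word inference. */
static bool run_wake_word_model(const int16_t *pcm, int n)
{
    (void)pcm;
    (void)n;
    return false;
}

/* Spend cycles (and power) on the wake-word model only when VAD fires. */
void audio_frame_ready(const int16_t *pcm)
{
    if (voice_activity(pcm, FRAME_SAMPLES)) {
        (void)run_wake_word_model(pcm, FRAME_SAMPLES);
    }
}
```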
By using an energy-efficient mix of hardware logic and Arm-based processing cores, the SR100 enables a wide range of intelligent embedded applications. Security cameras, for instance, can differentiate between general movement and human presence, while wearable devices can analyze environmental audio cues to provide smart assistance. The AI capabilities of the M55 and U55 make real-time inferencing possible within the constraints of embedded devices, opening up new use cases for perceptive AI.
Developers can scale inference complexity based on available RAM and flash memory. Model size is ultimately bounded by that memory, so the SR100 is designed to run lightweight yet effective models efficiently. The combination of Cortex-M55 and Ethos-U55 enables advanced edge processing while keeping power consumption low, making it a strong candidate for widespread IoT adoption.
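For a sense of what that budgeting looks like in practice, here is a sketch in the style of a TensorFlow Lite Micro tensor arena, a common pattern on Ethos-U55 targets; the SRAM and arena sizes are made-up placeholders, and the actual SR100 SDK may organize memory differently.

```c
/* Sketch of memory budgeting for an on-device model, in the style of a
 * TensorFlow Lite Micro tensor arena. All sizes are made-up placeholders. */
#include <assert.h>
#include <stdint.h>

#define AVAILABLE_SRAM_BYTES (512 * 1024)  /* assumed on-chip SRAM budget    */
#define TENSOR_ARENA_BYTES   (256 * 1024)  /* scratch for activations & I/O  */

/* Weights usually stay in flash; activations live in this SRAM arena.
 * If the arena no longer fits, pick a smaller or more heavily quantized
 * model rather than a bigger chip. */
static_assert(TENSOR_ARENA_BYTES <= AVAILABLE_SRAM_BYTES,
              "model scratch memory exceeds available SRAM");

static uint8_t tensor_arena[TENSOR_ARENA_BYTES] __attribute__((aligned(16)));

uint8_t *model_scratch(void) { return tensor_arena; }
```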
A demonstration at Embedded World 2025 showcased the SR100 in action, detecting human presence and capturing images only when relevant activity was detected. The oscilloscope readout illustrated the system’s power efficiency, with the device consuming just 2.6 mA in its low-power monitoring state. This capability is crucial for battery-powered applications where continuous operation is required without draining energy resources.
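Purely as a back-of-the-envelope illustration (the battery capacity here is an assumption, not from the demo): a hypothetical 3,000 mAh cell divided by a 2.6 mA monitoring draw works out to roughly 1,150 hours, or about 48 days, of continuous monitoring before wake-up events and other system loads are taken into account.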
The future of embedded AI lies in perceptive, energy-efficient systems capable of making intelligent decisions without cloud dependence. Synaptics envisions a world where AI-powered devices seamlessly enhance daily life, from elder care monitoring to advanced security systems. As more industries integrate embedded AI, solutions like the SR100 will play a pivotal role in making on-device intelligence both practical and accessible.
Check out all my Embedded World 2025 videos in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Join https://www.youtube.com/charbax/join for Early Access to my videos and to support my work, or click the “Super Thanks” button below the video to send a highlighted comment!