
CodeWrights at Embedded World 2025 #ew25 Embedded Linux Services, Cyber Resilience Act Consulting

Posted by – March 24, 2025
Category: Exclusive videos

At Embedded World 2025, CodeWrights, based in Karlsruhe, Germany, showcased their expertise in software development for measurement device manufacturers within the automation technology sector. Their services encompass the entire product development lifecycle, with a particular emphasis on embedded Linux solutions. This includes assisting clients in integrating and optimizing embedded Linux systems tailored to specific measurement devices, ensuring seamless functionality and performance.


Synaptics is my Embedded World 2025 video coverage sponsor, check out my Synaptics videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhAbQoe9YN4c84SqXxIY3fQ

A significant focus at their booth was the upcoming European Union Cyber Resilience Act (CRA), which entered into force on December 10, 2024, and will be applicable from December 11, 2027. This regulation mandates stringent cybersecurity requirements for products with digital elements, aiming to enhance resilience against cyberattacks across the EU. CodeWrights offers consulting services to help device manufacturers navigate these new regulations, starting with comprehensive gap analyses to identify areas needing compliance improvements.

The company reported engaging with numerous visitors at Embedded World 2025, reflecting the industry’s keen interest in both hardware components and software services. With a team of 50 employees, predominantly software developers, CodeWrights combines technical proficiency with dedicated marketing and sales teams to deliver tailored solutions to their clients.

In addition to their service offerings, CodeWrights is actively expanding its team, seeking new members to join their embedded sector initiatives. This growth aligns with their commitment to addressing the evolving challenges in automation technology and cybersecurity compliance.

Check out all my Embedded World 2025 videos in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Join https://www.youtube.com/charbax/join for Early Access to my videos and to support my work, or click the “Super Thanks” button below the video to send a highlighted comment!

Microchip SAMA7G54 at Embedded World 2025 #ew25 AI-Enhanced Truck Loading Bay Monitoring Demo

Posted by – March 24, 2025
Category: Exclusive videos

At Embedded World 2025, Microchip Technology showcased a truck loading bay monitoring demonstration, highlighting the capabilities of their SAMA7G54 microprocessor. This Arm Cortex-A7-based MPU, operating up to 1GHz, integrates advanced imaging and audio subsystems, including a MIPI CSI-2 camera interface, facilitating real-time object detection and machine learning applications.



The demonstration utilized Microchip’s Ampulse toolset to develop a custom machine learning model capable of detecting trucks within the loading bay. Each detected truck was represented by a blue point on the display, turning the corresponding spot red when a truck occupied it, thereby providing an intuitive visualization of the monitoring system’s functionality.
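The blue/red spot logic described above can be sketched in a few lines. This is an illustrative reconstruction only: the bay coordinates, detection format, and function names below are invented for the example and are not Microchip's actual demo code.

```python
# Each loading bay is an axis-aligned rectangle: (x_min, y_min, x_max, y_max).
# A bay's point turns "red" while a detected truck's centre falls inside it.
BAYS = {
    "bay_1": (0, 0, 100, 50),
    "bay_2": (0, 60, 100, 110),
    "bay_3": (0, 120, 100, 170),
}

def bay_colors(detections):
    """Map detected truck centre points to a colour per bay.

    detections: iterable of (x, y) centre points from the ML model.
    Returns {bay_name: "red" | "blue"}.
    """
    colors = {name: "blue" for name in BAYS}  # all bays free by default
    for x, y in detections:
        for name, (x0, y0, x1, y1) in BAYS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                colors[name] = "red"  # bay occupied
    return colors

# One truck parked in bay_2:
print(bay_colors([(50, 80)]))
```

In the real demo this mapping would run per video frame, with the detections coming from the model executing on the SAMA7G54.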

In addition to the truck loading bay monitoring demo, Microchip’s booth featured various other interactive exhibits. Notably, a setup incorporating Lego structures and a multitude of sensors showcased the versatility and integration capabilities of Microchip’s embedded solutions in diverse applications.

The SAMA7G54 microprocessor supports up to 2GB of DDR memory and includes interfaces such as dual Ethernet ports (Gigabit and 10/100), multiple CAN-FD channels, and high-speed USB connections. These features make it suitable for industrial and automotive applications requiring robust connectivity and real-time data processing.

Microchip’s commitment to providing comprehensive development support is evident through their SAMA7G54-EK evaluation kit. This kit offers connectors and expansion headers for easy customization, facilitating rapid prototyping and integration into various embedded systems.

The integration of advanced peripherals, such as the MIPI CSI-2 camera interface, allows developers to implement low-power stereo vision applications with enhanced depth perception. This capability is particularly beneficial for applications in machine vision and automation, where accurate environmental mapping is crucial.

Microchip’s participation in Embedded World 2025 underscores their dedication to advancing embedded control solutions. By offering products like the SAMA7G54, they enable developers to create efficient, high-performance applications across various industries, from industrial automation to consumer electronics.

For more information on Microchip’s products and solutions, visit their official website: https://www.microchip.com


EDGE AI FOUNDATION at Embedded World 2025: Advancements in Edge Computing and AI Integration

Posted by – March 23, 2025
Category: Exclusive videos

Pete Bernard, Executive Director of the EDGE AI FOUNDATION, discusses the organization’s mission and activities. The EDGE AI FOUNDATION, formerly known as the tinyML Foundation, is a global non-profit community dedicated to innovation, collaboration, advocacy, and education in energy-efficient, affordable, and scalable edge AI technologies. Its goal is to democratize edge AI technology, making it accessible and impactful for all while fostering sustainability and responsible practices.



The foundation engages in various initiatives, including technical and business working groups that focus on best practices in areas like audio AI, neuromorphic computing, and generative AI on the edge. It collaborates with technology partners such as Qualcomm, Edge Impulse, and NXP to drive innovation in the edge AI space. The foundation also emphasizes education, developing curricula for universities, offering scholarships, and teaching end-users and companies about the potential of edge AI technologies.

At Embedded World 2025, held from March 11-13 in Nuremberg, Germany, the EDGE AI FOUNDATION showcased their commitment to connecting AI to real-world applications. Their booth featured the winners of the BLUEPRINT Awards for outstanding edge AI solution deployments and organized a scavenger hunt to highlight AI’s presence in various locations around the event. They also sponsored the IoT Stars event, where Pete Bernard participated in a fireside chat, and hosted technical talks on state-of-the-art developments in edge AI.

The foundation’s rebranding from tinyML to EDGE AI FOUNDATION reflects the rapid evolution and expanding scope of edge AI technologies. This change signifies their dedication to embracing the enormous potential for edge AI in real-world applications and uniting diverse industry leaders, researchers, and practitioners to drive collective progress.

Through partnerships with academia and industry, the EDGE AI FOUNDATION aims to bridge the gap between research and practical deployment. They focus on providing resources such as high-quality datasets, models, and code to support the development of small neural networks tailored for specific tasks. This approach ensures that advancements in edge AI technology benefit society and the environment.

The foundation also emphasizes responsible AI practices, supporting efforts for sustainable and ethical AI through collaborations with NGOs and partner organizations. By fostering a diverse community and sharing knowledge, they aim to inspire breakthroughs and unlock opportunities across various industries.

Their global events, such as the upcoming EDGE AI FOUNDATION Austin 2025, provide platforms for industry experts, researchers, and enthusiasts to connect, innovate, and deploy cutting-edge edge AI technologies. These gatherings highlight how edge AI is driving agile, adaptable, and powerful solutions across various sectors.

The EDGE AI FOUNDATION continues to be a pivotal force in the edge AI community, driving innovation and collaboration to shape the future of AI at the edge. Their efforts ensure that edge AI technologies are not only advanced but also accessible and beneficial to a broad spectrum of applications and industries.


Arcane Four TeleCANesis at Embedded World 2025: Streamlined data integration with CAN, MQTT, ZeroMQ

Posted by – March 23, 2025
Category: Exclusive videos

At Embedded World 2025, Arcane Four unveiled TeleCANesis, a tool designed to streamline data transfer across various systems and transport protocols. This solution minimizes the need for repetitive boilerplate code, enabling rapid setup of connections and ensuring seamless data flow within applications. The demonstration showcased TeleCANesis operating on a Linux system powered by the i.MX 8M Plus processor.



TeleCANesis offers two primary tools: a web-based system architecture interface and an extension for Visual Studio Code (VS Code). The web-based tool allows system architects to design and map out data flow by creating “blueprints.” Users can drag and drop various “capsules,” each representing different system components, such as QNX capsules or Linux cloud capsules. Connectors like CAN receivers, MQTT receivers, and ZeroMQ transmitters facilitate data routing between these capsules, ensuring efficient communication across the system.

For engineers, the VS Code extension provides a more hands-on approach. By importing messages from a DBC file—the format commonly used in automotive contexts to define CAN bus messages—engineers can create internal representations that the TeleCANesis engine understands. This involves setting up receivers and transmitters, such as a SocketCAN receiver on Linux and a Storyboard IO transmitter for user interfaces. The integration within VS Code gives engineers a familiar environment in which to configure and manage data flows.
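To illustrate what a DBC message definition actually describes, here is a minimal decoder for one signal inside an 8-byte CAN frame. The "EngineSpeed" layout (start bit, length, scale) is invented for the example; it is not taken from TeleCANesis or any real DBC file.

```python
def decode_signal(data: bytes, start_bit: int, length: int,
                  scale: float, offset: float) -> float:
    """Extract a little-endian (Intel byte order) unsigned signal
    from a CAN frame and convert it to a physical value."""
    raw = int.from_bytes(data, "little")            # frame as one integer
    value = (raw >> start_bit) & ((1 << length) - 1)  # slice out the signal bits
    return value * scale + offset

# Hypothetical EngineSpeed: 16 bits starting at bit 24, 0.125 rpm per bit.
frame = bytes([0x00, 0x00, 0x00, 0x10, 0x27, 0x00, 0x00, 0x00])
rpm = decode_signal(frame, start_bit=24, length=16, scale=0.125, offset=0.0)
print(rpm)  # raw 0x2710 = 10000 -> 1250.0 rpm
```

A tool like TeleCANesis automates exactly this kind of boilerplate, generating the receive/decode/route plumbing from the DBC definitions instead of having engineers hand-write it per signal.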

Arcane Four’s experience in systems integration has led to the development of TeleCANesis, addressing the challenges of modern, complex systems that require multiple interconnected components, including cloud servers and various devices. By reducing the need for repetitive coding, TeleCANesis enhances development efficiency and ensures consistent data flow across diverse platforms.

The company’s business model centers on providing development tools like TeleCANesis, granting users access to these advanced features. Arcane Four is based in Ottawa, Canada, and continues to focus on bridging the gap between hardware and software in embedded systems.


Clarinox at Embedded World 2025 #ew25 BLE Channel Sounding, Auracast, Wi-Fi 6 & more

Posted by – March 23, 2025
Category: Exclusive videos

At Embedded World 2025, Clarinox Technologies, headquartered in Melbourne, Australia, showcased its advanced wireless protocol stacks, including Bluetooth Low Energy (BLE), Bluetooth Classic, and Wi-Fi. The company’s Chief Technology Officer, Gokan Teri, highlighted their focus on channel sounding—a cutting-edge BLE technology for precise distance measurement and location tracking based on time-of-flight principles.



Clarinox’s protocol stacks are designed for compatibility with various chipsets. For Bluetooth applications, they support any controller adhering to the Host Controller Interface (HCI) standard. For Wi-Fi, Clarinox collaborates with NXP and Texas Instruments, utilizing chipsets such as NXP’s RW612 and Texas Instruments’ CC33xx series, which integrate application processors with Wi-Fi 6 and BLE 5.4 capabilities.

A notable feature demonstrated was Auracast, a Bluetooth streaming technology enabling synchronized audio transmission to multiple receivers without the need for individual connections. This connectionless approach, akin to multicast streaming, is ideal for public venues, allowing users to seamlessly receive audio streams and manage personal communications, such as incoming calls, without disruption.

Clarinox emphasizes robust partnerships across operating system providers, chip manufacturers, and module producers to ensure seamless integration and performance of their wireless solutions. Their debugging tool, ClariFi, enhances development efficiency by capturing and visualizing complex scenarios, such as Wi-Fi mesh networks, and supports audio quality analysis by recording audio streams in various codecs, including the latest LC3 codec.

The company’s core offerings include licensing their Bluetooth and Wi-Fi protocol stacks, providing clients with fully functional applications that operate out-of-the-box. This approach allows customers to bypass intricate low-level configurations, expediting development timelines and reducing complexity.

Beyond their Melbourne headquarters, Clarinox maintains offices in Chennai, India, and Izmir, Turkey, reflecting their global presence and commitment to supporting a diverse client base across multiple regions.


Imagination Technologies at Embedded World 2025: Open-Source GPU Drivers, Virtualization, AI

Posted by – March 22, 2025
Category: Exclusive videos

At Embedded World 2025, Imagination Technologies showcased a range of advancements in GPU technology and open-source initiatives. One highlight was a demonstration of a large language model (LLM) implemented by a partner on an xdx PCI Express card. This setup utilized a general-purpose GPU (GPGPU) with a compute pipeline, offering a more affordable and flexible alternative to larger GPUs in server environments.



Another significant showcase was the introduction of Imagination’s open-source driver stack. Demonstrated on the BeaglePlay board from the BeagleBoard.org Foundation, this low-cost, low-power single-board computer features Texas Instruments’ Sitara AM625 SoC, which integrates Imagination’s AXE-1-16M GPU. The open-source driver stack provides native Vulkan support, with translation layers such as Zink providing OpenGL ES compatibility on top of it.
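For context, Mesa's Zink layer can usually be selected with a standard Mesa environment variable. Whether a given BeaglePlay image needs this override, and which GL test client is installed, depends on how the distribution built Mesa; the glmark2 client below is an assumption for illustration.

```shell
# Force Mesa to route OpenGL / GLES calls through Zink, which translates
# them to Vulkan calls handled by the open-source PowerVR Vulkan driver.
# MESA_LOADER_DRIVER_OVERRIDE is a standard Mesa environment variable.
MESA_LOADER_DRIVER_OVERRIDE=zink glmark2-es2-wayland
```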

Imagination also presented a driver monitoring system leveraging video processing to assess driver attentiveness. This system employs OpenCL compute libraries running on the GPU’s compute pipeline to execute AI software, highlighting the GPU’s versatility in safety-critical applications.

Additionally, the company showcased hardware GPU virtualization capabilities using Texas Instruments’ AM69 SoC, which houses the BXS-4-64 GPU. This demonstration featured multiple applications running on separate hardware interfaces of the GPU, known as hyperlanes. These dedicated hardware interfaces allow direct communication between the GPU and virtual machines, ensuring near-native performance without the need for software-based virtualization.

Imagination’s commitment to open-source development was further underscored by their release of open-source drivers for their PowerVR Rogue architecture GPUs. This initiative enables developers and OEMs to have greater control over their graphics software stacks, promoting flexibility and long-term support across various platforms.

The company’s engagement with the RISC-V ecosystem was also evident, as they have become a preferred choice for RISC-V SoCs. This collaboration aims to bring advanced graphics capabilities to RISC-V platforms, expanding the reach of Imagination’s GPU technologies.

Throughout the event, Imagination Technologies emphasized their focus on delivering flexible, high-performance GPU solutions tailored for embedded systems, industrial applications, and AI workloads. Their open-source initiatives and hardware virtualization features position them as a key player in the evolving landscape of embedded computing.


Morse Micro Wi-Fi HaLow at #ew25 MM6108 and MM8102 SoCs, HaLowLink 1 Router, VT-USB-AH-8108 Dongle

Posted by – March 22, 2025
Category: Exclusive videos

At Embedded World 2025, Morse Micro showcased its advancements in Wi-Fi HaLow technology, emphasizing its potential to revolutionize IoT connectivity. Wi-Fi HaLow operates in the sub-1 GHz ISM bands, specifically 863-868 MHz in Europe and 902-928 MHz in the Americas, offering enhanced range and building penetration compared to traditional Wi-Fi frequencies. This technology leverages Wi-Fi modulation techniques to provide higher throughput than protocols like LoRa, supporting data rates up to 32.5 Mbps.
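The throughput gap over LoRa is easiest to see as ideal transfer times. The 32.5 Mbps figure is the HaLow peak rate quoted above; the 50 kbps LoRa rate is an assumed typical value for illustration, and real-world throughput is lower than PHY rate in both cases, so treat these as lower bounds.

```python
def transfer_seconds(payload_bytes: int, rate_bps: float) -> float:
    """Idealized time to move a payload at a given PHY rate (no overhead)."""
    return payload_bytes * 8 / rate_bps

image = 1_000_000  # e.g. a 1 MB camera snapshot

halow = transfer_seconds(image, 32.5e6)  # ~0.25 s at HaLow's peak rate
lora = transfer_seconds(image, 50e3)     # ~160 s at an assumed 50 kbps
print(f"HaLow: {halow:.2f} s, LoRa: {lora:.0f} s")
```

This is why HaLow targets use cases like camera-based IoT that combine sub-GHz range with throughput LoRa-class links cannot deliver.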



Morse Micro’s MM6108 System-on-Chip (SoC) integrates radio, PHY, and MAC functions in compliance with the IEEE 802.11ah standard. This single-chip solution supports flexible RF interfaces, allowing for on-chip amplification or the use of external power amplifiers and front-end modules for ultra-long-reach applications. The MM6108’s efficient design ensures extended sleep times and reduced power consumption, making it ideal for battery-operated IoT devices.

The company also introduced the MM8102 Wi-Fi HaLow SoC, tailored for the European and Middle Eastern markets. Optimized for 1 MHz and 2 MHz bandwidths with 256-QAM modulation, the MM8102 achieves throughputs up to 8.7 Mbps. Operating in the sub-GHz ISM bands, it offers greater range and signal penetration than conventional Wi-Fi networks. The MM8102 complies with regional regulatory requirements, simplifying development for IoT device manufacturers.

For developers, Morse Micro offers evaluation platforms like the HaLowLink 1, a combined Wi-Fi 4 and Wi-Fi HaLow router. This platform enables easy assessment of Wi-Fi HaLow’s capabilities and facilitates integration into existing infrastructures. Additionally, the MM6108-EKH03 development platform provides seamless connectivity and supports various applications, ensuring security and reliability in IoT deployments.

Morse Micro’s technology has been integrated into products from various partners. For instance, Vantron’s VT-USB-AH-8108 Wi-Fi HaLow dongle, powered by the MM8108 chipset, delivers up to 43 Mbps connectivity and is designed for plug-and-play integration into existing systems. This collaboration highlights the growing demand for long-range, low-power connectivity solutions in the IoT ecosystem.

The company’s Wi-Fi HaLow solutions are particularly beneficial for applications requiring extended range and robust connectivity, such as industrial automation, smart metering, and perimeter security. By operating in the sub-1 GHz bands, Wi-Fi HaLow ensures reliable performance in challenging environments, making it suitable for both indoor and outdoor IoT applications.

Morse Micro’s commitment to advancing Wi-Fi HaLow technology positions it as a leader in the IoT connectivity landscape. By addressing the limitations of traditional Wi-Fi and offering scalable, energy-efficient solutions, the company is set to drive the next wave of IoT innovations across various sectors.


Raspberry Pi at Embedded World 2025: Compute Module 5, AI Hat Plus, IMX500 Sensor, Hailo Accelerator

Posted by – March 22, 2025
Category: Exclusive videos

Raspberry Pi presented its latest Compute Module 5 (CM5) at Embedded World 2025, highlighting significant enhancements over the Compute Module 4. The CM5 offers double the performance through updated hardware and incorporates additional peripherals, providing improved flexibility for diverse applications. Raspberry Pi also displayed a dedicated Compute Module I/O board designed specifically for seamless integration with the CM5, encouraging developers to build custom I/O boards tailored to their specific requirements. More details are available at https://www.raspberrypi.com.



In addition, Raspberry Pi showcased its advanced AI imaging solutions, prominently featuring the Sony IMX500 sensor-based AI cameras. These cameras integrate AI processing directly on-chip, enabling real-time applications such as person counting and object classification without external processing units. The demonstrated products highlighted edge-based AI acceleration, significantly optimizing resource usage and enhancing privacy by processing data locally.

Another key feature introduced was the AI Hat Plus equipped with a Hailo AI accelerator, specifically compatible with the Raspberry Pi 5. This solution significantly enhances machine-learning capabilities, including facial recognition, and can utilize various camera inputs, ranging from official Raspberry Pi camera modules to standard USB webcams. The demonstration at the booth included live facial recognition with instant identification using locally trained data.

The Raspberry Pi booth further emphasized real-world applications, featuring various partner implementations utilizing Raspberry Pi hardware for industrial and commercial environments. These included robust industrial solutions like the Revolution Pi, known for its reliability in demanding conditions. Additionally, practical deployments such as digital signage systems demonstrated Raspberry Pi’s adaptability and widespread acceptance in diverse industry segments.

A technical issue prevented live demonstrations of Raspberry Pi’s microcontroller offerings, which complement the existing product lineup by addressing low-power, cost-effective applications where a full-fledged computer would be excessive. Raspberry Pi continues its mission of enabling efficient solutions across embedded computing scenarios, from edge computing to complex AI-driven tasks.

The integration of Sony IMX500 image sensors into Raspberry Pi’s AI products underscores their push towards embedded vision applications, particularly for edge inference. The IMX500 cameras execute both image sensing and AI inference entirely on-chip, enabling efficient applications such as smart surveillance, retail analytics, and automated person counting without external processing overhead.

Industrial partners also prominently featured Raspberry Pi-based solutions at the event, reflecting the ecosystem’s maturity and Raspberry Pi’s strong foothold in industrial contexts. These partnerships underline a significant shift toward deploying Raspberry Pi hardware beyond educational and hobbyist sectors, emphasizing robust, scalable industrial applications where reliability, affordability, and flexibility are paramount.

Overall, Raspberry Pi’s presentation at Embedded World 2025 emphasized its strategic focus on AI acceleration, performance improvement, and versatility, aiming to extend the capabilities and appeal of their hardware solutions across professional and industrial applications. The introduction of specialized hardware like the Hailo accelerator alongside the Compute Module 5 reflects Raspberry Pi’s continued evolution to meet increasingly complex technical demands.


Seeed Studio at Embedded World 2025: Edge Computing, AIoT Solutions, reComputer Series, BeaglePlay

Posted by – March 21, 2025
Category: Exclusive videos

At Embedded World 2025, Seeed Studio showcased its latest advancements in edge computing and AIoT solutions. A notable highlight was their chatbot device, capable of processing natural language queries and delivering responses both audibly and visually. This integration leverages cloud connectivity and large language models to facilitate seamless human-computer interactions.



Another prominent exhibit was the manipulation robotic arm. Although a technical issue prevented a live demonstration, the arm is designed to mimic human gestures through manual training via a handling bar. This approach allows the robotic arm to learn and replicate complex movements, enhancing its utility in automation tasks.

Seeed Studio also introduced the reComputer series, powered by NVIDIA Jetson platforms. These edge computing devices are tailored for AI applications, offering robust processing capabilities in a compact form factor. They serve as versatile solutions for developers aiming to deploy machine learning models at the edge, addressing the growing demand for localized data processing.

In collaboration with BeagleBoard.org and Texas Instruments, Seeed Studio has developed the BeaglePlay and BeagleV boards. These platforms cater to the open-source community, providing flexible hardware solutions for various applications, from educational tools to industrial projects. The BeagleV, in particular, features a RISC-V architecture, reflecting the industry’s shift towards open instruction set computing.

The XIAO series, known for its compact design, was also on display. These microcontroller units, based on chipsets like the RP2040 and ESP32, offer a balance between size and functionality. They are ideal for wearable devices, DIY keyboards, and other projects where space is a constraint but performance remains crucial.

Seeed Studio’s Grove ecosystem continues to expand, offering a modular approach to sensor integration. This plug-and-play system simplifies the process of adding sensors to projects, making it accessible for both beginners and seasoned developers. The ecosystem supports a wide range of sensors, from environmental monitoring to motion detection.

During the event, Seeed Studio engaged with numerous system integrators and distributors, reflecting its commitment to collaboration and community building. These interactions underscore the company’s role in fostering innovation through partnerships and knowledge sharing.

For more information on Seeed Studio’s products and services, visit their official website: https://www.seeedstudio.com/


Synaptics LE Audio at Embedded World 2025: 4382 Triple Combo Chip, Bluetooth, LC3 Codec, Thread IoT

Posted by – March 21, 2025
Category: Exclusive videos

Synaptics demonstrated their latest Bluetooth connectivity solution, the 4382 triple combo chip, featuring Wi-Fi, Bluetooth, and Thread capabilities. This demo particularly highlighted Bluetooth LE Audio, showcasing the chip’s capacity to handle multiple audio streams simultaneously, each in different languages. Such capability enables users to select personalized audio tracks directly through their Bluetooth LE Audio-compatible headsets, making it suitable for multilingual media consumption. Synaptics provides more information about their technology at https://www.synaptics.com.


Synaptics is my Embedded World 2025 video coverage sponsor, check out my Synaptics videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhAbQoe9YN4c84SqXxIY3fQ

The practical demonstration showed content streamed to a headset in real time, switching languages instantly on demand. An example given was a viewer watching a TV show or movie originally in English, who could choose to hear Spanish or German audio instead. This technology is particularly useful in scenarios like hosting international guests, allowing viewers to enjoy localized audio content without relying on subtitles.
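The stream-selection step can be pictured as little more than metadata matching. A toy sketch (the stream IDs and language codes are invented for illustration, not taken from the demo; real LE Audio broadcasts expose comparable language metadata through the Basic Audio Profile):

```python
# Toy model of an LE Audio broadcast carrying parallel language streams.
# A headset selects one stream by its language metadata.
broadcast_streams = {
    0: {"lang": "en", "desc": "English original"},
    1: {"lang": "es", "desc": "Spanish dub"},
    2: {"lang": "de", "desc": "German dub"},
}

def select_stream(preferred_lang, streams, fallback=0):
    """Return the ID of the stream matching the listener's language,
    falling back to the default (original) stream if unavailable."""
    for sid, meta in streams.items():
        if meta["lang"] == preferred_lang:
            return sid
    return fallback
```

A German-speaking guest's headset would pick stream 2, while a request for an unavailable language falls back to the English original.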

Beyond entertainment, Bluetooth LE Audio has significant implications for gaming. Modern gaming consoles increasingly integrate social audio streams alongside gameplay audio, allowing gamers to hear in-game sounds and simultaneously communicate clearly with remote teammates. LE Audio supports this dual audio streaming with low latency, enhancing both interactivity and immersive gameplay experiences.

Additionally, LE Audio technology can be utilized in digital TVs and set-top boxes, expanding the possibilities for home multimedia setups. Synaptics emphasized that the technology is designed to meet the demands of high-end multimedia applications, enabling seamless synchronization of audio and video content across multiple devices. This integration ensures clear audio transmission, efficient bandwidth usage, and reduced latency.

The Synaptics 4382 chipset’s support for the Thread protocol also positions it favorably within smart home ecosystems. Thread offers a reliable, energy-efficient mesh network for IoT devices, complementing the connectivity provided by Wi-Fi and Bluetooth. This enables broader integration possibilities beyond multimedia, potentially extending into comprehensive smart-home solutions.

Bluetooth LE Audio introduces key advancements like the LC3 codec, which significantly improves audio quality at lower bit rates compared to traditional SBC codecs. This translates into better-sounding audio, extended battery life on wireless devices, and broader compatibility across numerous consumer electronics. With these technical advantages, LE Audio is poised to become the new industry standard for wireless audio transmission.

In summary, Synaptics is actively advancing the adoption of Bluetooth LE Audio through versatile solutions such as their 4382 triple combo chip. By enabling personalized audio streams in various languages, low-latency gaming audio, and high-quality multimedia experiences, LE Audio significantly upgrades how consumers interact with wireless audio technologies. As multimedia demands grow, such innovations will become essential in consumer electronics.


Synaptics Bluetooth Channel Sounding #ew25 Proximity Security Smart Locks Automotive Automation

Posted by – March 21, 2025
Category: Exclusive videos

Anand Roy, Senior Product Line Manager at Synaptics, demonstrates Bluetooth Channel Sounding technology, which accurately measures distance based on packet exchanges between Bluetooth-enabled devices. This precise measurement triggers actions such as locking or unlocking devices depending on proximity. Synaptics showcases a practical demonstration where the screen locks automatically at distances beyond a predefined threshold (1.8 meters), and unlocks as the user moves closer again. For more details on Synaptics’ technology, visit https://www.synaptics.com.


Bluetooth Channel Sounding operates by measuring the time and characteristics of Bluetooth signal packets exchanged between two paired devices, allowing highly accurate estimation of physical distance. This technology has notable implications for convenience, security, and automation. In practice, it enables use-cases like automated device unlocking without manual intervention, significantly streamlining user experiences in various everyday scenarios.

One of the most immediate applications demonstrated involves automatic locking and unlocking of screens. By setting distance thresholds, the system reacts dynamically to the user’s proximity, locking the screen when the paired device moves beyond the defined range and unlocking when it returns within range. In the demonstration, the screen unlocked at around 1 meter (approximately 3 feet), locked when the device moved beyond 1.8 meters, and reliably unlocked again on return.
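A minimal sketch of the auto-lock logic this implies, assuming distance estimates stream in from a Channel Sounding implementation (only the roughly 1 m / 1.8 m thresholds come from the demo; the class itself is illustrative):

```python
# Sketch of a proximity-based auto-lock with hysteresis, assuming
# distance estimates (in meters) arrive from a Bluetooth Channel
# Sounding stack. The gap between the two thresholds prevents
# measurement jitter near a single cutoff from toggling the lock.
UNLOCK_BELOW_M = 1.0   # unlock when the paired device comes this close
LOCK_ABOVE_M = 1.8     # lock when it moves this far away

class ProximityLock:
    def __init__(self):
        self.locked = True  # start secured

    def update(self, distance_m: float) -> bool:
        """Feed one distance estimate; return the resulting lock state."""
        if self.locked and distance_m <= UNLOCK_BELOW_M:
            self.locked = False
        elif not self.locked and distance_m >= LOCK_ABOVE_M:
            self.locked = True
        return self.locked

lock = ProximityLock()
states = [lock.update(d) for d in (2.5, 0.9, 1.4, 1.9, 0.8)]
```

Walking in unlocks at 0.9 m, the screen stays unlocked at 1.4 m (inside the hysteresis band), relocks past 1.8 m, and unlocks again on return.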

The potential for Bluetooth Channel Sounding extends into broader consumer markets, particularly home security and automation. Smart locks equipped with this technology can automatically detect the homeowner’s smartphone, unlocking doors without manual input, and conversely securing the premises automatically as they depart. This provides convenience along with enhanced security, preventing unauthorized access without the appropriate device authentication.

Automotive industries also represent a major sector for Bluetooth Channel Sounding adoption. Modern keyless entry systems rely on proximity detection to unlock vehicles automatically as owners approach, while remaining secured against unauthorized access attempts. Synaptics’ implementation promises improvements in the accuracy and reliability of these systems, potentially replacing or supplementing existing approaches that rely primarily on traditional RFID or proximity sensors.

Another significant advantage of Synaptics’ Bluetooth Channel Sounding technology is its robustness against interference. Unlike systems requiring direct line-of-sight, Bluetooth-based channel sounding remains effective even in environments where devices do not have an unobstructed path to each other. Thus, it performs reliably across varied conditions typical in residential or automotive contexts.

Overall, Bluetooth Channel Sounding technology from Synaptics addresses the increasing market demand for automated, secure, and user-friendly proximity-based interactions. The precise and reliable distance measurement enhances user convenience in numerous applications, from personal devices to home security and automotive entry systems. This technology exemplifies how accurate proximity sensing is becoming integral to smart connectivity ecosystems, providing both seamless user experiences and enhanced security capabilities.


Grinn AstraSOM-1680 SoM with Synaptics SL1680 processor at Embedded World 2025 #ew25

Posted by – March 21, 2025
Category: Exclusive videos

At Embedded World 2025, Grinn unveiled their latest innovation: the AstraSOM-1680, a system-on-module (SoM) built around Synaptics’ Astra SL1680 processor. This processor features a quad-core Arm Cortex-A73 CPU and an 8 TOPS neural processing unit (NPU), designed to deliver high-performance edge AI capabilities. Grinn’s demonstration showcased the module’s ability to recognize various radio frequency (RF) modulations in real-time, highlighting its potential in edge computing applications.


The AstraSOM-1680 integrates the SL1680 processor’s capabilities, including the PowerVR Series9XE GPU, 4GB of LPDDR4 memory, and 16GB of eMMC storage. This configuration ensures efficient handling of complex video inputs and outputs, making it suitable for real-time image processing and AI-driven tasks. The module’s architecture allows for the simultaneous operation of the NPU and CPU cores, ensuring that visual processing tasks do not hinder other system functions.

Grinn’s approach to edge AI emphasizes the integration of advanced processing capabilities directly within devices, reducing latency and enhancing data privacy by minimizing reliance on cloud-based computations. By treating cameras and sensors as intelligent entities, Grinn aims to unlock new applications across various industries. The company’s expertise in designing and producing these modules in Europe, specifically in Poland, ensures adherence to high-quality manufacturing standards.

With 17 years in the market, Grinn has established itself as a reliable partner for clients worldwide. Their team of engineers is dedicated to assisting customers in translating innovative ideas into market-ready products, leveraging the capabilities of modules like the AstraSOM-1680 to accelerate development cycles and improve efficiency.


Synaptics at #ew25 shows Wi-Fi AI Sensing, breathing motion detection, CSI radar ultra wideband IoT

Posted by – March 20, 2025
Category: Exclusive videos

Ananda Roy, Senior Product Line Manager at Synaptics (https://www.synaptics.com), demonstrated their latest AI-enhanced Wi-Fi sensing technology at Embedded World 2025. Utilizing their compact Wi-Fi chip, the 43752, integrated into the Astra Machina development kit, Synaptics showcased a method of environmental sensing through Wi-Fi signal deflection. The chip operates by exchanging packets with a router, interpreting signal reflections to detect the presence and movements of people within a room.


The technology relies on analyzing Channel State Information (CSI) from Wi-Fi signals, creating graphical representations that AI algorithms interpret. Synaptics has developed machine learning models capable of identifying subtle indicators, such as human breathing, which appear as regular, periodic wave patterns. Larger movements produce distinctly wider patterns, allowing clear differentiation between stationary presence and active motion.
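As a toy illustration of how the two signatures differ, the sketch below separates a small periodic "breathing" ripple from a wide "motion" swing by amplitude alone; the production system feeds full CSI data into trained ML models, and the threshold and signal shapes here are invented:

```python
import math

def classify_csi(amplitudes, motion_threshold=1.0):
    """Toy classifier: the peak-to-peak swing of a CSI amplitude trace
    separates small periodic breathing ripples from wide motion swings."""
    swing = max(amplitudes) - min(amplitudes)
    return "motion" if swing > motion_threshold else "breathing"

# Synthetic traces at an assumed 10 Hz CSI report rate:
fs = 10
# ~15 breaths/min as a small 0.25 Hz ripple
breathing = [0.2 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(100)]
# large movement as a wide, faster swing
motion = [2.0 * math.sin(2 * math.pi * 1.5 * t / fs) for t in range(100)]
```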

Ananda explained that the AI model operates with defined confidence thresholds to accurately distinguish between various activities, ensuring reliable detections. Currently, the model is finely tuned to differentiate between stationary breathing and significant movements, but its capabilities extend beyond this initial scope. Through additional training, the AI can identify specific situations, such as falls, multiple occupants, or unique motion signatures, significantly broadening its potential applications in security, healthcare, and smart homes.

A key advantage highlighted by Ananda is the absence of additional hardware requirements. Since Wi-Fi is already standard in most IoT devices, Synaptics’ technology eliminates the need for supplementary sensors like radar or Ultra Wideband (UWB), enabling cost-effective and seamless integration. Devices equipped with existing Wi-Fi capabilities can immediately utilize this advanced sensing feature.

This approach leverages Wi-Fi’s widespread availability, making it practical for diverse environments. Synaptics’ system effectively uses environmental RF signal reflections, providing a non-invasive and privacy-friendly alternative to cameras or intrusive sensors. By harnessing inherent wireless infrastructure, it becomes possible to enhance existing IoT products, offering scalable sensing capabilities without extensive modifications.

Synaptics’ implementation demonstrates how AI-driven interpretations of Wi-Fi signals can significantly expand traditional connectivity roles, transforming passive network hardware into active sensing infrastructure. This technology presents valuable use cases, such as occupancy detection, elder care monitoring, fall detection, and home automation, demonstrating a notable step forward in smart environment interactions.

The demonstration at Embedded World 2025 showcased not only technical feasibility but practical versatility, emphasizing Synaptics’ approach toward integrated, minimally intrusive environmental sensing solutions. As the AI model evolves, Ananda suggested it could be further trained to recognize specific patterns, such as counting occupants or detecting emergency events, enhancing its adaptability to diverse real-world scenarios.

Synaptics’ demonstration emphasized the shift from specialized, single-purpose sensors toward multifunctional use of commonplace technologies, highlighting the potential of Wi-Fi as a robust sensing medium. This advancement supports the integration of more intelligent, responsive environments without significant hardware complexity or cost overhead, representing a notable development for the IoT and connected device industries.


Multi-Beamforming & Noise Reduction on Synaptics SL1620 AI-Driven at Embedded World 2025 #ew25

Posted by – March 20, 2025
Category: Exclusive videos

At Embedded World 2025, Synaptics, in collaboration with partners Eim and the Fraunhofer Institute, showcased a prototype featuring a six-microphone array capable of capturing audio from four distinct directions. This setup leverages advanced beamforming techniques to isolate and enhance audio signals from a targeted direction, effectively suppressing noise from other sources.


Central to this demonstration is the Synaptics SL1620 System-on-Chip (SoC), running a Linux-based operating system. The SL1620 is engineered for embedded applications requiring robust processing capabilities, advanced artificial intelligence (AI) functionalities, and 3D graphics support. Its architecture includes a quad-core Arm® Cortex®-A55 CPU operating at up to 1.9GHz, an Imagination™ BXE-2-32 GPU, and support for dual displays via MIPI DSI interfaces.

The prototype’s microphone array is integrated with AI-driven noise reduction algorithms, enhancing voice clarity by filtering out ambient noise. Users can manually select the desired audio beam or enable automatic beam selection, allowing the system to dynamically track and focus on the primary speaker. This functionality is particularly beneficial in environments such as home offices, call centers, or conference rooms, where multiple conversations may occur simultaneously.
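The beam-selection idea can be sketched with the classic delay-and-sum approach: steer candidate "beams" by time-shifting one microphone against another and keep the steering that maximizes output energy, which is the one aligned with the dominant talker. This two-microphone toy stands in for the six-microphone, AI-driven prototype; all signals and delays below are synthetic:

```python
import math

def delay_and_sum_energy(mic_a, mic_b, delay):
    """Shift mic_b by `delay` samples, sum it onto mic_a, and return the
    energy of the combined signal (higher = better-aligned beam)."""
    out = [a + mic_b[i - delay] for i, a in enumerate(mic_a)
           if 0 <= i - delay < len(mic_b)]
    return sum(x * x for x in out)

def pick_beam(mic_a, mic_b, candidate_delays):
    """Automatic beam selection: keep the steering delay that maximizes
    output energy, i.e. the one aligned with the dominant talker."""
    return max(candidate_delays,
               key=lambda d: delay_and_sum_energy(mic_a, mic_b, d))

# Synthetic scene: a tone reaches mic A three samples before mic B.
src = [math.sin(0.3 * n) for n in range(200)]
mic_a = src[3:]    # closer microphone hears the talker first
mic_b = src[:-3]   # farther microphone hears it 3 samples later
best = pick_beam(mic_a, mic_b, range(-5, 6))
```

The winning delay of -3 samples advances the farther microphone to realign the talker; mapping delays to physical directions additionally requires the array geometry.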

The SL1620 SoC’s comprehensive feature set includes support for various peripherals, audio processing capabilities, and secure boot mechanisms, making it a versatile solution for diverse embedded applications. Its design emphasizes high performance per watt, catering to the demands of modern IoT devices that require efficient and secure processing. citeturn0search0

This demonstration underscores Synaptics’ commitment to advancing edge computing and AI integration in embedded systems, providing developers with the tools to create responsive and intelligent applications. By leveraging the SL1620’s capabilities, developers can implement sophisticated audio processing features, enhancing user experiences across various domains.

For more information on the SL1620 and other Synaptics products, visit their official website: https://www.synaptics.com


Matter on Synaptics 461x family with Astra Dev Kit Zigbee Thread smart home control #ew25

Posted by – March 20, 2025
Category: Exclusive videos

Synaptics has introduced its new 461x family of wireless connectivity modules, which support the Matter standard, as demonstrated at Embedded World 2025. The demonstration was led by Anand Roy, Senior Product Line Manager for Wireless Connectivity, showcasing the Astra development kit. The kit includes Synaptics’ compact 461x module, designed for controlling smart home devices like smart bulbs. The full range of Synaptics’ products and capabilities can be explored further on their official website: https://www.synaptics.com.


The showcased device utilizes the Matter connectivity standard, allowing seamless interoperability across various ecosystems. Matter compliance ensures compatibility between smart home devices from different manufacturers, streamlining user experience and device management. The Astra dev kit controls a smart light bulb, demonstrating real-time interactions and responsiveness via an intuitive application running on a Wi-Fi-connected tablet.

Synaptics’ 461x module supports both Zigbee and Thread protocols, which operate within the IEEE 802.15.4 standard. These low-power wireless technologies are essential for reliable connectivity and efficient device communication in home automation networks. By combining Matter with Zigbee and Thread support, the module ensures versatile deployment scenarios for developers and manufacturers of IoT products.

During the demonstration, the application controlled the connected smart bulb effortlessly. Actions included toggling the bulb’s power, adjusting brightness levels, and altering colors from orange to yellow and blue. The intuitive controls provided by the smart app highlight the usability enhancements enabled by Synaptics’ wireless connectivity modules.
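For reference, the interactions shown map onto standard Matter clusters (On/Off, Level Control, Color Control). With the open-source `chip-tool` controller from the Matter SDK, roughly equivalent commands would look like the following, where node ID `1` and endpoint `1` are placeholders for an already-commissioned bulb:

```shell
# Toggle the bulb's power (On/Off cluster)
chip-tool onoff toggle 1 1

# Set brightness to ~50% (Level Control cluster; level 0-254,
# then transition time, options mask, options override)
chip-tool levelcontrol move-to-level 127 0 0 0 1 1

# Shift the color (Color Control cluster; hue, saturation,
# transition time, options mask, options override)
chip-tool colorcontrol move-to-hue-and-saturation 30 254 0 0 0 1 1
```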

Moreover, Synaptics emphasizes the importance of multimodal control options, demonstrating the 461x module’s capability in handling voice recognition and audio events. This feature expands user interactions beyond conventional apps, enabling voice-based commands for greater convenience in smart home environments. Such capabilities underline Synaptics’ commitment to enhancing smart home automation through versatile and accessible technologies.

The Astra development kit not only illustrates practical use-cases but also provides developers with essential tools for rapid prototyping and development of Matter-compatible IoT devices. The comprehensive functionality embedded in the 461x series processors accelerates the integration of voice-controlled applications into consumer electronics, significantly reducing time-to-market.

Synaptics’ demonstration at Embedded World 2025 underscores the growing industry focus on interoperability and ease-of-use in smart home environments. As consumer adoption of IoT devices continues to grow, Synaptics’ Matter-compatible modules offer significant potential for developers and device manufacturers seeking reliable, scalable connectivity.


Talk to your Dish Washer & Smart Appliances with Voice FAQ LLM built on Synaptics SL1680 at #ew25

Posted by – March 20, 2025
Category: Exclusive videos

Synaptics has developed an innovative approach to integrating large language models (LLMs) into household appliances, demonstrated through a prototype dishwasher interface. By extracting common user inquiries from a dishwasher manufacturer’s user guide, they created a system capable of understanding and responding to voice commands without requiring internet connectivity. This offline functionality ensures user privacy and reliability.


The core of this system is the Synaptics SL1680 processor, a highly integrated AI-native system-on-chip (SoC) optimized for multi-modal consumer, enterprise, and industrial IoT workloads. The SL1680 features a quad-core Arm® Cortex®-A73 64-bit CPU subsystem, a multi-TOPS neural processing unit (NPU), and a high-efficiency GPU for advanced graphics and AI acceleration. These components enable efficient on-device processing of complex AI tasks, such as natural language understanding and speech recognition.

In the prototype, the system performs speech-to-text conversion to interpret user queries. It then vectorizes these queries and matches them against a pre-processed set of frequently asked questions using the NPU’s capabilities. Once the best match is identified, the system generates a voice response through text-to-speech synthesis, providing users with immediate assistance. For example, when asked, “How can I put plastic in my dishwasher?” the system responds with guidance on placing dishwasher-safe plastics on the top rack to avoid melting.
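A toy sketch of that retrieval pipeline (speech-to-text and text-to-speech omitted; the bag-of-words "embedding" stands in for the learned sentence embeddings the NPU would actually run, and the FAQ entries are invented):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real deployment would use learned
    sentence embeddings accelerated on the NPU."""
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return Counter(cleaned.split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented FAQ entries standing in for ones extracted from a user guide.
faq = {
    "Can I put plastic in the dishwasher?":
        "Place dishwasher-safe plastics on the top rack to avoid melting.",
    "How do I clean the filter?":
        "Twist the filter counter-clockwise, rinse it, and reinstall it.",
}

def answer(query):
    """Match a transcribed query against the pre-embedded FAQ set and
    return the stored response (which would then go to text-to-speech)."""
    best = max(faq, key=lambda q: cosine(embed(query), embed(q)))
    return faq[best]
```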

This integration of LLMs into appliances like dishwashers offers potential benefits beyond user convenience. Manufacturers could reduce the volume of calls to customer service centers by enabling appliances to address common user issues directly. Additionally, this technology could be applied to other devices, such as set-top boxes, to assist users in troubleshooting without external support.

The SL1680’s architecture supports running multiple AI models in parallel, allowing for a range of functionalities within a single device. This flexibility means that while the current implementation focuses on specific use cases, future developments could expand the system’s capabilities without significant hardware changes. For instance, users might issue commands like “Do it in a quick mode” without pressing any buttons, streamlining appliance operation.

Localization is another advantage of this technology. The system can be programmed to support multiple languages and understand various accents, making it adaptable to diverse markets. This adaptability ensures that users worldwide can interact naturally with their appliances, enhancing the overall user experience.

Updating the system with new information or responses is straightforward. Manufacturers can deploy updates over the internet or via mobile devices, ensuring that appliances remain current with the latest user guides and troubleshooting procedures. This ease of update allows for continuous improvement of the system’s responsiveness and accuracy.

The cost implications of integrating such advanced AI capabilities into appliances are manageable. Since the SL1680 SoC is designed for efficiency and performance, manufacturers can incorporate these features without substantial increases in production costs. This balance ensures that consumers receive enhanced functionality without a significant price hike.

Incorporating LLMs into household appliances represents a significant step toward more intuitive and user-friendly home environments. By leveraging Synaptics’ SL1680 processor, manufacturers can deliver smart appliances that not only perform their primary functions but also assist users through natural language interactions, setting the stage for a new era of intelligent home devices.


Synaptics Astra Machina SL1680 at Embedded World 2025 #ew25 AI-native IoT processors, open-source

Posted by – March 20, 2025
Category: Exclusive videos

Synaptics showcased its Astra Machina Foundation Series developer kit at Embedded World 2025, part of the Dev Kit Zone organized by Embedded Computing Design. Dave Garrett, VP of Technology and Innovation at Synaptics, presented the Astra Machina, highlighting its substantial potential for developers working with edge computing, IoT, and neural networks. The Astra Machina kits include high-performance SL-Series processors featuring Arm Cortex-A class CPUs, GPUs, and integrated Neural Processing Units (NPUs), delivering up to eight TOPS of neural acceleration for advanced machine learning tasks.


The Astra Machina development kits showcased at Embedded World 2025 run a Linux-based platform built on Yocto, accessible via Synaptics’ Embedded Software SDK (ESSDK) on GitHub. This allows developers quick integration with Python, HuggingFace models, and popular ML frameworks, significantly speeding up the prototyping process. The kits come ready with GPIO, USB, I²C, I²S, HDMI, and display interfaces, making it easy to connect sensors, cameras, microphones, and other external peripherals essential for comprehensive edge applications.

Dave Garrett emphasized the importance of practical, hands-on experience, encouraging developers to experiment broadly with these kits to maximize their potential. At the Dev Kit Zone, hosted by Embedded Computing Design, Synaptics demonstrated real-world applications, including industrial-grade computer vision for automated pill sorting on conveyor systems. The demonstration showed Astra Machina’s capability to perform real-time, accurate image recognition and object classification directly at the edge, underscoring its suitability for various smart industrial and consumer applications.

Wireless connectivity is another key component of Astra Machina kits. Synaptics integrates their proprietary Wi-Fi and Bluetooth modules, enabling seamless IoT connectivity essential for diverse applications from smart homes to industrial automation. This integrated wireless capability helps developers easily deploy comprehensive IoT ecosystems without the need for additional complex hardware configurations.

Synaptics maintains a strong commitment to open-source software and has collaborated with Google Research on open-source compilers for machine learning networks. This partnership supports extensive compatibility with popular neural network frameworks, enabling developers to rapidly deploy sophisticated ML models across Synaptics’ diverse hardware platforms. The intent is clear: facilitating interoperability and simplifying developer experiences by avoiding proprietary lock-ins.

Developers interested in the Astra Machina Foundation Series can purchase kits immediately from authorized distributors like Mouser or directly via Synaptics’ developer portal (https://www.synaptics.com). Garrett emphasized the ease of setup, claiming developers can begin running neural networks within an hour of unboxing. He encouraged users to actively experiment, tweak, and innovate, pointing out that such iterative, hands-on approaches often lead to the most successful and impactful IoT products.

Beyond Astra Machina, Synaptics also previewed upcoming Reference Design Kits (RDKs), targeting ultra-low-power applications, capable of performing visual inference tasks in the milli-amp range. These kits, suitable for battery-powered applications, complement the higher-end performance seen in Astra Machina, offering flexible options to cater to varying edge-computing needs.

Overall, Synaptics’ participation in Embedded World 2025 highlighted its strategic push into powerful, open, and accessible development kits, designed explicitly to accelerate real-world innovation in IoT and edge computing technologies.

Check out all my Embedded World 2025 videos in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Join https://www.youtube.com/charbax/join for Early Access to my videos and to support my work, or you can click the “Super Thanks” button below the video to send a highlighted comment!

Edge AI for Anomaly Detection shown by 42T at Embedded World 2025 #ew25 using Synaptics Astra SL1680

Posted by – March 19, 2025
Category: Exclusive videos

Jamie from 42 Technology (https://www.42technology.com) showcases their application for automated line clearance at Embedded World 2025. Utilizing embedded edge AI technology, this demonstration addresses real-time quality control and anomaly detection specifically designed for pharmaceutical manufacturing processes. The system employs industrial-grade cameras and Synaptics’ Astra SL1680 edge AI processor to monitor pill production lines, accurately detecting contaminants and process irregularities.

The demonstration specifically captures images of pills moving on a conveyor belt, analyzing these visuals instantly to identify anomalies based on color differentiation. Though color is the demonstrated differentiator here, the system’s AI capabilities are flexible, capable of identifying various anomalies or contaminants that pose significant risks in pharmaceutical production and other industrial applications.
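
The color-differentiation step described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not 42 Technology’s actual pipeline: the reference color, distance threshold, and sample values are assumptions.

```python
import math

# Reference color of a "good" pill in RGB (assumed value for illustration).
REFERENCE_RGB = (245, 245, 240)  # off-white pill
THRESHOLD = 60.0                 # assumed Euclidean distance cutoff

def color_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_anomalous(mean_rgb, reference=REFERENCE_RGB, threshold=THRESHOLD):
    """Flag an object whose mean color deviates too far from the reference."""
    return color_distance(mean_rgb, reference) > threshold

# Mean colors of objects detected on the belt (illustrative values);
# the last one represents a dark contaminant.
detected = [(244, 243, 239), (250, 248, 242), (120, 60, 50)]
flags = [is_anomalous(c) for c in detected]
```

In a real deployment the per-object mean colors would come from the camera’s segmented image regions, and the threshold would be tuned against known-good production samples.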

42 Technology has spearheaded the design and conceptualization of this demonstration. Leveraging their expertise in manufacturing processes, instrumentation, and systems engineering, they collaborated closely with Synaptics, using the Astra SL1680 processor at the system’s core. The Astra platform, recognized for its edge AI performance, integrates seamlessly with Bala industrial camera systems to deliver precise imaging data for analysis.

A critical partner in this collaboration is Arcturus Networks, part of Synaptics’ ecosystem. Arcturus provided essential support in developing the machine learning aspects of the project, enabling accurate anomaly detection and enhancing the overall reliability and responsiveness of the embedded AI solution.

The showcased conveyor system replicates real-world pharmaceutical manufacturing environments. Pills are automatically ejected onto a production line, where real-time image analysis is conducted. The edge AI technology quickly counts pills, identifies irregularities, and logs any contaminants, providing immediate visual feedback and timestamped records. This capability allows operators to review anomalies rapidly and take prompt corrective action.
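
The count-and-log behavior described above can be sketched as follows; the record structure and the classifier hook are assumptions for illustration, not the actual system’s data model.

```python
from datetime import datetime, timezone

def inspect_frame(objects, classify):
    """Count pills in one frame and log timestamped anomaly records.

    `objects` is a list of detected-object labels (or features);
    `classify` returns True when an object is anomalous, e.g. a contaminant.
    """
    pills, anomalies = 0, []
    for obj in objects:
        if classify(obj):
            anomalies.append({
                "object": obj,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })
        else:
            pills += 1
    return pills, anomalies

# Illustrative frame: three pills and one stray cap fragment on the belt.
pills, log = inspect_frame(
    ["pill", "pill", "cap_fragment", "pill"],
    classify=lambda o: o != "pill",
)
```

The timestamped records are what lets an operator scroll back to the exact moment a contaminant appeared and take corrective action.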

This solution addresses a common challenge in industrial manufacturing: ensuring line clearance and eliminating safety risks associated with contaminants or leftover packaging materials. The demonstration specifically illustrates detection of anomalies such as residual pill bottles or cartons that could otherwise remain unnoticed. Timely identification of these irregularities helps maintain safety standards, regulatory compliance, and operational efficiency.

This project highlights the effectiveness of collaborative development. It combines 42 Technology’s industrial expertise and integration skills, Synaptics’ specialized edge AI hardware, Bala’s robust vision systems, and Arcturus Networks’ sophisticated machine learning algorithms. Such multidisciplinary collaboration ensures that the final system is both reliable and versatile, suitable for diverse manufacturing contexts beyond pharmaceuticals.

As showcased at Embedded World 2025, this technology represents an effective use of edge AI in enhancing industrial safety and quality assurance. Its flexibility and adaptability to various anomaly detection scenarios illustrate a practical implementation of machine learning in manufacturing, marking a notable development in embedded industrial automation.

Synaptics Veros 461x at Embedded World 2025 AI-native SoC with Wi-Fi, Bluetooth, Zigbee Triple-Combo

Posted by – March 19, 2025
Category: Exclusive videos

At Embedded World 2025, Synaptics showcases its Veros brand, emphasizing reliability in wireless communication tailored for the Internet of Things (IoT). The Veros lineup integrates AI-native compute capabilities within connectivity solutions, aiming to enhance performance and efficiency across IoT applications.

A notable addition to the Veros series is the 461x family, a triple-combo system-on-chip (SoC) that consolidates Wi-Fi, Bluetooth, and Zigbee/Thread functionalities into a single chip. This integration reduces design complexity and power consumption while maintaining robust performance at various operating ranges. Demonstrations showcased the chip’s capability to sustain approximately 40 megabits per second of throughput even in congested environments, highlighting its efficiency and reliability.

The convergence of these three wireless technologies—traditionally implemented as separate components—into a unified SoC represents a significant advancement in IoT device design. By streamlining these functionalities, Synaptics addresses the growing demand for compact, energy-efficient, and high-performance solutions in the IoT landscape.

The Veros 461x’s AI-native compute capabilities enable contextually aware processing, allowing devices to make intelligent decisions at the edge without relying heavily on cloud services. This feature is particularly beneficial for applications requiring real-time responsiveness and enhanced security.

Synaptics’ commitment to advancing edge AI and wireless connectivity is evident in the Veros 461x family. By integrating multiple wireless protocols and AI processing into a single chip, Synaptics provides a versatile solution for developers aiming to create innovative and efficient IoT products.

Renesas GreenPAK at Embedded World 2025 #ew25 Programmable Mixed-Signal ICs, SLG47105, Renesas 365

Posted by – March 19, 2025
Category: Exclusive videos

Renesas Electronics, a leader in embedded semiconductor solutions, showcased its GreenPAK™ programmable mixed-signal products at Embedded World 2025. GreenPAK devices enable engineers to integrate analog and digital functions into a single design, optimizing component count, board space, and power consumption. These cost-effective, non-volatile memory (NVM) programmable devices are ideal for applications ranging from handheld devices to automotive systems.

A key feature of GreenPAK is its integrated development environment, which allows designers to combine digital and analog blocks seamlessly. Using the GreenPAK Designer Software and development kits, custom circuits can be created and programmed in minutes, facilitating rapid prototyping and iterative design processes.

Renesas also introduced the SLG47105, a GreenPAK device equipped with digital resources and one-time programmable memory. This component is tailored for applications requiring precise digital control, such as segment displays and digital alarm clocks. Additionally, the SLG47105 features an H-bridge output, enabling efficient DC motor control with capabilities like overcurrent protection and constant speed maintenance, even under varying voltage conditions.
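
GreenPAK devices are configured graphically rather than programmed in C, but the H-bridge drive logic the SLG47105 implements can be sketched as a truth table. The pin grouping and the overcurrent cutoff value below are illustrative assumptions, not SLG47105 specifics.

```python
# H-bridge states: each tuple is (high_side_A, low_side_A, high_side_B, low_side_B).
# Both switches on one leg must never be on together (shoot-through).
STATES = {
    "forward": (True, False, False, True),   # current flows A -> motor -> B
    "reverse": (False, True, True, False),   # current flows B -> motor -> A
    "brake":   (False, True, False, True),   # both low sides on, motor shorted
    "coast":   (False, False, False, False), # all switches off
}

OVERCURRENT_LIMIT_A = 1.5  # assumed trip threshold in amps

def drive(command, measured_current):
    """Return switch states, forcing coast if overcurrent is detected."""
    if measured_current > OVERCURRENT_LIMIT_A:
        return STATES["coast"]  # protective shutoff, as the SLG47105 provides
    return STATES[command]
```

For example, `drive("forward", 0.4)` yields the forward state, while `drive("forward", 2.0)` trips the protection and coasts the motor.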

The versatility of GreenPAK extends to analog applications as well. For instance, the AnalogPAK family integrates rich analog features, allowing for functions like light intensity tracking using simple sensor arrays. Other applications include DC-DC boost converters and buzzer drivers, demonstrating GreenPAK’s capability to manage power efficiently while maintaining a compact form factor.
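
As an aside on the DC-DC boost application mentioned above, the ideal continuous-conduction boost relation Vout = Vin / (1 − D) gives the required duty cycle. This is a textbook formula, not GreenPAK-specific code.

```python
def boost_duty_cycle(v_in, v_out):
    """Ideal CCM boost converter duty cycle: D = 1 - Vin/Vout."""
    if not 0 < v_in <= v_out:
        raise ValueError("boost requires 0 < Vin <= Vout")
    return 1.0 - v_in / v_out

# Boosting a 3.3 V rail to 5 V needs roughly a 34% duty cycle.
d = boost_duty_cycle(3.3, 5.0)
```

Real converters need a somewhat higher duty cycle to cover diode and switch losses, which is where the configurable analog blocks come in.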

Renesas provides comprehensive development tools to support GreenPAK designers. The GreenPAK Advanced Development Board (SLG4DVKADV) enables the development of custom designs using GreenPAK mixed-signal ICs, allowing for rapid prototyping and testing. This platform supports a wide range of GreenPAK ICs, offering flexibility to designers working on diverse applications.

At Embedded World 2025, Renesas also announced Renesas 365, powered by Altium, a cloud-based platform designed to unify silicon, software, and hardware development. This solution aims to streamline electronics system development from silicon selection to system lifecycle management, enhancing collaboration and efficiency across engineering teams.

In summary, Renesas’ GreenPAK programmable mixed-signal products and the introduction of Renesas 365 represent significant advancements in embedded system design. These innovations provide engineers with the tools to create efficient, compact, and versatile solutions across a wide range of applications, from consumer electronics to industrial automation.