HTEC embedded engineering: drones, robotics gripper, secure IoT with OPTIGA Trust, PSoC Edge MPPT
STMicroelectronics biosensing: ECG + seismocardiography, carotid PWV cuffless BP, vein imaging
MSI EdgeXpert DGX Spark GB10 Grace Blackwell, 128GB unified memory, ConnectX clustering, Arm AI dev
Nordic Semiconductor Thingy:53 gesture Edge AI on nRF5340, nRF54L15 vs nRF52 power and cores
Microchip MEC175xB post-quantum EC: CNSA 2.0 secure boot, firmware verification, OTA lifecycle
ISP Solutions & Devantis SA Swiss Data-Centre Infrastructure, Open-Networking & GPU-as-a-Service
Hi Technologies, Barco CTRL Control Room Integration Geneva, Video Wall, Unified AV, Digital Signage
CodeWrights at Embedded World 2025 #ew25 Embedded Linux Services, Cyber Resilience Act Consulting
At Embedded World 2025, CodeWrights, based in Karlsruhe, Germany, showcased their expertise in software development for measurement device manufacturers within the automation technology sector. Their services encompass the entire product development lifecycle, with a particular emphasis on embedded Linux solutions. This includes assisting clients in integrating and optimizing embedded Linux systems tailored to specific measurement devices, ensuring seamless functionality and performance.
—
Synaptics is my Embedded World 2025 video coverage sponsor, check out my Synaptics videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhAbQoe9YN4c84SqXxIY3fQ
—
A significant focus at their booth was the upcoming European Union Cyber Resilience Act (CRA), which entered into force on December 10, 2024, and will be applicable from December 11, 2027. This regulation mandates stringent cybersecurity requirements for products with digital elements, aiming to enhance resilience against cyberattacks across the EU. CodeWrights offers consulting services to help device manufacturers navigate these new regulations, starting with comprehensive gap analyses to identify areas needing compliance improvements.
The company reported engaging with numerous visitors at Embedded World 2025, reflecting the industry’s keen interest in both hardware components and software services. With a team of 50 employees, predominantly software developers, CodeWrights combines technical proficiency with dedicated marketing and sales teams to deliver tailored solutions to their clients.
In addition to their service offerings, CodeWrights is actively expanding its team, seeking new members to join their embedded sector initiatives. This growth aligns with their commitment to addressing the evolving challenges in automation technology and cybersecurity compliance.
Check out all my Embedded World 2025 videos in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Join https://www.youtube.com/charbax/join for Early Access to my videos and to support my work, or you can click the “Super Thanks” button below the video to send a highlighted comment!
Microchip SAMA7G54 at Embedded World 2025 #ew25 AI-Enhanced Truck Loading Bay Monitoring Demo
At Embedded World 2025, Microchip Technology showcased a truck loading bay monitoring demonstration, highlighting the capabilities of their SAMA7G54 microprocessor. This Arm Cortex-A7-based MPU, running at up to 1 GHz, integrates advanced imaging and audio subsystems, including a MIPI CSI-2 camera interface, facilitating real-time object detection and machine learning applications.
The demonstration utilized Microchip’s Ampulse toolset to develop a custom machine learning model capable of detecting trucks within the loading bay. Each loading-bay spot was represented by a point on the display: blue when free, turning red when a truck occupied it, providing an intuitive visualization of the monitoring system’s state.
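The blue/red spot logic can be pictured with a short sketch. This is not Microchip’s demo code; the detection boxes and bay regions below are hypothetical, and it only illustrates the occupancy check behind the visualization:

```python
# Illustrative sketch: mark each loading-bay spot red when a detected
# truck's bounding box overlaps it, blue otherwise. Boxes are (x1, y1, x2, y2).

def boxes_overlap(box, bay):
    """Axis-aligned bounding-box overlap test."""
    return not (box[2] < bay[0] or bay[2] < box[0] or
                box[3] < bay[1] or bay[3] < box[1])

def bay_status(detections, bays):
    """Return 'red' for occupied bays, 'blue' for free ones."""
    return ['red' if any(boxes_overlap(d, b) for d in detections) else 'blue'
            for b in bays]

bays = [(0, 0, 100, 50), (120, 0, 220, 50)]   # two hypothetical bay regions
trucks = [(130, 10, 200, 40)]                 # one truck parked in the second bay
print(bay_status(trucks, bays))               # ['blue', 'red']
```

The same loop would run per frame over the ML model’s detections, with only the rendering layer changing the dots’ colors.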
In addition to the truck loading bay monitoring demo, Microchip’s booth featured various other interactive exhibits. Notably, a setup incorporating Lego structures and a multitude of sensors showcased the versatility and integration capabilities of Microchip’s embedded solutions in diverse applications.
The SAMA7G54 microprocessor supports up to 2GB of DDR memory and includes interfaces such as dual Ethernet ports (Gigabit and 10/100), multiple CAN-FD channels, and high-speed USB connections. These features make it suitable for industrial and automotive applications requiring robust connectivity and real-time data processing.
Microchip’s commitment to providing comprehensive development support is evident through their SAMA7G54-EK evaluation kit. This kit offers connectors and expansion headers for easy customization, facilitating rapid prototyping and integration into various embedded systems.
The integration of advanced peripherals, such as the MIPI CSI-2 camera interface, allows developers to implement low-power stereo vision applications with enhanced depth perception. This capability is particularly beneficial for applications in machine vision and automation, where accurate environmental mapping is crucial.
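The depth-perception claim rests on the standard pinhole stereo relation, where depth is focal length times baseline divided by disparity. A quick sketch with illustrative numbers (the focal length and baseline are assumptions, not SAMA7G54 camera specifics):

```python
# Pinhole stereo model: Z = f * B / d, where f is the focal length in
# pixels, B the camera baseline, and d the pixel disparity between the
# left and right views of the same point.

def depth_mm(focal_px, baseline_mm, disparity_px):
    """Depth of a point from its stereo disparity."""
    return focal_px * baseline_mm / disparity_px

# Illustrative values: 700 px focal length, 60 mm baseline, 35 px disparity
print(depth_mm(700, 60, 35))  # 1200.0 (i.e. 1.2 m)
```

Note the inverse relationship: distant objects produce small disparities, which is why baseline and resolution bound the usable depth range.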
Microchip’s participation in Embedded World 2025 underscores their dedication to advancing embedded control solutions. By offering products like the SAMA7G54, they enable developers to create efficient, high-performance applications across various industries, from industrial automation to consumer electronics.
For more information on Microchip’s products and solutions, visit their official website: https://www.microchip.com
EDGE AI FOUNDATION at Embedded World 2025: Advancements in Edge Computing and AI Integration
Pete Bernard, Executive Director of the EDGE AI FOUNDATION, discusses the organization’s mission and activities. The EDGE AI FOUNDATION, formerly known as the tinyML Foundation, is a global non-profit community dedicated to innovation, collaboration, advocacy, and education in energy-efficient, affordable, and scalable edge AI technologies. Their goal is to democratize edge AI technology, making it accessible and impactful for all while fostering sustainability and responsible practices.
The foundation engages in various initiatives, including technical and business working groups that focus on best practices in areas like audio AI, neuromorphic computing, and generative AI on the edge. They collaborate with technology partners such as Qualcomm, Edge Impulse, and NXP to drive innovation in the edge AI space. Additionally, the foundation emphasizes educational efforts by developing curricula for universities, offering scholarships, and educating end-users and companies about the potential of edge AI technologies.
At Embedded World 2025, held from March 11-13 in Nuremberg, Germany, the EDGE AI FOUNDATION showcased their commitment to connecting AI to real-world applications. Their booth featured the winners of the BLUEPRINT Awards for outstanding edge AI solution deployments and organized a scavenger hunt to highlight AI’s presence in various locations around the event. They also sponsored the IoT Stars event, where Pete Bernard participated in a fireside chat, and hosted technical talks on state-of-the-art developments in edge AI.
The foundation’s rebranding from tinyML to EDGE AI FOUNDATION reflects the rapid evolution and expanding scope of edge AI technologies. This change signifies their dedication to embracing the enormous potential for edge AI in real-world applications and uniting diverse industry leaders, researchers, and practitioners to drive collective progress.
Through partnerships with academia and industry, the EDGE AI FOUNDATION aims to bridge the gap between research and practical deployment. They focus on providing resources such as high-quality datasets, models, and code to support the development of small neural networks tailored for specific tasks. This approach ensures that advancements in edge AI technology benefit society and the environment.
The foundation also emphasizes responsible AI practices, supporting efforts for sustainable and ethical AI through collaborations with NGOs and partner organizations. By fostering a diverse community and sharing knowledge, they aim to inspire breakthroughs and unlock opportunities across various industries.
Their global events, such as the upcoming EDGE AI FOUNDATION Austin 2025, provide platforms for industry experts, researchers, and enthusiasts to connect, innovate, and deploy cutting-edge edge AI technologies. These gatherings highlight how edge AI is driving agile, adaptable, and powerful solutions across various sectors.
The EDGE AI FOUNDATION continues to be a pivotal force in the edge AI community, driving innovation and collaboration to shape the future of AI at the edge. Their efforts ensure that edge AI technologies are not only advanced but also accessible and beneficial to a broad spectrum of applications and industries.
Arcane Four TeleCANesis at Embedded World 2025: Streamlined data integration with CAN, MQTT, ZeroMQ
At Embedded World 2025, Arcane Four unveiled TeleCANesis, a tool designed to streamline data transfer across various systems and transport protocols. This solution minimizes the need for repetitive boilerplate code, enabling rapid setup of connections and ensuring seamless data flow within applications. The demonstration showcased TeleCANesis operating on a Linux system powered by the i.MX 8M Plus processor.
TeleCANesis offers two primary tools: a web-based system architecture interface and an extension for Visual Studio Code (VS Code). The web-based tool allows system architects to design and map out data flow by creating “blueprints.” Users can drag and drop various “capsules,” each representing different system components, such as QNX capsules or Linux cloud capsules. Connectors like CAN receivers, MQTT receivers, and ZeroMQ transmitters facilitate data routing between these capsules, ensuring efficient communication across the system.
For engineers, the VS Code extension provides a more hands-on approach. By importing messages from a DBC file—commonly used in automotive contexts to define CAN bus messages—engineers can create internal representations that the TeleCANesis engine comprehends. This process involves setting up receivers and transmitters, such as a socket CAN receiver for Linux systems and a Storyboard IO transmitter for user interfaces. This integration within VS Code offers engineers a familiar environment to configure and manage data flows effectively.
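The receiver-to-transmitter wiring can be illustrated conceptually. The classes below are hypothetical stand-ins, not the TeleCANesis API (which is configured through its blueprints and VS Code extension); they only sketch the idea of routing frames from a CAN-style receiver to an MQTT-style transmitter without per-message boilerplate:

```python
# Conceptual sketch of receiver -> transmitter routing. A real deployment
# would use a SocketCAN socket and an MQTT client; these stubs stand in
# for them so the routing pattern itself is visible.

class Receiver:
    """Stand-in for a CAN receiver (e.g. a SocketCAN socket)."""
    def __init__(self):
        self.handlers = []
    def on_message(self, handler):
        self.handlers.append(handler)
    def inject(self, frame_id, payload):     # simulates a frame arriving
        for h in self.handlers:
            h(frame_id, payload)

class Transmitter:
    """Stand-in for an MQTT (or ZeroMQ) transmitter."""
    def __init__(self):
        self.sent = []
    def send(self, topic, payload):          # simulates a publish
        self.sent.append((topic, payload))

def bridge(rx, tx, topic_prefix="can"):
    """Route every received frame to the transmitter, one line of wiring."""
    rx.on_message(lambda fid, data: tx.send(f"{topic_prefix}/{fid:#x}", data))

rx, tx = Receiver(), Transmitter()
bridge(rx, tx)
rx.inject(0x123, b"\x01\x02")
print(tx.sent)  # [('can/0x123', b'\x01\x02')]
```

The point of a tool like TeleCANesis is that this wiring, plus the DBC-driven decoding, is generated from the blueprint rather than hand-written for every message.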
Arcane Four’s experience in systems integration has led to the development of TeleCANesis, addressing the challenges of modern, complex systems that require multiple interconnected components, including cloud servers and various devices. By reducing the need for repetitive coding, TeleCANesis enhances development efficiency and ensures consistent data flow across diverse platforms.
The company’s business model centers on providing development tools like TeleCANesis, granting users access to these advanced features. Arcane Four is based in Ottawa, Canada, and continues to focus on bridging the gap between hardware and software in embedded systems.
Clarinox at Embedded World 2025 #ew25 BLE Channel Sounding, Auracast, Wi-Fi 6 & more
At Embedded World 2025, Clarinox Technologies, headquartered in Melbourne, Australia, showcased its advanced wireless protocol stacks, including Bluetooth Low Energy (BLE), Bluetooth Classic, and Wi-Fi. The company’s Chief Technology Officer, Gokan Teri, highlighted their focus on channel sounding—a cutting-edge BLE technology for precise distance measurement and location tracking based on time-of-flight principles.
Clarinox’s protocol stacks are designed for compatibility with a wide range of chipsets. For Bluetooth applications, they support any controller adhering to the Host Controller Interface (HCI) standard. For Wi-Fi, Clarinox collaborates with NXP and Texas Instruments, supporting chipsets such as NXP’s RW612 and Texas Instruments’ CC33xx series, which integrate application processors with Wi-Fi 6 and BLE 5.4 capabilities.
A notable feature demonstrated was Auracast, a Bluetooth streaming technology enabling synchronized audio transmission to multiple receivers without the need for individual connections. This connectionless approach, akin to multicast streaming, is ideal for public venues, allowing users to seamlessly receive audio streams and manage personal communications, such as incoming calls, without disruption.
Clarinox emphasizes robust partnerships across operating system providers, chip manufacturers, and module producers to ensure seamless integration and performance of their wireless solutions. Their debugging tool, ClariFi, enhances development efficiency by capturing and visualizing complex scenarios, such as Wi-Fi mesh networks, and supports audio quality analysis by recording audio streams in various codecs, including the latest LC3 codec.
The company’s core offerings include licensing their Bluetooth and Wi-Fi protocol stacks, providing clients with fully functional applications that operate out-of-the-box. This approach allows customers to bypass intricate low-level configurations, expediting development timelines and reducing complexity.
Beyond their Melbourne headquarters, Clarinox maintains offices in Chennai, India, and Izmir, Turkey, reflecting their global presence and commitment to supporting a diverse client base across multiple regions.
Imagination Technologies at Embedded World 2025: Open-Source GPU Drivers, Virtualization, AI
At Embedded World 2025, Imagination Technologies showcased a range of advancements in GPU technology and open-source initiatives. One highlight was a demonstration of a large language model (LLM) implemented by a partner on an xdx PCI Express card. This setup utilized a general-purpose GPU (GPGPU) with a compute pipeline, offering a more affordable and flexible alternative to larger GPUs in server environments.
Another significant showcase was the introduction of Imagination’s open-source driver stack. Demonstrated on the BeaglePlay board from the BeagleBoard.org Foundation, this low-cost, low-power single-board computer features Texas Instruments’ Sitara AM625 SoC, which integrates Imagination’s AXE-1-16M GPU. The open-source driver stack provides native Vulkan support, with translation layers like Zink providing OpenGL ES compatibility on top of it.
Imagination also presented a driver monitoring system leveraging video processing to assess driver attentiveness. This system employs OpenCL compute libraries running on the GPU’s compute pipeline to execute AI software, highlighting the GPU’s versatility in safety-critical applications.
Additionally, the company showcased hardware GPU virtualization capabilities using Texas Instruments’ AM69 SoC, which houses the BXS-4-64 GPU. This demonstration featured multiple applications running on separate hardware interfaces of the GPU, known as hyperlanes. These dedicated hardware interfaces allow direct communication between the GPU and virtual machines, ensuring near-native performance without the need for software-based virtualization.
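The dedicated-interface model can be sketched abstractly. This is not Imagination’s API, and the lane count below is an assumption for illustration; it only captures the idea of statically mapping each virtual machine to its own hardware interface instead of time-slicing the GPU in software:

```python
# Conceptual sketch: each VM gets a dedicated GPU hardware interface
# ("hyperlane"), so submissions bypass a software hypervisor layer.
# The lane count is an illustrative assumption, not a BXS-4-64 datasheet value.
HYPERLANES = 8

def assign_hyperlanes(vms, n_lanes=HYPERLANES):
    """Map each VM to its own lane; refuse to oversubscribe the hardware."""
    if len(vms) > n_lanes:
        raise ValueError("more VMs than hardware interfaces")
    return {vm: lane for lane, vm in enumerate(vms)}

print(assign_hyperlanes(["cluster", "infotainment", "hud"]))
# {'cluster': 0, 'infotainment': 1, 'hud': 2}
```

Because each mapping is fixed in hardware, a stalled guest cannot block another guest’s command stream, which is the property that yields near-native performance.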
Imagination’s commitment to open-source development was further underscored by their release of open-source drivers for their PowerVR Rogue architecture GPUs. This initiative enables developers and OEMs to have greater control over their graphics software stacks, promoting flexibility and long-term support across various platforms.
The company’s engagement with the RISC-V ecosystem was also evident, as they have become a preferred choice for RISC-V SoCs. This collaboration aims to bring advanced graphics capabilities to RISC-V platforms, expanding the reach of Imagination’s GPU technologies.
Throughout the event, Imagination Technologies emphasized their focus on delivering flexible, high-performance GPU solutions tailored for embedded systems, industrial applications, and AI workloads. Their open-source initiatives and hardware virtualization features position them as a key player in the evolving landscape of embedded computing.
Morse Micro Wi-Fi HaLow at #ew25 MM6108 and MM8102 SoCs, HaLowLink 1 Router, VT-USB-AH-8108 Dongle
At Embedded World 2025, Morse Micro showcased its advancements in Wi-Fi HaLow technology, emphasizing its potential to revolutionize IoT connectivity. Wi-Fi HaLow operates in the sub-1 GHz ISM bands, specifically 863-868 MHz in Europe and 902-928 MHz in the Americas, offering enhanced range and building penetration compared to traditional Wi-Fi frequencies. This technology leverages Wi-Fi modulation techniques to provide higher throughput than protocols like LoRa, supporting data rates up to 32.5 Mbps.
Morse Micro’s MM6108 System-on-Chip (SoC) integrates radio, PHY, and MAC functions in compliance with the IEEE 802.11ah standard. This single-chip solution supports flexible RF interfaces, allowing for on-chip amplification or the use of external power amplifiers and front-end modules for ultra-long-reach applications. The MM6108’s efficient design ensures extended sleep times and reduced power consumption, making it ideal for battery-operated IoT devices.
The company also introduced the MM8102 Wi-Fi HaLow SoC, tailored for the European and Middle Eastern markets. Optimized for 1 MHz and 2 MHz bandwidths with 256-QAM modulation, the MM8102 achieves throughputs up to 8.7 Mbps. Operating in the sub-GHz ISM bands, it offers greater range and signal penetration than conventional Wi-Fi networks. The MM8102 complies with regional regulatory requirements, simplifying development for IoT device manufacturers.
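Both quoted figures can be sanity-checked from standard 802.11ah OFDM parameters (802.11ac numerology downclocked by 10), assuming a single spatial stream and the short guard interval:

```python
# Back-of-envelope 802.11ah (Wi-Fi HaLow) PHY rate check.
# Data-subcarrier counts: 52 for a 2 MHz channel, 234 for 8 MHz;
# OFDM symbol time with short GI is 36 us (3.6 us x 10 downclock).

def phy_rate_mbps(data_subcarriers, bits_per_subcarrier, coding_rate,
                  symbol_time_us=36):
    """Single-stream OFDM data rate in Mbit/s."""
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time_us

# 8 MHz channel, MCS7 (64-QAM, rate 5/6) -> the 32.5 Mbps headline figure
print(round(phy_rate_mbps(234, 6, 5/6), 1))  # 32.5
# 2 MHz channel, MCS8 (256-QAM, rate 3/4) -> the MM8102's ~8.7 Mbps
print(round(phy_rate_mbps(52, 8, 3/4), 1))   # 8.7
```

The same formula shows why HaLow comfortably outruns LoRa: even the narrow 1-2 MHz channels carry full OFDM symbols rather than chirp-spread bits.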
For developers, Morse Micro offers evaluation platforms like the HaLowLink 1, a combined Wi-Fi 4 and Wi-Fi HaLow router. This platform enables easy assessment of Wi-Fi HaLow’s capabilities and facilitates integration into existing infrastructures. Additionally, the MM6108-EKH03 development platform provides seamless connectivity and supports various applications, ensuring security and reliability in IoT deployments.
Morse Micro’s technology has been integrated into products from various partners. For instance, Vantron’s VT-USB-AH-8108 Wi-Fi HaLow dongle, powered by the MM8108 chipset, delivers up to 43 Mbps connectivity and is designed for plug-and-play integration into existing systems. This collaboration highlights the growing demand for long-range, low-power connectivity solutions in the IoT ecosystem.
The company’s Wi-Fi HaLow solutions are particularly beneficial for applications requiring extended range and robust connectivity, such as industrial automation, smart metering, and perimeter security. By operating in the sub-1 GHz bands, Wi-Fi HaLow ensures reliable performance in challenging environments, making it suitable for both indoor and outdoor IoT applications.
Morse Micro’s commitment to advancing Wi-Fi HaLow technology positions it as a leader in the IoT connectivity landscape. By addressing the limitations of traditional Wi-Fi and offering scalable, energy-efficient solutions, the company is set to drive the next wave of IoT innovations across various sectors.
Raspberry Pi at Embedded World 2025: Compute Module 5, AI Hat Plus, IMX500 Sensor, Hailo Accelerator
Raspberry Pi presented its latest Compute Module 5 (CM5) at Embedded World 2025, highlighting significant enhancements over the Compute Module 4. The CM5 offers double the performance through updated hardware and incorporates additional peripherals, providing improved flexibility for diverse applications. Raspberry Pi also displayed a dedicated Compute Module I/O board designed specifically for seamless integration with the CM5, encouraging developers to build custom I/O boards tailored to their specific requirements. More details are available at https://www.raspberrypi.com.
In addition, Raspberry Pi showcased its advanced AI imaging solutions, prominently featuring the Sony IMX500 sensor-based AI cameras. These cameras integrate AI processing directly on-chip, enabling real-time applications such as person counting and object classification without external processing units. The demonstrated products highlighted edge-based AI acceleration, significantly optimizing resource usage and enhancing privacy by processing data locally.
Another key feature introduced was the AI Hat Plus equipped with a Hailo AI accelerator, specifically compatible with the Raspberry Pi 5. This solution significantly enhances machine-learning capabilities, including facial recognition, and can utilize various camera inputs, ranging from official Raspberry Pi camera modules to standard USB webcams. The demonstration at the booth included live facial recognition with instant identification using locally trained data.
The Raspberry Pi booth further emphasized real-world applications, featuring various partner implementations utilizing Raspberry Pi hardware for industrial and commercial environments. These included robust industrial solutions like the Revolution Pi, known for its reliability in demanding conditions. Additionally, practical deployments such as digital signage systems demonstrated Raspberry Pi’s adaptability and widespread acceptance in diverse industry segments.
Raspberry Pi had also planned to highlight its microcontroller offerings, though a technical issue prevented live demonstrations. These microcontrollers complement the existing product lineup by addressing low-power, cost-effective applications where a full-fledged computer would be excessive. Raspberry Pi continues its mission of enabling efficient solutions across embedded computing scenarios, from edge computing to complex AI-driven tasks.
The integration of Sony IMX500 image sensors into Raspberry Pi’s AI products underscores their push towards embedded vision applications, particularly for edge inference. The IMX500 cameras execute both image sensing and AI inference entirely on-chip, enabling efficient applications such as smart surveillance, retail analytics, and automated person counting without external processing overhead.
Industrial partners also prominently featured Raspberry Pi-based solutions at the event, reflecting the ecosystem’s maturity and Raspberry Pi’s strong foothold in industrial contexts. These partnerships underline a significant shift toward deploying Raspberry Pi hardware beyond educational and hobbyist sectors, emphasizing robust, scalable industrial applications where reliability, affordability, and flexibility are paramount.
Overall, Raspberry Pi’s presentation at Embedded World 2025 emphasized its strategic focus on AI acceleration, performance improvement, and versatility, aiming to extend the capabilities and appeal of their hardware solutions across professional and industrial applications. The introduction of specialized hardware like the Hailo accelerator alongside the Compute Module 5 reflects Raspberry Pi’s continued evolution to meet increasingly complex technical demands.
Seeed Studio at Embedded World 2025: Edge Computing, AIoT Solutions, reComputer Series, BeaglePlay
At Embedded World 2025, Seeed Studio showcased its latest advancements in edge computing and AIoT solutions. A notable highlight was their chatbot device, capable of processing natural language queries and delivering responses both audibly and visually. This integration leverages cloud connectivity and large language models to facilitate seamless human-computer interactions.
Another prominent exhibit was a robotic manipulator arm. Although a technical issue prevented a live demonstration, the arm is designed to mimic human gestures through manual training via a handling bar. This approach allows the robotic arm to learn and replicate complex movements, enhancing its utility in automation tasks.
Seeed Studio also introduced the reComputer series, powered by NVIDIA Jetson platforms. These edge computing devices are tailored for AI applications, offering robust processing capabilities in a compact form factor. They serve as versatile solutions for developers aiming to deploy machine learning models at the edge, addressing the growing demand for localized data processing.
In collaboration with BeagleBoard.org and Texas Instruments, Seeed Studio has developed the BeaglePlay and BeagleV boards. These platforms cater to the open-source community, providing flexible hardware solutions for various applications, from educational tools to industrial projects. The BeagleV, in particular, features a RISC-V architecture, reflecting the industry’s shift towards open instruction set computing.
The XIAO series, known for its compact design, was also on display. These microcontroller units, based on chipsets like the RP2040 and ESP32, offer a balance between size and functionality. They are ideal for wearable devices, DIY keyboards, and other projects where space is a constraint but performance remains crucial.
Seeed Studio’s Grove ecosystem continues to expand, offering a modular approach to sensor integration. This plug-and-play system simplifies the process of adding sensors to projects, making it accessible for both beginners and seasoned developers. The ecosystem supports a wide range of sensors, from environmental monitoring to motion detection.
During the event, Seeed Studio engaged with numerous system integrators and distributors, reflecting its commitment to collaboration and community building. These interactions underscore the company’s role in fostering innovation through partnerships and knowledge sharing.
For more information on Seeed Studio’s products and services, visit their official website: https://www.seeedstudio.com/
Check out all my Embedded World 2025 videos in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Join https://www.youtube.com/charbax/join for Early Access to my videos and to support my work, or you can click the “Super Thanks” button below the video to send a highlighted comment!
Synaptics LE Audio at Embedded World 2025: 4382 Triple Combo Chip, Bluetooth, LC3 Codec, Thread IoT
Synaptics demonstrated their latest wireless connectivity solution, the 4382 triple combo chip, which combines Wi-Fi, Bluetooth, and Thread. The demo highlighted Bluetooth LE Audio in particular, showcasing the chip’s ability to handle multiple simultaneous audio streams, each in a different language. This capability lets users select personalized audio tracks directly through their Bluetooth LE Audio-compatible headsets, making it well suited to multilingual media consumption. Synaptics provides more information about their technology at https://www.synaptics.com.
—
Synaptics is my Embedded World 2025 video coverage sponsor, check out my Synaptics videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhAbQoe9YN4c84SqXxIY3fQ
—
The practical demonstration showed content streamed to a headset in real time, with the language switching instantly on selection. An example given was a viewer watching a TV show or movie originally in English who could choose to hear Spanish or German audio instead. This technology is particularly useful in scenarios like hosting international guests, allowing viewers to enjoy localized audio without relying on subtitles.
Beyond entertainment, Bluetooth LE Audio has significant implications for gaming. Modern gaming consoles increasingly integrate social audio streams alongside gameplay audio, allowing gamers to hear in-game sounds and simultaneously communicate clearly with remote teammates. LE Audio supports this dual audio streaming with low latency, enhancing both interactivity and immersive gameplay experiences.
Additionally, LE Audio technology can be utilized in digital TVs and set-top boxes, expanding the possibilities for home multimedia setups. Synaptics emphasized that the technology is designed to meet the demands of high-end multimedia applications, enabling seamless synchronization of audio and video content across multiple devices. This integration ensures clear audio transmission, efficient bandwidth usage, and reduced latency.
The Synaptics 4382 chipset’s support for the Thread protocol also positions it favorably within smart home ecosystems. Thread offers a reliable, energy-efficient mesh network for IoT devices, complementing the connectivity provided by Wi-Fi and Bluetooth. This enables broader integration possibilities beyond multimedia, potentially extending into comprehensive smart-home solutions.
Bluetooth LE Audio introduces key advancements like the LC3 codec, which delivers noticeably better audio quality at lower bit rates than the older SBC codec. This translates into better-sounding audio, extended battery life on wireless devices, and broader compatibility across consumer electronics. With these technical advantages, LE Audio is poised to become the new industry standard for wireless audio transmission.
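As a rough back-of-the-envelope illustration of why the lower bit rate matters for multi-stream scenarios like the multilingual demo above, the sketch below compares how many audio streams fit in a fixed radio budget. The bit rates are commonly cited approximations, not Synaptics figures, and the 1 Mbps budget is an invented assumption:

```python
# Illustrative comparison of per-stream bandwidth for LC3 vs SBC.
# Figures are approximate, commonly cited values, not measured data:
# high-quality SBC is often quoted around 345 kbps, while LC3 is
# claimed to sound comparable at roughly 160 kbps.

SBC_KBPS = 345   # classic A2DP high-quality SBC stream (approx.)
LC3_KBPS = 160   # LE Audio LC3 stream at comparable quality (approx.)

def streams_in_budget(budget_kbps: float, stream_kbps: float) -> int:
    """How many independent audio streams fit in a radio budget."""
    return int(budget_kbps // stream_kbps)

budget = 1000  # hypothetical 1 Mbps of usable audio bandwidth
print(streams_in_budget(budget, SBC_KBPS))  # 2 streams with SBC
print(streams_in_budget(budget, LC3_KBPS))  # 6 streams with LC3
```

Under these illustrative numbers, the same budget carries three times as many LC3 streams, which is what makes simultaneous multi-language broadcast practical.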
In summary, Synaptics is actively advancing the adoption of Bluetooth LE Audio through versatile solutions such as their 4382 triple combo chip. By enabling personalized audio streams in various languages, low-latency gaming audio, and high-quality multimedia experiences, LE Audio significantly upgrades how consumers interact with wireless audio technologies. As multimedia demands grow, such innovations will become essential in consumer electronics.
Synaptics Bluetooth Channel Sounding #ew25 Proximity Security Smart Locks Automotive Automation
Anand Roy, Senior Product Line Manager at Synaptics, demonstrates Bluetooth Channel Sounding technology, which accurately measures distance based on packet exchanges between Bluetooth-enabled devices. This precise measurement triggers actions such as locking or unlocking devices depending on proximity. Synaptics showcases a practical demonstration where the screen locks automatically at distances beyond a predefined threshold (1.8 meters), and unlocks as the user moves closer again. For more details on Synaptics’ technology, visit https://www.synaptics.com.
Bluetooth Channel Sounding operates by measuring the time and characteristics of Bluetooth signal packets exchanged between two paired devices, allowing highly accurate estimation of physical distance. This technology has notable implications for convenience, security, and automation. In practice, it enables use-cases like automated device unlocking without manual intervention, significantly streamlining user experiences in various everyday scenarios.
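The core timing idea can be sketched with a toy two-way ranging calculation. Actual Bluetooth Channel Sounding combines round-trip timing with phase-based ranging, and all the timing values below are invented for illustration:

```python
# Toy round-trip-time (RTT) ranging sketch. Bluetooth Channel Sounding
# combines RTT measurement with phase-based ranging; this shows only
# the RTT part. Timing values are made up for illustration.

C = 299_792_458.0  # speed of light, m/s

def rtt_distance_m(t_round_s: float, t_reply_s: float) -> float:
    """Two-way ranging: the signal covers the distance twice, after
    subtracting the responder's fixed reply turnaround time."""
    return C * (t_round_s - t_reply_s) / 2.0

# Example: 150 us turnaround, round trip measured as 150 us + 12 ns
d = rtt_distance_m(150e-6 + 12e-9, 150e-6)
print(round(d, 2))  # ~1.8 m
```

The nanosecond-scale residual after subtracting the turnaround time is what encodes distance, which is why precise clocking on the radio matters.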
One of the most immediate applications demonstrated involves automatic locking and unlocking of screens. By setting distance thresholds, the system reacts dynamically to the user’s proximity, locking the screen when the paired device moves beyond the defined range and unlocking when it returns. In the demonstration, the screen unlocked at around 1 meter (approximately 3 feet), locked when the device moved beyond 1.8 meters, and reliably unlocked again when the user returned.
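The lock/unlock behavior described above can be sketched as a simple hysteresis state machine. The thresholds mirror the demo's figures, but the class and its API are hypothetical, not Synaptics code:

```python
# Hypothetical proximity-lock logic with hysteresis: unlock when the
# paired device comes within ~1.0 m, lock when it moves beyond 1.8 m.
# The gap between the two thresholds prevents rapid lock/unlock
# flapping when the user hovers near a single boundary.

UNLOCK_BELOW_M = 1.0
LOCK_ABOVE_M = 1.8

class ProximityLock:
    def __init__(self) -> None:
        self.locked = True  # start in the safe (locked) state

    def update(self, distance_m: float) -> bool:
        if distance_m <= UNLOCK_BELOW_M:
            self.locked = False
        elif distance_m >= LOCK_ABOVE_M:
            self.locked = True
        # Between the thresholds: keep the current state (hysteresis).
        return self.locked

lock = ProximityLock()
for d in (2.5, 0.9, 1.5, 2.0):
    print(d, "locked" if lock.update(d) else "unlocked")
```

Note that at 1.5 m the screen stays unlocked because the device previously came within the unlock threshold; only crossing 1.8 m locks it again.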
The potential for Bluetooth Channel Sounding extends into broader consumer markets, particularly home security and automation. Smart locks equipped with this technology can automatically detect the homeowner’s smartphone, unlocking doors without manual input, and conversely securing the premises automatically as they depart. This provides convenience along with enhanced security, preventing unauthorized access without the appropriate device authentication.
Automotive industries also represent a major sector for Bluetooth Channel Sounding adoption. Modern keyless entry systems rely on proximity detection to unlock vehicles automatically as owners approach, while remaining secured against unauthorized access attempts. Synaptics’ implementation promises improvements in the accuracy and reliability of these systems, potentially replacing or supplementing existing approaches that rely primarily on traditional RFID or proximity sensors.
Another significant advantage of Synaptics’ Bluetooth Channel Sounding technology is its robustness against interference. Unlike systems requiring direct line-of-sight, Bluetooth-based channel sounding remains effective even in environments where devices do not have an unobstructed path to each other. Thus, it performs reliably across varied conditions typical in residential or automotive contexts.
Overall, Bluetooth Channel Sounding technology from Synaptics addresses the increasing market demand for automated, secure, and user-friendly proximity-based interactions. The precise and reliable distance measurement enhances user convenience in numerous applications, from personal devices to home security and automotive entry systems. This technology exemplifies how accurate proximity sensing is becoming integral to smart connectivity ecosystems, providing both seamless user experiences and enhanced security capabilities.
Grinn AstraSOM-1680 SoM with Synaptics SL1680 processor at Embedded World 2025 #ew25
At Embedded World 2025, Grinn unveiled their latest innovation: the AstraSOM-1680, a system-on-module (SoM) built around Synaptics’ Astra SL1680 processor. This processor features a quad-core Arm Cortex-A73 CPU and an 8 TOPS neural processing unit (NPU), designed to deliver high-performance edge AI capabilities. Grinn’s demonstration showcased the module’s ability to recognize various radio frequency (RF) modulations in real time, highlighting its potential in edge computing applications.
The AstraSOM-1680 integrates the SL1680 processor’s capabilities, including the PowerVR Series9XE GPU, 4GB of LPDDR4 memory, and 16GB of eMMC storage. This configuration ensures efficient handling of complex video inputs and outputs, making it suitable for real-time image processing and AI-driven tasks. The module’s architecture allows the NPU and CPU cores to operate simultaneously, so visual processing tasks do not hinder other system functions.
Grinn’s approach to edge AI emphasizes the integration of advanced processing capabilities directly within devices, reducing latency and enhancing data privacy by minimizing reliance on cloud-based computations. By treating cameras and sensors as intelligent entities, Grinn aims to unlock new applications across various industries. The company’s expertise in designing and producing these modules in Europe, specifically in Poland, ensures adherence to high-quality manufacturing standards.
With 17 years in the market, Grinn has established itself as a reliable partner for clients worldwide. Their team of engineers is dedicated to assisting customers in translating innovative ideas into market-ready products, leveraging the capabilities of modules like the AstraSOM-1680 to accelerate development cycles and improve efficiency.
Synaptics at #ew25 shows Wi-Fi AI Sensing, breathing motion detection, CSI radar ultra wideband IoT
Ananda Roy, Senior Product Line Manager at Synaptics (https://www.synaptics.com), demonstrated their latest AI-enhanced Wi-Fi sensing technology at Embedded World 2025. Utilizing their compact Wi-Fi chip, the 43752, integrated into the Astra Machina development kit, Synaptics showcased a method of environmental sensing based on Wi-Fi signal reflections. The chip exchanges packets with a router and interprets how those signals reflect off the surroundings to detect the presence and movements of people within a room.
The technology relies on analyzing Channel State Information (CSI) from Wi-Fi signals, creating graphical representations that AI algorithms interpret. Synaptics has developed machine learning models capable of identifying subtle indicators, such as human breathing, which appear as regular, periodic wave patterns. Larger movements produce distinctly wider patterns, allowing clear differentiation between stationary presence and active motion.
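A minimal toy sketch of the idea, using a synthetic CSI amplitude trace rather than real chip output, shows how slow periodic chest motion surfaces as a low-frequency spectral peak. Real CSI pipelines involve per-subcarrier processing and the ML models described above; the sampling rate, breathing rate, and noise level here are all invented assumptions:

```python
# Toy sketch: spotting a breathing-like periodicity in a simulated
# CSI amplitude trace via a simple FFT. Illustrative only.
import numpy as np

FS = 10.0                       # assumed CSI report rate, samples/s
t = np.arange(0, 60, 1 / FS)    # one minute of data

breath_hz = 0.25                # simulated 15 breaths per minute
rng = np.random.default_rng(0)
csi_amp = (1.0
           + 0.05 * np.sin(2 * np.pi * breath_hz * t)  # chest motion
           + 0.01 * rng.standard_normal(t.size))       # channel noise

# Remove the DC component, then find the dominant frequency below 1 Hz
# (human breathing sits well under 1 Hz).
spectrum = np.abs(np.fft.rfft(csi_amp - csi_amp.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / FS)
band = freqs < 1.0
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(round(peak_hz * 60))  # estimated breaths per minute: 15
```

Larger body movements would smear energy across a much wider band, which matches the "distinctly wider patterns" distinction described above.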
Ananda explained that the AI model operates with defined confidence thresholds to accurately distinguish between various activities, ensuring reliable detections. Currently, the model is finely tuned to differentiate between stationary breathing and significant movements, but its capabilities extend beyond this initial scope. Through additional training, the AI can identify specific situations, such as falls, multiple occupants, or unique motion signatures, significantly broadening its potential applications in security, healthcare, and smart homes.
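The confidence-threshold gating described here can be sketched as follows; the class names and the 0.8 threshold are illustrative assumptions, not Synaptics' actual values:

```python
# Minimal sketch of confidence-gated classification: the model only
# reports a class when its top score clears a threshold, otherwise it
# declines to answer. Labels and threshold are hypothetical.

THRESHOLD = 0.8

def classify(scores: dict[str, float]) -> str:
    """scores: class label -> probability (assumed to sum to ~1)."""
    label, conf = max(scores.items(), key=lambda kv: kv[1])
    return label if conf >= THRESHOLD else "uncertain"

print(classify({"breathing": 0.91, "motion": 0.06, "empty": 0.03}))
print(classify({"breathing": 0.55, "motion": 0.40, "empty": 0.05}))
```

Declining to report below the threshold is what keeps detections reliable: an ambiguous signal yields "uncertain" rather than a spurious event.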
A key advantage highlighted by Ananda is the absence of additional hardware requirements. Since Wi-Fi is already standard in most IoT devices, Synaptics’ technology eliminates the need for supplementary sensors like radar or Ultra Wideband (UWB), enabling cost-effective and seamless integration. Devices equipped with existing Wi-Fi capabilities can immediately utilize this advanced sensing feature.
This approach leverages Wi-Fi’s widespread availability, making it practical for diverse environments. Synaptics’ system effectively uses environmental RF signal reflections, providing a non-invasive and privacy-friendly alternative to cameras or intrusive sensors. By harnessing inherent wireless infrastructure, it becomes possible to enhance existing IoT products, offering scalable sensing capabilities without extensive modifications.
Synaptics’ implementation demonstrates how AI-driven interpretations of Wi-Fi signals can significantly expand traditional connectivity roles, transforming passive network hardware into active sensing infrastructure. This technology presents valuable use cases, such as occupancy detection, elder care monitoring, fall detection, and home automation, demonstrating a notable step forward in smart environment interactions.
The demonstration at Embedded World 2025 showcased not only technical feasibility but also practical versatility, emphasizing Synaptics’ approach toward integrated, minimally intrusive environmental sensing solutions. As the AI model evolves, Ananda suggested it could be further trained to recognize specific patterns, such as counting occupants or detecting emergency events, enhancing its adaptability to diverse real-world scenarios.
Synaptics’ demonstration emphasized the shift from specialized, single-purpose sensors toward multifunctional use of commonplace technologies, highlighting the potential of Wi-Fi as a robust sensing medium. This advancement supports the integration of more intelligent, responsive environments without significant hardware complexity or cost overhead, representing a notable development for the IoT and connected device industries.