Innatera Pulsar neuromorphic MCU, SNN edge AI, radar presence sensing and audio classification

Posted by – March 17, 2026
Category: Exclusive videos

Innatera is positioning neuromorphic computing as a practical way to run always-on sensor AI without the usual power penalty. In this interview, the company explains how its Pulsar chip combines spiking neural networks, a RISC-V microcontroller, and a CNN accelerator in a single sensor-edge device, so pattern recognition can happen continuously where data is created rather than being pushed to a larger processor or the cloud. That makes the discussion less about raw TOPS marketing and more about system-level efficiency, latency, and battery life. https://innatera.com/pulsar

The key idea is that Pulsar uses silicon neurons and synapses across digital and analog spiking fabric to process sensory events in a brain-inspired way. Instead of treating AI as a separate block bolted onto a conventional embedded design, Innatera presents neuromorphic inference as part of the whole SoC architecture. The result is a platform aimed at sub-millisecond reaction time, low data movement, and ultra-low-power operation for audio, radar, vibration, and other continuous sensor streams at the edge.
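The event-driven processing described here can be illustrated with a textbook leaky integrate-and-fire (LIF) neuron, the standard abstraction behind spiking fabrics. This is a generic model for intuition only, with invented leak, weight, and threshold values, not Innatera's actual analog or digital circuit:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic textbook model
# illustrating event-driven spiking, not Innatera's actual silicon design.
# Parameters (leak, weight, threshold) are invented for illustration.

def lif_run(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Integrate weighted input spikes with leak; emit a spike on threshold."""
    v = 0.0                      # membrane potential
    out = []
    for s in input_spikes:       # one entry per timestep: 1 = spike, 0 = quiet
        v = v * leak + weight * s
        if v >= threshold:
            out.append(1)
            v = 0.0              # reset after firing
        else:
            out.append(0)
    return out

# A dense burst drives the neuron over threshold; sparse input never does,
# which is why quiet sensor streams cost almost nothing to monitor.
print(lif_run([1, 1, 1, 1, 0, 0, 1, 0]))   # -> [0, 0, 1, 0, 0, 0, 0, 0]
```

The point of the sketch is the power story: computation only happens when events arrive, so an always-on sensor path stays cheap between events.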

What makes the video interesting is that the story quickly moves from architecture to concrete product categories. The live demos include real-time audio classification, audio scene recognition for adaptive headphones, radar-based human presence detection, and predictive maintenance based on vibration sensing. These are all workloads where conventional embedded AI often struggles with the tradeoff between accuracy and always-on operation. Innatera’s claim is that spiking neural networks can keep sensing active full time while staying inside the power budget of compact battery-powered devices.

There is also a strong ambient intelligence theme running through the interview. A notable example is the radar-based human presence detector developed with Socionext, targeting extremely low-power detection for devices such as smart doorbells. Another is the intelligent smoke detector described here, which adds classification and occupancy awareness rather than acting as a simple threshold alarm. Filmed at Embedded World 2026 in Nuremberg, the demo set gives a useful snapshot of where neuromorphic edge AI is heading: not as a research novelty, but as embedded silicon for smart home, industrial IoT, wearables, and safety systems alike.

The company background matters too. Innatera spun out of Delft University of Technology in 2018 after years of research into brain-inspired and energy-efficient computing, and the interview frames Pulsar as the point where that research becomes production silicon. That matters because the value proposition is not generic AI acceleration, but embedded pattern recognition that can stay on continuously in the field. For engineers building sensor-rich products, this is really a discussion about edge inference architecture, mixed-signal design, SNN deployment, and how to reduce power, latency, and bandwidth all at the same time.

source https://www.youtube.com/watch?v=jAM-sgLlmrg

Bosch Rexroth ctrlX OS on AMD: Secure Industrial Control, Soft PLC, Node-RED, Edge AI

Posted by – March 17, 2026
Category: Exclusive videos

Bosch Rexroth is positioning ctrlX OS as a hardware-independent industrial Linux platform for software-defined automation, where the same application stack can move across controllers, IPCs, edge systems and virtual environments. In this interview, the focus is on secure industrial control, app-based deployment, and a common runtime that lets developers build once and roll out across multiple device classes with far less integration work. https://www.ctrlx-os.com/

The demo shows how ctrlX OS can host different control approaches on the same data layer, from a soft PLC to Node-RED, while exposing machine states and digital I/O through a unified interface. That matters because industrial edge systems increasingly mix classic control logic, visualization, protocol handling, and data services on one platform rather than splitting them across isolated boxes.

A key theme here is the broader hardware reach created by Bosch Rexroth’s work with AMD. The transcript points to support for CPU, GPU and MPU resources, which fits the current push toward x86 embedded processors and adaptive SoC platforms for edge compute. For developers building compute-hungry workloads, that opens the door to more demanding HMI, analytics and edge AI pipelines without changing the operating-system layer or rewriting the deployment model.

Security and lifecycle management are just as central as performance. ctrlX OS is presented here as CRA-ready and aligned with IEC 62443-4-2 Security Level 2 expectations, while also giving access to the practical features engineers actually need in the field: backup and restore, reset, license management, app installation, and centralized access to every exposed data point. The result is less about a single controller and more about a secure, manageable OT software platform.

What makes the story interesting is the developer angle. Bosch Rexroth is clearly pushing an API-driven model where the same functions available in the web UI can also be automated through REST APIs, virtual controllers, SDK tooling, and reusable apps. Filmed at Embedded World 2026 in Nuremberg, this interview captures a broader transition in industrial automation: PLC logic, low-code tools, edge AI acceleration, and secure app deployment are starting to converge into one programmable software stack.

source https://www.youtube.com/watch?v=zIA8jK-tkFE

Tianma display roadmap: glass-free 3D, Mini-LED, transparent Micro-LED and HUD

Posted by – March 17, 2026
Category: Exclusive videos

Tianma’s display portfolio here is less about a single panel and more about how the company is packaging complete HMI platforms for industrial, medical, transport and automotive use. The interview moves from a 23.8-inch 4K2K industrial display to integrated systems where Tianma supplies not just the LCD or OLED, but also electronics, compute boards and enclosure design. That matters for OEMs building camera monitors, control terminals or specialized vision devices, because the value shifts from raw panel supply to full module integration, long-life support and design-in flexibility. https://www.tianma.eu/

A big theme in the booth tour is optical engineering for difficult environments. Tianma shows glass-free 3D with eye tracking, allowing a split between 2D UI and 3D visualization, which fits medical imaging and other workflows where depth cues matter but operators still need conventional data overlays. Mini-LED backlighting with local dimming is another clear focus, improving black levels and contrast for medical and inflight display use, while reflective display technology targets outdoor readability with far lower power draw than a conventional transmissive panel.

The industrial side is paired with application-specific hardware concepts, including a rugged, professional tablet-style monitor for camera and vision systems. What stands out is the combination of Tianma’s core display technologies with embedded electronics, suggesting a path from display component to near-finished device. The transcript also points to Rockchip-based electronics in the demo hardware, which reinforces the idea that Tianma is not just talking about panel specs, but about complete embedded display subsystems tuned for field use, sunlight readability and power efficiency.

On the automotive side, the most interesting pieces are transparent Micro-LED, long-shape Micro-LED formats and a Micro-LED source for head-up display architecture. That lines up with Tianma’s broader recent push into automotive Micro-LED and HUD concepts, including very high brightness projection-oriented displays and transparent surfaces that can turn glass areas into information layers. In that context, the booth demo feels like an extension of a wider strategy around smart cockpit display architecture, where LTPS LCD, AMOLED and Micro-LED each serve different HMI roles rather than competing as one universal technology.

Later in the video, filmed at Embedded World 2026 in Nuremberg, the broader message becomes clear: Tianma is positioning itself as a global display engineering partner with in-house coverage across TFT-LCD, LTPS, AMOLED, Mini-LED and Micro-LED, backed by manufacturing scale in Asia and regional support for European customers. The result is a story about display roadmaps, integration capability and application fit, from smartphones to digital signage to transportation and automotive cockpits, rather than a simple product launch.

source https://www.youtube.com/watch?v=r51NNAA56PY

Axelera Metis 214 TOPS and Europa Edge AI 629 TOPS: 8K Vision, RISC-V, Robotics, SLM, PCIe/M.2

Posted by – March 17, 2026
Category: Exclusive videos

Axelera positions itself as a European edge AI alternative focused on inference rather than training, and this interview makes that distinction clear. The main story is performance per watt: the company’s Metis platform is presented as delivering 214 TOPS at around 6W typical power, in compact M.2 and PCIe form factors that let developers add AI acceleration to existing x86 or Arm systems without redesigning the whole box. https://axelera.ai/

What stands out in the demo lineup is how practical the workloads are. Instead of benchmark theatre, the booth focuses on edge deployments such as native 8K video analytics, retail loss prevention, container inspection for rust and damage, and autonomous robotics. The point is not just raw throughput, but being able to process high resolution video streams and multiple models at the edge where thermal limits, latency, bandwidth, and total system cost matter more than in cloud-first AI.

The technical angle is also stronger than a typical trade-show pitch. Axelera describes Metis as combining digital in-memory computing for matrix-vector multiplication with a RISC-V based orchestration layer across four AI cores, which allows parallel or cascaded model execution. That architecture fits the current edge AI mix well: computer vision pipelines, multimodel workloads, and lighter generative AI tasks such as speech interfaces and small language models, rather than full-scale training or oversized server-class LLM deployments.
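The in-memory computing idea mentioned here, weights staying resident while activations stream through, can be sketched functionally. The closure below models only the dataflow pattern, not Axelera's hardware, and the weight values are invented:

```python
# Functional sketch of weight-stationary matrix-vector multiply, the kernel
# the interview attributes to digital in-memory computing. The weights are
# "programmed" once (here: captured in a closure) and activations stream
# through repeatedly. This models the dataflow only, not Axelera's hardware.

def load_weights(rows):
    """'Program' a weight matrix once; return a compute function bound to it."""
    def mvm(activations):
        assert all(len(r) == len(activations) for r in rows)
        # Each output accumulates across one row; in-memory hardware does
        # all rows in parallel inside the array, serial Python here.
        return [sum(w * a for w, a in zip(r, activations)) for r in rows]
    return mvm

layer = load_weights([[1, 2], [3, 4], [5, 6]])   # program once ...
print(layer([10, 1]))                            # ... stream many inputs
```

The efficiency argument is visible in the structure: the expensive part (moving weights) happens once, and each inference only moves the small activation vector.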

The roadmap matters just as much as the current chip. In the interview, Axelera points to Europa as the next step for premium edge systems, robotics, VLM-style contextual understanding, and larger language models beyond the current memory envelope. That lines up with the company’s broader push this year around Metis and Europa, its Voyager SDK toolchain, and ecosystem work that makes model conversion and deployment easier for developers moving from FP32 training environments to efficient edge inference.

Filmed at Embedded World 2026 in Nuremberg, this conversation shows why Axelera is getting attention in European semiconductor and edge AI circles: not because it claims to replace GPU training infrastructure, but because it targets the part of the stack where many industrial systems actually live. Low-power inference, compact accelerators, RISC-V control, DDR5-backed memory bandwidth, and deployable computer vision pipelines are the core themes here, with Europe’s supply-chain and sovereignty angle sitting in the background rather than dominating the pitch.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=iJrwV9zM53A

ProvenRun ProvenCore EAL7, Automotive Ethernet Protocol Break, Formal OS, ProvenHSM, STM32H5, PQC

Posted by – March 16, 2026
Category: Exclusive videos

ProvenRun is making a case for embedded security that starts below the application layer, with a mathematically verified trusted base rather than another add-on middleware stack. In this interview, the company explains how ProvenCore, its formally proven secure OS and TEE, is used to build high-assurance systems for automotive, avionics, defense, microcontrollers and cloud security, with the goal of reducing attack surface, simplifying certification and keeping long lifecycle products maintainable. https://provenrun.com/

A big part of the discussion is the shift to software-defined vehicles and zonal automotive Ethernet. ProvenRun’s protocol-break approach fully deconstructs and reconstructs traffic between exposed domains and safety-critical zones, rather than relying only on segmentation. That matters for in-vehicle infotainment, connectivity modems and ADAS paths, where 1GbE and faster links now carry far more critical traffic than older in-car networks ever did.
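What distinguishes a protocol break from plain forwarding is that no byte of the untrusted frame crosses the boundary verbatim: the message is parsed into validated fields and a fresh frame is serialized from those fields alone. The miniature below uses a hypothetical "id:value" wire format to show the pattern; it is not ProvenRun's implementation:

```python
# Toy illustration of a protocol break. Instead of forwarding raw bytes,
# the boundary parses the message into validated fields and re-serializes
# a fresh frame, so malformed or smuggled bytes never reach the safety-
# critical side. The "id:value" ASCII format here is hypothetical.

def protocol_break(raw: bytes) -> bytes:
    text = raw.decode("ascii")            # reject non-ASCII outright
    ident, value = text.split(":")        # must be exactly two fields
    ident_n, value_n = int(ident), int(value)
    if not (0 <= ident_n <= 255 and 0 <= value_n <= 65535):
        raise ValueError("field out of range")
    # Reconstruct entirely from the validated values.
    return f"{ident_n}:{value_n}".encode("ascii")

print(protocol_break(b"007:0042"))   # re-serialized as b"7:42"
```

Note that even the formatting is normalized on the way through (leading zeros disappear): the output frame is built from parsed values, never copied from the input.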

The technical differentiator is formal methods. ProvenRun says ProvenCore remains the only operating system certified at Common Criteria EAL7, and that foundation is then reused for trusted applications such as secure storage, cryptography, PKCS#11, VPN, network stacks, secure firmware update and protocol filtering. The company also highlights compatibility with standard embedded security ecosystems including GlobalPlatform, PSA-style APIs, Android trusted applications and post-quantum cryptography work with CryptoNext.

The interview also touches the microcontroller side, where ProvenCore-M is positioned as a secure RTOS and TEE for Armv8-M-class devices, including ST deployments around STM32 security architectures. That gives developers a pre-certified route to TrustZone-based isolation, secure services and easier product evaluation without having to design every security primitive from scratch. Filmed at Embedded World 2026 in Nuremberg, the demo shows how that same security-by-design philosophy is now being stretched from MCU roots into automotive gateways and trusted edge compute.

On the cloud side, ProvenRun is pushing ProvenHSM and ProvenBox as remotely manageable hardware-backed trust anchors for key management, crypto services and customizable secure applications. The interesting angle is not just HSM throughput, but compositional certification, cloud-native administration, FPGA-assisted crypto acceleration and a roadmap that includes PQC readiness. Overall, this is a useful look at how embedded cybersecurity is moving toward verifiable isolation, certifiable trusted execution and longer-term lifecycle assurance across both edge and data center scale.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=Cmz3ENmAYPs

eSOL eMCOS POSIX RTOS, ROS Middleware, Multicore ARM Cortex and RISC-V Embedded Full Stack

Posted by – March 16, 2026
Category: Exclusive videos

eSOL positions itself as a full-stack embedded software partner rather than a vendor selling only one RTOS layer. The core message in this interview is integration: a production-ready platform that combines the eMCOS real-time operating system, a POSIX-compliant profile, middleware for networking and robotics-oriented workflows, plus engineering services that extend from bring-up to certification. That matters for teams trying to reduce supplier fragmentation and keep one accountable path from hardware integration to deployed code. https://www.esol.com/

A key theme is the gap between prototype software and certifiable production systems. The demo points to ROS and model-based toolchains as part of the ecosystem, but the argument from eSOL is that open robotics frameworks alone are not always enough once determinism, safety, and real-time behavior become mandatory. In that context, eMCOS POSIX is presented as a way to preserve familiar POSIX development models while moving toward tighter scheduling control, certification targets, and system-level integration for embedded products.
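The "familiar POSIX development model" centers on standard APIs such as the scheduling interface, which application code keeps using whether it runs on Linux or on a POSIX-profile RTOS. As a small illustration, Python exposes these same POSIX calls on Linux; the sketch only queries the API, since actually switching a process to SCHED_FIFO normally requires privileges, and priority ranges are OS-specific (eMCOS specifics are not covered in the interview):

```python
# Querying the POSIX scheduling interface that a POSIX-profile RTOS
# preserves for application code. Linux-only in Python; priority ranges
# are OS-specific and changing policy usually needs privileges, so this
# sketch reads rather than writes scheduler state.
import os

fifo_max = os.sched_get_priority_max(os.SCHED_FIFO)
fifo_min = os.sched_get_priority_min(os.SCHED_FIFO)
current = os.sched_getscheduler(0)    # scheduling policy of this process

print(f"SCHED_FIFO priorities: {fifo_min}..{fifo_max}, current policy: {current}")
```

The portability argument is that code written against this interface carries over: the RTOS changes what the policies guarantee (determinism, bounded latency), not the API the application calls.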

What makes the platform interesting technically is scalability across compute classes. In the demo, the same runtime approach spans Arm Cortex-M, Cortex-R, and Cortex-A as well as RISC-V, reflecting eSOL’s long-standing focus on multi-core and many-core embedded architectures. That gives the interview a broader angle than a simple RTOS pitch: it is really about one software foundation that can move from small microcontrollers to larger heterogeneous SoCs without forcing a complete tooling reset or a redesign of the application stack at every step.

Recent eSOL direction adds useful context to what is shown here. The company has been expanding its Full Stack Engineering model in Europe, and its eMCOS POSIX profile gained ISO 26262 ASIL D compliance in 2025, which reinforces the interview’s emphasis on automotive-grade real-time software. eSOL has also been showing eMCOS in software-defined vehicle workflows, including virtual-platform work around Renesas R-Car, so the message here fits a wider industry push toward software-first development, safety partitioning, and faster validation at scale.

Overall, this is less about Linux replacement rhetoric and more about where a deterministic POSIX RTOS fits when embedded teams need predictable latency, certification support, multicore scaling, and one engineering interface across the stack. The interview was filmed at Embedded World 2026 in Nuremberg, and it frames eSOL as a company targeting automotive, robotics, industrial and medical designs where middleware compatibility, long-term support, and integration ownership are often worth as much as raw kernel features in practice.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=iEaaI6PVweQ

ATGBICS at Embedded World 2026: Compatible Transceivers, Legacy Optics, 800G QSFP, DAC and AOC

Posted by – March 16, 2026
Category: Exclusive videos

ATGBICS is positioning itself as a practical supplier for industrial network connectivity rather than just another optics reseller. The main story here is compatibility at scale: transceivers for more than 300 vendor ecosystems, support for legacy and current modules, and a business model built around keeping networks running when original OEM parts have gone end-of-life. That matters in embedded and industrial systems where redesigning around a discontinued optical part can be far more expensive than the module itself. https://atgbics.com/

A big part of the discussion is obsolescence management. ATGBICS describes a process where the bill of materials is locked, prototype samples can be validated against a customer’s hardware, and repeat orders can be built with the same chipset, laser, and configuration that was previously qualified. For industrial Ethernet, long-lived automation platforms, transport systems, and ruggedized infrastructure, that kind of traceability can be more important than chasing the newest data rate.

The interview also makes clear that this is not only about old through-hole optics from the 1990s. The portfolio shown moves from 1×9 and 2×5 legacy transceivers to 1G and 10G workhorse SFP-class modules, then all the way up to high-bandwidth QSFP and direct attach cable options used in data center and AI networking. The interesting angle is that the same company is covering both ends of the market: replacement parts for installed industrial gear and compatible modules for newer high-density switching environments.

What gives the video some depth is the manufacturing and customization side. ATGBICS talks about working with factory partners in Taiwan and China, offering certificates of conformity, custom firmware, private labeling, and barcode-level branding for OEMs building their own switch, router, or PoE product lines. Filmed at Embedded World 2026 in Nuremberg, the interview shows how optical connectivity is increasingly tied to supply-chain resilience, second-source qualification, and lifecycle planning, not just raw bandwidth.

The result is a useful look at a part of embedded infrastructure that usually stays in the background. Instead of focusing on headline silicon, this conversation is about pluggable optics, DACs, AOCs, OEM-compatible coding, industrial temperature requirements, and the economics of keeping deployed systems alive for years longer than the original vendor may support. That makes the video relevant for engineers, sourcing teams, EMS partners, and network equipment makers dealing with both legacy maintenance and forward migration.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=h5iWToDbxh4

Weebit Nano ReRAM for Edge AI, Embedded NVM, Near-Memory Compute and SoC Integration

Posted by – March 16, 2026
Category: Exclusive videos

Weebit Nano is positioning ReRAM as an embedded non-volatile memory alternative to flash for SoCs that need faster writes, lower power, better endurance, and easier scaling below 28 nm. In this interview, CEO Coby Hanoch explains why the company focuses on embedded NVM rather than bulk storage: the target is firmware, security keys, calibration data, AI coefficients, and instant-on system behavior integrated directly on the same die as compute and control logic. https://www.weebit-nano.com/

The key technical point is that Weebit’s ReRAM is a back-end-of-line technology, built between metal layers rather than in the silicon substrate. That matters for mixed-signal and analog-heavy designs, because it avoids many of the layout and process compromises associated with embedded flash. Hanoch describes the cell in simple terms: voltage moves ions to form or break a conductive path, switching between low and high resistance states that represent stored data.
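The cell behavior Hanoch describes maps naturally to a two-state resistance model: a SET voltage forms the conductive path (low resistance), RESET breaks it (high resistance), and a read compares resistance against a threshold. The toy model below captures that behavior with invented resistance values, purely for intuition:

```python
# Toy model of the ReRAM cell behavior described above: SET forms a
# conductive filament (low resistance = logic 1), RESET dissolves it
# (high resistance = logic 0), and a read thresholds the resistance.
# All resistance values here are invented for illustration.

class ReramCell:
    R_LOW, R_HIGH, R_READ_THRESHOLD = 10e3, 1e6, 100e3  # ohms (illustrative)

    def __init__(self):
        self.resistance = self.R_HIGH        # erased state: no filament

    def write(self, bit: int) -> None:
        # SET forms the filament, RESET breaks it.
        self.resistance = self.R_LOW if bit else self.R_HIGH

    def read(self) -> int:
        # Non-destructive read: low resistance reads as 1.
        return 1 if self.resistance < self.R_READ_THRESHOLD else 0

cell = ReramCell()
cell.write(1)
print(cell.read())   # 1, and the state persists without power (non-volatile)
```

The non-volatility in the model is simply that `resistance` never decays, which is the property that lets firmware, keys, and weights survive power loss on the same die as logic.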

For edge AI, the pitch is especially clear. If model coefficients can live in embedded non-volatile memory on the AI chip, designers can avoid a separate external flash device, reduce board cost, shorten boot time, cut power draw, and remove a security exposure created when weights are copied at startup. That fits near-memory compute, and it also points toward in-memory compute, where analog-style ReRAM arrays may eventually support more efficient AI inference for gesture recognition, sensor workloads, and always-on edge devices.

The interview also shows why this matters beyond AI. Embedded ReRAM is relevant for power management ICs, MCUs, IoT nodes, automotive electronics, and aerospace-oriented designs that need retention without power, robust endurance, and tolerance for harsh conditions. Weebit highlights qualification work for automotive temperature ranges, radiation immunity as a useful characteristic, and the benefit of integrating memory without disturbing the optimal analog portion of a chip.

Filmed at Embedded World 2026 in Nuremberg, the discussion captures a memory company moving from R&D into commercialization. Weebit already talks about customers such as onsemi and Texas Instruments, growing capacity targets in the embedded range, and a roadmap that connects embedded NVM with future AI architectures. The result is not “more storage” in the consumer sense, but a more integrated memory block for edge silicon where power, cost, area, boot latency, and security all matter at once.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=dn82VxEX4aI

Tektronix IsoVu TIVP, TICP and 7 Series DPO for SiC, GaN and power integrity

Posted by – March 16, 2026
Category: Exclusive videos

Tektronix focuses here on one of the harder measurement problems in modern power electronics: capturing fast, high-voltage switching behavior without corrupting the waveform through probe loading, ground noise, or isolation limits. The interview centers on the second-generation IsoVu isolated voltage probe, where optical power delivery over glass fiber lets the probe head stay electrically isolated while still measuring very small and very fast events. That matters for SiC and GaN power stages, where dv/dt, common-mode noise, and switching transients quickly expose the limits of conventional probing. https://www.tektronix.com/
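To put rough numbers on why common-mode rejection matters at wide-bandgap slew rates, a back-of-envelope calculation helps. The voltage swing, rise time, and CMRR figures below are illustrative values for the calculation, not Tektronix or device specifications:

```python
# Back-of-envelope: why common-mode rejection matters when probing SiC/GaN
# switch nodes. All numbers are illustrative, not Tektronix specifications.

v_swing = 800.0          # V, switch-node common-mode swing (example)
t_rise = 10e-9           # s, rise time (example)
slew = v_swing / t_rise  # V/s
print(f"dv/dt = {slew / 1e9:.0f} kV/us")   # 800 V in 10 ns -> 80 kV/us

# Error seen while measuring a small gate signal riding on that swing:
# error = Vcm / 10^(CMRR_dB / 20)
for cmrr_db in (60, 120):
    error = v_swing / 10 ** (cmrr_db / 20)
    print(f"CMRR {cmrr_db} dB -> {error * 1e3:.3g} mV of common-mode error")
```

With these example numbers, 60 dB of rejection leaves hundreds of millivolts of error on top of a gate signal that may only be a few volts, while 120 dB brings it under a millivolt, which is the gap isolated probing is meant to close.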

A key point in the demo is flexibility at the probe tip. The discussion mentions interchangeable tips spanning low-voltage work up to kilovolt-class measurements, which fits the broader need to move between gate-drive, shunt, switch-node, and bus measurements without rebuilding the whole setup. Tektronix also highlights its isolated current probing, including an RF link architecture with no direct physical connection inside the probe path, aimed at very high common-mode rejection. In practice, this is the kind of tooling engineers need for double-pulse test setups, power integrity analysis, wide-bandgap converter design, and validation of fast-switching inverter stages.

What makes the video interesting is that it is less about headline specs and more about measurement credibility. The screen demo compares a reference voltage with current captured through the isolated current probe, showing how Tektronix is positioning these probes as part of a complete power integrity workflow rather than as standalone accessories. That fits a broader shift in lab instrumentation, where probe architecture, tip ecosystem, connection standards, and noise rejection are becoming just as important as oscilloscope bandwidth. The clip was filmed at Embedded World 2026 in Nuremberg, where this kind of test and measurement detail is especially relevant for embedded power, automotive, industrial control, and energy conversion teams.

The booth tour also briefly points to Tektronix’s wider high-speed instrumentation stack, including the 7 Series DPO at up to 25 GHz and 125 GS/s, plus the DPO70000SX platform, which Tektronix lists at up to 70 GHz and 200 GS/s for very high-speed serial, PCIe, memory, and signal-integrity work. So the story here is really two layers of debug: precision isolated probing for power devices such as SiC and GaN MOSFETs, and high-bandwidth scope platforms for the digital and interconnect side of the same system.

source https://www.youtube.com/watch?v=kev976LKlLg

RED Semiconductor VISC edge AI matrix math IP, RISC-V coprocessor for vision, crypto

Posted by – March 16, 2026
Category: Exclusive videos

RED Semiconductor describes an edge AI approach built around matrix math rather than a conventional CPU-first design. The pitch here is a licensable processor IP block that combines a small RISC-V front end with a dedicated math engine, aiming to reduce data movement, power draw, and latency for workloads that need fast local inference rather than cloud-scale throughput. That makes the discussion relevant for embedded vision, cryptography, sensor processing, and tightly bounded real-time edge AI work. https://redsemiconductor.com/

The architecture, called VISC, is presented as a coprocessor rather than a full standalone compute platform. In practical terms, RED is targeting the part of an SoC where matrix multiply, matrix-vector operations, and other repetitive mathematical kernels dominate execution time. The company’s message is that GPUs bring graphics-era overhead, while a conventional NPU may still be too large or too fixed for some deeply embedded deployments, so VISC is meant to sit closer to the math-heavy bottleneck at lower silicon cost.
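The host/coprocessor split described here can be sketched conceptually: a front-end loop keeps control flow and hands the dense kernel to an engine function standing in for the accelerator, with row tiling mirroring the tileable-IP idea mentioned later. This is a conceptual model with invented data, not RED's VISC architecture:

```python
# Conceptual sketch of the coprocessor split: the host (RISC-V front end)
# keeps control flow and hands matrix-vector tiles to an "engine" function
# standing in for the math accelerator. Row tiling mirrors the tileable-IP
# idea; this is an illustration, not RED's VISC architecture.

def engine_mvm(weight_rows, x):
    """Stand-in for the math engine: dense matrix-vector product on one tile."""
    return [sum(w * v for w, v in zip(row, x)) for row in weight_rows]

def host_inference(weights, x, tile_rows=2):
    """Host splits the matrix into row tiles and merges tile results."""
    out = []
    for i in range(0, len(weights), tile_rows):
        out.extend(engine_mvm(weights[i:i + tile_rows], x))  # one tile each
    return out

W = [[1, 0], [0, 1], [2, 2], [3, -1]]
print(host_inference(W, [4, 5]))   # [4, 5, 18, 7]
```

The design point visible in the sketch: the host code stays small and generic, while everything execution-time-heavy lives in the engine call, which is the part a dedicated IP block would replace.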

A key part of the story is software compatibility. RED uses RISC-V as the entry point into toolchains and developer workflows, but the engine itself is not tied only to RISC-V systems and can be integrated alongside Arm or other heterogeneous processor mixes. The company also stresses firmware-level customization, so an OEM can tune the accelerator for a specific vision model, cryptographic routine, or algorithmic pipeline instead of treating AI acceleration as a generic black-box block in the stack.

What stands out in the interview is the emphasis on edge-specific constraints: low power, low memory traffic, fast startup, and deterministic response. RED talks less about large language models and more about vision inference, medical-imaging-style search, secure compute, and sensor-driven applications where milliseconds, energy budget, and local autonomy matter more than raw datacenter-class scale. That focus fits the broader Embedded World conversation around RISC-V, edge inference, and domain-specific acceleration in Nuremberg during 2026.

The company positions the IP as tileable, licensable, and suitable for inclusion in a broader SoC that may already contain CPUs, vector processors, or other accelerators. RED has also been framing VISC publicly around edge AI, cryptography, and secure processing, with recent company updates pointing to an expanding RISC-V and edge AI roadmap. This video gives a useful look at how RED wants to differentiate: not by replacing every processor in a design, but by offloading the dense mathematical core that defines many embedded AI workloads.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=xYVgQoCru_4

Canonical Robotics: Ubuntu Core, ROS, real-time control and fleet observability

Posted by – March 15, 2026
Category: Exclusive videos

Canonical is positioning Ubuntu as infrastructure for robotics rather than just a general Linux distro. In this demo, the focus is on a real-time stack where Ubuntu’s real-time kernel drives a vision-guided pick-and-place flow: AI detects shapes on a moving conveyor, a 3D scene mirrors the process, and the arm adapts with a safety slowdown when a hand enters the zone. It is a useful example of how deterministic control, perception, and simulation can be tied together in one deployment without turning the OS itself into a separate engineering project. https://canonical.com/

A second thread is the Bosch Rexroth integration around ctrlX AUTOMATION, which builds on Ubuntu Core. That matters because Ubuntu Core brings an immutable design, transactional over-the-air updates, rollback, and snap-based packaging with strict confinement. For industrial robotics and machine control, that combination is increasingly relevant: vendors want modular application delivery, cleaner lifecycle management, and a clearer path to compliance and long-term maintenance instead of carrying a custom Linux platform on their own.

The most forward-looking part of the interview is Canonical’s push toward fleet observability and deployable AI components. The planned open-source platform connects device fleets to dashboards and telemetry pipelines using Grafana, Loki, Prometheus, Juju, and charms, which fits the reality of robotics deployments where logs, metrics, and remote supervision matter as much as the robot demo itself. Canonical also points to inference snaps, making it easier to package and run models such as Gemma 3 or Nemotron on local compute for edge AI and physical AI workflows.
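Telemetry in a Prometheus/Grafana pipeline ultimately comes down to each device exposing metrics in the Prometheus text exposition format. The sketch below renders that format for some invented robot metrics; the metric names and values are examples, not Canonical's actual schema:

```python
# Minimal rendering of the Prometheus text exposition format, the payload
# a Prometheus scraper pulls from each fleet device's /metrics endpoint.
# Metric names and values are invented examples, not Canonical's schema.

def render_metrics(metrics: dict) -> str:
    """Render {name: (help_text, value)} as Prometheus text format gauges."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

print(render_metrics({
    "robot_joint_temperature_celsius": ("Hottest joint temperature", 41.5),
    "robot_queue_depth": ("Items waiting on the conveyor", 2),
}))
```

Serving this string from an HTTP endpoint is all a device needs for a Prometheus server to scrape it and for Grafana dashboards to chart the fleet.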

What comes through clearly is that Canonical wants to reduce the hidden platform burden in robotics: patching, OTA infrastructure, application distribution, security hardening, ROS integration, and operations across a fleet. That is especially relevant as robotics companies move from prototype to product and face stricter requirements around uptime, software supply chain control, and regulations such as the Cyber Resilience Act. The pitch is not that Ubuntu builds the robot for you, but that it removes a large amount of undifferentiated platform work so teams can focus on the actual use case and ROI.

The discussion also touches on where the sector is heading. Humanoids are acknowledged as promising but still short of the broad, versatile efficiency often implied by the hype, while simpler mobile manipulation systems appear closer to practical value today. Filmed at Embedded World 2026 in Nuremberg, this interview is really about the software foundation under modern robotics: real-time Linux, ROS, immutable edge systems, secure app delivery, observability, and local AI inference coming together as a production stack rather than a lab demo.

source https://www.youtube.com/watch?v=aeVh5Z3tQcQ

Golioth is acquired by Canonical: Secure Bluetooth OTA, LakeDB and Indirect IoT Device Management

Posted by – March 15, 2026
Category: Exclusive videos

Golioth’s latest demo shows how a non-IP Bluetooth endpoint can be managed through a Bluetooth-to-cellular gateway while staying end-to-end encrypted all the way to the cloud. The gateway forwards traffic, but it cannot inspect payloads or own the security domain, which is a strong fit for industrial sensing, remote peripherals, and indirectly connected devices that still need fleet management, telemetry, and OTA workflows. The broader platform positions this around one control plane for connectivity, data routing, settings, and device lifecycle management. https://golioth.io/
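The interesting security property here is that the gateway relays opaque bytes it can neither read nor forge. Golioth's actual wire protocol is not shown in the video; as a minimal, stdlib-only illustration of that end-to-end principle (the key names and payload are invented for the example, and a real stack would also encrypt the payload, not just authenticate it):

```python
import hmac
import hashlib

# Shared secret known only to the endpoint and the cloud, never to the gateway.
# In a real deployment this would come from certificate-based provisioning.
DEVICE_KEY = b"per-device-secret-from-provisioning"

def endpoint_send(payload: bytes) -> bytes:
    """Endpoint attaches a MAC so the cloud can detect any in-flight modification."""
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def gateway_forward(frame: bytes) -> bytes:
    """The gateway only relays bytes; it holds no keys, so it cannot alter frames undetected."""
    return frame

def cloud_receive(frame: bytes) -> bytes:
    """Cloud verifies the MAC before trusting the payload."""
    tag, payload = frame[:32], frame[32:]
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered or forged frame")
    return payload

frame = gateway_forward(endpoint_send(b"accel-event:7.9g"))
print(cloud_receive(frame))  # b'accel-event:7.9g'
```

Flipping even one bit of the frame in transit makes `cloud_receive` reject it, which is exactly the boundary the demo describes: the gateway is a transport, not a trust anchor.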

What stands out in the interview is the combination of certificate-based onboarding, cloud-managed settings, streamed sensor data, and firmware rollout to Bluetooth devices that may roam across multiple gateways. In the demo, an accelerometer event is sent upstream, settings are pulled back down from the cloud, and the same path can be used for over-the-air updates. That maps well to real deployments where the endpoint is resource-constrained, intermittently connected, or dependent on another node for backhaul.

The Canonical angle makes the story more important than a single booth demo. Golioth announced on March 3, 2026 that it is now part of Canonical, which helps explain the focus on secure infrastructure, developer tooling, on-prem deployments, and data-sovereignty requirements alongside the managed cloud path. Filmed at Embedded World 2026 in Nuremberg, the discussion gives a practical look at how this stack could sit beside Ubuntu, open-source edge software, and enterprise IoT operations rather than acting as a narrow point product.

There is also a useful architectural point here: Golioth is not limited to Bluetooth. The interview frames Bluetooth as the first implementation of an indirectly connected device model, with the same management pattern extending to CAN, serial, Linux-class hardware, MCU targets, and potentially mesh-capable transports such as OpenThread. That makes the value less about a single radio and more about abstracting the transport layer while keeping a consistent API surface for updates, settings, observability, and device orchestration.

For teams building connected products, this is really a video about secure fleet operations at scale: using CI/CD to publish firmware, targeting subsets of deployed devices through management APIs, validating rollout status, and relying on mechanisms such as MCUboot for image integrity and rollback safety. The result is a clearer picture of how Bluetooth and other non-IP devices can be brought into a modern cloud workflow without giving up security boundaries or developer ergonomics.

source https://www.youtube.com/watch?v=JNguONmVpco

Mobilint ARIES and REGULUS edge AI, MLA400 LLM inference and multi-camera vision

Posted by – March 15, 2026
Category: Exclusive videos

Mobilint frames its edge AI story around efficiency rather than headline TOPS alone. In this booth conversation, the focus is on local inference, cost per watt, and practical deployment formats: USB devices, standalone edge boxes, low-profile PCIe cards, MXM modules, and SoC-class hardware for embedded designs. That fits Mobilint’s broader product stack around the ARIES NPU family, the REGULUS low-power SoC line, and the SDK qb software flow for model conversion and deployment. https://www.mobilint.com/

The demo is really about what edge AI looks like when it is treated as an appliance instead of a cloud extension. Mobilint shows multi-stream computer vision running fully offline, with real-time inference on several video feeds and no dependency on a datacenter link. That makes the pitch relevant for AI security, industrial monitoring, smart city analytics, and other latency-sensitive workloads where privacy, bandwidth, and predictable operating cost matter at the edge.

A big part of the discussion is about scaling from vision to LLM workloads. The speaker describes an M400-class configuration built from four accelerators, aimed at running multiple small language models concurrently and pushing into the roughly 35 to 36 billion parameter range with quantization. That lines up with Mobilint’s current direction: the MLA100 card is positioned around 80 TOPS with 16 GB LPDDR4X and 25 W TDP, while the upcoming MLA400 is presented as a quad-ARIES architecture for higher-throughput workstation and on-prem inference. In that context, the video is less about raw benchmark theater and more about usable local AI for mixed vision and language workloads.
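The 35-to-36-billion-parameter claim can be sanity-checked with simple weight-memory arithmetic. The sketch below assumes four cards with 16 GB each (the per-card figure quoted above); it counts weight storage only and ignores KV cache, activations, and runtime overhead, so it is an upper-bound feasibility check, not a Mobilint specification:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage only: params * bits / 8, expressed in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed quad-accelerator build: four cards with 16 GB each.
total_gb = 4 * 16  # 64 GB aggregate

for bits in (16, 8, 4):
    need = weight_memory_gb(36, bits)
    print(f"36B params @ {bits}-bit: {need:.0f} GB -> fits in {total_gb} GB: {need < total_gb}")
```

At 16-bit precision the weights alone (72 GB) would not fit, which is why quantization to 8-bit (36 GB) or 4-bit (18 GB) is what makes the quoted model sizes plausible on this class of hardware.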

What makes the booth interesting is the software angle behind the hardware. Mobilint keeps coming back to quantization, compiler tooling, runtime integration, and model adaptation, because edge NPUs live or die by how well they map real models rather than synthetic demos. Its SDK qb is built around framework support for PyTorch, TensorFlow, TFLite and ONNX, with optimization and Int8-oriented deployment aimed at preserving model accuracy while fitting tighter memory and power budgets. That is the practical layer that turns AI silicon into deployable embedded compute.
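SDK qb's internals are not shown in the video, but the Int8 step it performs is a standard technique. As a generic sketch of asymmetric affine quantization (the values are invented; this is the textbook scheme, not Mobilint's implementation):

```python
def quantize_int8(values):
    """Asymmetric affine quantization: real ~= scale * (q - zero_point)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # 255 steps span the int8 range -128..127
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate real values; error is bounded by one quantization step."""
    return [(qi - zero_point) * scale for qi in q]

q, scale, zp = quantize_int8([-1.0, 0.0, 2.0])
print(q, dequantize(q, scale, zp))
```

The point the interview keeps returning to is visible even at this scale: each weight drops from 32 bits to 8, a 4x memory saving, and accuracy survives only if the scale and zero point are chosen per tensor (or per channel) against representative calibration data.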

There is also a broader roadmap underneath the interview. Mobilint has recently been talking about both the ARIES and REGULUS NPU families, with REGULUS targeting compact on-device AI at about 10 TOPS under 3 W and support for 4K video pipelines, while products such as MLX-A1 package the accelerator into a more complete edge box. Seen from Embedded World 2026 in Nuremberg, the message is clear: Mobilint wants to compete where offline inference, multi-camera analytics, quantized LLMs, and power-aware embedded deployment matter more than a brute-force datacenter silicon roadmap.

source https://www.youtube.com/watch?v=ylvPT1Mlv_g

Toradex Leno, OSM, Verdin i.MX95 and Aquila AM69 edge AI modules

Posted by – March 15, 2026
Category: Exclusive videos

Toradex is positioning its 2026 lineup around a wider spread of system-on-modules, from very small Leno and OSM designs up to higher-performance Aquila and Verdin families. The key message here is scalability: compact modules for cost-sensitive, high-volume products, and larger pin-compatible platforms for projects that need more I/O, compute, graphics, networking or edge AI. That makes the portfolio relevant for gateways, HMIs, robotics and machine-vision devices, while Toradex keeps leaning on software, documentation and long product life as part of the pitch. https://www.toradex.com/

A big part of the story is the move toward smaller solderable form factors. The 30×30 mm Leno and OSM modules shown here are aimed at designs where pick-and-place assembly, vibration resistance and BOM control matter as much as raw performance. In practice, that means customers can start with a compact module for volume production, while still staying close to the Toradex ecosystem instead of rebuilding everything around a custom board too early.

Further up the stack, Toradex is expanding around NXP’s i.MX 95 and TI’s AM69/TDA4 class of processors. That opens the door to more demanding embedded Linux workloads such as multi-camera vision, industrial control, visual inspection, people counting, robotics and autonomous mobile platforms. In that part of the range, the attraction is not just CPU performance but also integrated NPU, ISP, TSN-capable Ethernet, CAN FD, display pipelines and the kind of mixed real-time plus application processing that industrial OEMs increasingly want at the edge.

The demo also points to how Toradex wants customers to move from module to full platform. Carrier boards such as Clover for Aquila target dense vision and robotics use cases, while industrial gateway products extend the company further into ready-to-deploy edge infrastructure rather than only selling compute modules. That is where the value proposition becomes more complete: SOM, carrier board, BSP, Linux distribution, OTA updates, container workflow and cloud fleet management all tied together in one development path.

What makes the pitch credible is that it is less about a single chip and more about a migration strategy across form factors and price points. The video was filmed at Embedded World 2026 in Nuremberg, and the theme throughout is clear: tiny modules that can still expose Ethernet, display and CAN, midrange platforms built around i.MX 95, and higher-end edge AI with Aquila AM69, all anchored by Torizon OS and Toradex support. The result is a portfolio aimed at companies that need to prototype quickly, then scale without having to rebuild their software foundations.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=gvqJLv8yPLM

Blumind AMPL Analog AI at 60 Microwatts for Always-On Audio, Edge Wearables and Vision

Posted by – March 15, 2026
Category: Exclusive videos

Blumind is positioning analog AI as a far-edge compute architecture rather than another digital accelerator story. In this interview, the company outlines how its AMPL platform and BM110 direction target always-on audio inference with extremely low system power, low latency and a direct analog signal path that avoids the usual ADC, DAC and high-speed clock overhead of conventional embedded AI. That makes the pitch especially relevant for wearables, smart glasses, earbuds, remotes and other battery-limited devices where keyword spotting has to stay active all day without burning through the cell. https://blumind.ai/

The key technical claim here is not raw TOPS but energy per inference. Blumind describes a total always-on audio solution around 50 to 60 microwatts, with the chip itself at roughly 20 microamps at 1.8 volts and an analog microphone adding about 20 microamps at 1 volt. In practical terms, that shifts edge AI from “can it run” to “can it remain on continuously” for wake-word detection and other audio-triggered interfaces, which is where always-listening products live or die.
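Those figures are internally consistent, which is worth verifying with basic P = V x I arithmetic. The chip and microphone numbers below come from the interview; the coin-cell figure at the end is a hypothetical assumption added purely to show what a 56 uW budget means in practice:

```python
def power_uw(current_ua: float, voltage_v: float) -> float:
    """P = V * I; microamps times volts gives microwatts directly."""
    return current_ua * voltage_v

chip_uw = power_uw(20, 1.8)  # 36 uW for the inference chip (20 uA at 1.8 V)
mic_uw = power_uw(20, 1.0)   # 20 uW for the analog microphone (20 uA at 1 V)
total = chip_uw + mic_uw
print(total)  # 56.0 uW, inside the quoted 50-60 uW always-on budget

# Hypothetical sanity check: a 100 mAh cell at a nominal 3 V holds ~300 mWh.
battery_uwh = 100 * 3.0 * 1000
hours = battery_uwh / total  # roughly 5300 hours, i.e. months of always-on listening
```

That last line is the real argument: at this power level, continuous wake-word detection stops being the battery-life bottleneck and starts looking like a rounding error next to the radio and display.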

What makes the approach interesting is that the neural network is implemented as dedicated analog hardware rather than as software running on an MCU, CPU, RISC-V or Arm core. The company frames this as a fall-through analog compute network optimized for robustness across process, voltage and temperature variation, while keeping latency low and silicon efficiency high. For embedded engineers, that means a very different design trade-off from standard DSP-plus-microcontroller voice pipelines, especially when standby budget is more important than programmability.

The roadmap goes beyond keyword spotting. Blumind says the same analog architecture can scale from RNN-style audio and time-series workloads toward CNN-based vision tasks and eventually smaller attention or transformer-class models running locally on edge devices. That lines up with the company’s broader messaging around all-analog neural processing in standard CMOS and its push to make the technology available not only as its own ASSP silicon but also as licensable IP for future SoCs and microcontrollers. Filmed at Embedded World 2026 in Nuremberg, this is really a look at how analog inference could carve out a specific role inside next-generation edge AI stacks.

source https://www.youtube.com/watch?v=JWvze2MhVsc

Edge AI Foundation Global Edge AI Community, San Diego 2026, 60+ Partners

Posted by – March 15, 2026
Category: Exclusive videos

Edge AI Foundation is presented here less as a single company than as a coordination layer for the wider edge AI ecosystem: silicon vendors, module makers, toolchains, embedded OEMs, startups, researchers, and system builders working around on-device inference, AIoT, computer vision, sensor fusion, and low-latency AI deployment. The interview frames the foundation as a place where competitors still collaborate, which is a useful way to understand today’s market: edge AI is moving too fast for isolated roadmaps, so shared events, workshops, and cross-vendor discussion have become part of the engineering stack. https://www.edgeaifoundation.org/

What stands out is the mix of audiences and technologies. This is not only for executives or keynote speakers, but also for engineers, program managers, researchers, and developers dealing with real deployment issues such as model optimization, embedded Linux, MCU and MPU design choices, heterogeneous compute, NPU roadmaps, power efficiency, industrial vision, and the tradeoff between cloud AI and local inference. The point is not just to talk about AI in general, but to connect practical embedded workflows with current edge AI architectures.

The discussion also highlights how the foundation’s calendar reflects the speed of the sector. The upcoming San Diego event is described as a three-day meeting point with partner exhibition tables, workshops, keynote sessions, and a research track, which fits the broader shift toward tighter interaction between commercial edge AI platforms and academia. That matters because edge AI is now shaped as much by deployment constraints like thermals, bandwidth, privacy, deterministic response, and cost per watt as by raw model capability.

Another useful detail is the partner network itself. The transcript references a community spanning large established players and newer entrants, and that is increasingly where edge AI momentum is coming from: partnerships between silicon companies, board vendors, software ecosystems, and vertical solution providers. Filmed at Embedded World 2026 in Nuremberg, the interview captures that industry mood well, with the foundation positioning itself as a neutral meeting ground for the people building the next generation of embedded AI systems.

source https://www.youtube.com/watch?v=R7_x6TAypg0

Geniatech Edge AI and ePaper at Embedded World 2026: i.MX95, RK3588, Kinara, Hailo

Posted by – March 15, 2026
Category: Exclusive videos

Geniatech presents a broad ARM-based embedded portfolio built around edge AI hardware, BSP-level software work, and customization services rather than a single demo board. The video focuses on how the company combines SoMs, SBCs, gateways, AI boxes and ePaper platforms with kernel, SDK and API support, so customers can move from evaluation to deployment without rebuilding the whole stack. The central theme is local inference on compact ARM systems, where Geniatech positions quantized and compressed LLM and VLM models as practical on-device workloads instead of cloud-only tasks. https://www.geniatech.com/

A key part of that story is heterogeneous edge AI acceleration. In the booth tour, Geniatech shows NXP and Rockchip based platforms paired with M.2 AI modules and explains the split between computer-vision accelerators and LLM-oriented parts. That maps well to the company’s current platform direction: i.MX95 systems with optional M.2 expansion, RK3588 designs, and accelerator options such as Kinara for transformer-style workloads or Hailo for CNN-heavy vision pipelines. The interesting angle here is not just raw TOPS, but memory footprint, quantization, driver porting, and how much of the model can realistically stay on the device.

The demo of a local multimodal assistant makes that concrete. A camera-equipped edge box estimates who is in front of it, feeds selected prompts into a locally deployed model, and returns results every few seconds without a cloud round trip. That matters for privacy, latency, and deterministic deployment in retail, kiosks, transport, and industrial settings. Geniatech’s role in this stack is mostly the infrastructure layer: stable ARM hardware, Linux BSP work, accelerator integration, conversion toolchains, NPU APIs, and support for customers training or adapting their own models.

The second half of the video shifts to ePaper, and this is where Geniatech looks unusually vertically integrated. Instead of treating ePaper as just a panel sourcing business, the company talks about its own TCON and software optimization, faster refresh behavior, and end-to-end system design for signage. The bus-stop example, multi-panel drive capability, indoor-light energy harvesting concepts, and wide-temperature operation point to transport and outdoor display use cases where low power draw matters as much as color or refresh performance.

Filmed at Embedded World 2026 in Nuremberg, the booth tour shows Geniatech as a company trying to connect two markets that are starting to overlap: edge AI compute and ultra-low-power visual interfaces. On one side, there is ARM edge hardware with i.MX95, RK3588, AI modules, local LLM support and carrier-board customization. On the other, there are Spectra 6 style color ePaper and alternative reflective display approaches for signage, pricing, and information systems. Put together, it is a practical embedded roadmap for devices that need local intelligence, low power, industrial design flexibility, and long lifecycle support.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=xwuZf8M2k_E

Microchip Booth Tour at Embedded World 2026: Edge AI, 10BASE-T1S, RISC-V, ADAS, Security

Posted by – March 14, 2026
Category: Exclusive videos

Microchip’s booth tour is less about a single flagship chip and more about how the company is stitching together the embedded stack: edge AI, industrial networking, automotive camera links, HMI, security and power electronics. The demos show Microchip positioning itself as a broad platform vendor, not just a microcontroller supplier, with current emphasis on AIoT, 10BASE-T1S, TSN, Zephyr, Linux, secure MCUs and MPUs, and reference designs that shorten evaluation cycles for OEMs. https://www.microchip.com/en-us/about/events-info/embedded-world

The access-control and cockpit demos reflect two themes that now run through a lot of embedded design: local inference and human-machine interaction. Facial recognition with liveness detection, round-display touch interfaces, and color-sorting machine vision are shown here not as isolated gimmicks but as edge workloads that need low latency, deterministic control and a practical HMI layer. That also fits with Microchip’s current demo lineup around graphics, touch, camera systems and AI at the edge.

A stronger technical thread in the video is networking. The shop-floor setup points to Single Pair Ethernet, especially 10BASE-T1S, as a path away from older fieldbus designs toward IP-based industrial systems with simpler wiring, real-time behavior and easier IT/OT integration. Microchip is explicitly framing this around industrial Ethernet migration, TSN-capable architectures, open-source software stacks and modular evaluation hardware built around boards that can be quickly reconfigured for demos or first customer trials.

Security is treated here as infrastructure rather than a feature checkbox. The tour touches secure boot, secure firmware update, key provisioning, post-quantum cryptography and Cyber Resilience Act readiness, including Microchip’s security portfolio and its work with Kudelski IoT keySTREAM for device provisioning and update workflows. In practice, that makes the video relevant to anyone designing industrial or edge products that now need lifecycle security, not just network connectivity and compute.

The automotive and high-performance pieces round out the picture: ASA-ML serializer/deserializer links for ADAS camera paths into Qualcomm Ride platforms, FPGA-based sensor fusion around AI accelerators, MICROSAR IO with Vector for compact ECUs, and a RISC-V story spanning PolarFire SoC FPGA and the newer PIC64 family. Taken together, the booth shows Microchip pushing toward distributed intelligence where control, networking, security and inference sit closer to the machine, a message delivered from the company’s stand at Embedded World 2026 in Nuremberg.

source https://www.youtube.com/watch?v=2bXmkl934mI

JetBrains Embedded Development with CLion, AI Agents, ESP32, ST, Zephyr, Local AI

Posted by – March 14, 2026
Category: Exclusive videos

JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea is a consistent developer experience across targets such as Espressif and STMicroelectronics, with support for frameworks like Zephyr and modern build flows around CMake, so firmware work can happen inside one environment instead of being spread across multiple disconnected tools. https://www.jetbrains.com/clion/embedded/

A big part of that story is AI, but in a practical embedded context rather than as a generic chatbot layer. JetBrains shows agent support directly inside the IDE, including Junie, external agents, MCP connectivity and bring-your-own-key workflows, with the emphasis on tool grounding and agent orchestration rather than just the raw model. That matters for firmware teams because the useful part is not only code generation, but being able to trigger project-aware actions such as rebuilds, refreshes, navigation and other IDE-native operations in a controlled way.

The interview also points to a broader shift in embedded engineering: local and on-premises AI is becoming relevant for teams that cannot send code or design data to public cloud services. JetBrains is clearly leaning into that requirement, showing local AI running on NVIDIA hardware and discussing private deployment models for LLM-backed development. For regulated sectors and larger product teams, that makes the IDE part of a secure internal toolchain rather than a thin client to an external service.

What makes the booth discussion interesting is that it connects classic embedded pain points with current software trends. CLion is presented as a bridge between microcontroller and SoC projects, vendor ecosystems, RTOS-oriented work and newer AI-assisted flows, while keeping the core promise around productivity, code intelligence and debugging. Filmed at Embedded World 2026 in Nuremberg, the video captures how JetBrains is positioning embedded work alongside mainstream software development instead of treating it as a separate niche.

The result is a view of embedded development where the IDE becomes the integration layer for toolchains, frameworks, AI agents and secure deployment options. Rather than chasing a single board demo, JetBrains is making the case that teams at automotive and industrial OEMs need a stable, extensible workspace that can handle Zephyr, ESP-IDF, STM32-class projects, CMake-based builds, Rust support and agentic coding in the same place. That makes this less about one feature and more about how firmware teams may want to structure their workflow over the next few years.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=cYpC1drBfqg

TeleCANesis at Embedded World 2026: Hub for CAN, Modbus, I2C, Cloud, HMI and AI Data Routing

Posted by – March 14, 2026
Category: Exclusive videos

TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a signal layout changes. In practice, the value is less about “moving data” in the abstract and more about preserving engineering time for product logic, analytics and HMI work. https://telecanesis.com/

What stands out in this demo is the workflow refinement inside the web-based Hub. Codecs are becoming system-wide rather than tied to a single capsule, which makes reuse much cleaner across a blueprint. The new imports flow also looks more practical for DBC-driven design: engineers can ingest a file once, label it, selectively pull only the required messages into each capsule, and later re-import changed definitions instead of rebuilding the whole route map. That is a meaningful shift for teams dealing with evolving vehicle, battery or industrial bus definitions over time.
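TeleCANesis generates this mapping code from DBC definitions rather than having engineers write it; as a generic sketch of what a DBC-style signal codec does under the hood (the signal layout below is invented for illustration and is not from the demo):

```python
def decode_signal(frame: bytes, start_bit: int, length: int,
                  scale: float, offset: float) -> float:
    """Extract a little-endian unsigned raw value from a CAN payload
    and apply the DBC-style linear mapping: physical = raw * scale + offset."""
    raw = int.from_bytes(frame, "little") >> start_bit
    raw &= (1 << length) - 1
    return raw * scale + offset

# Invented layout: pack voltage in bits 0-15 at 0.01 V/bit,
# cell temperature in bits 16-23 with a -40 degC offset.
frame = (3712).to_bytes(2, "little") + bytes([65]) + bytes(5)  # 8-byte CAN payload
volts = decode_signal(frame, 0, 16, 0.01, 0.0)     # 37.12 V
temp_c = decode_signal(frame, 16, 8, 1.0, -40.0)   # 25.0 degC
```

The pain the Hub addresses is that every one of these start-bit, length, scale and offset tuples changes whenever a DBC file is revised, which is exactly why re-importing changed definitions instead of rebuilding the route map by hand matters.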

The use case described here is a good fit for battery systems, domain controllers and other heterogeneous embedded environments where one internal data model has to feed cloud services, databases, HMIs and mobile apps in different formats. Rather than expose every raw signal upstream, TeleCANesis lets developers normalize data internally and publish only the subset that matters to customers or backend services. Filmed at Embedded World 2026 in Nuremberg, the demo also hints at where the product is moving next, with broader plug-in support, updated ingestion in the coming 1.1 release, and recent additions such as CANopen and serial connector plug-ins.

There is also a practical deployment story behind it. The runtime is presented as largely platform-agnostic, with only a thin OS and compiler abstraction layer needing adaptation, which makes ports to new ARM or MCU targets much faster than a typical middleware stack. The company points to support around QNX, Raspberry Pi 4 and 5, Yocto Scarthgap, and integration paths toward HMI frameworks such as Qt, Slint, GL Studio and Unity. That combination makes the tool relevant not only for automotive-style gateways but also for industrial control, robotics and connected equipment.

The AI angle is still early, but the direction makes sense: use AI to inspect an existing project, identify protocols and messages, and pre-build the TeleCANesis blueprint so engineers start from a working draft instead of a blank canvas. For teams building software-defined machines, cloud-connected controllers or AI-assisted products, that could make TeleCANesis a useful bridge between fieldbus data, application logic and agent workflows. The core idea is straightforward: stop hand-coding translation layers every time the system grows, and treat connectivity as a configurable part of the architecture instead of a recurring rewrite.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=MvX0zdWJ0fY