ProvenRun ProvenCore EAL7, Automotive Ethernet Protocol Break, Formal OS, ProvenHSM, STM32H5, PQC

Posted by – March 16, 2026
Category: Exclusive videos

ProvenRun is making a case for embedded security that starts below the application layer, with a mathematically verified trusted base rather than another add-on middleware stack. In this interview, the company explains how ProvenCore, its formally proven secure OS and TEE, is used to build high-assurance systems for automotive, avionics, defense, microcontrollers and cloud security, with the goal of reducing attack surface, simplifying certification and keeping long lifecycle products maintainable. https://provenrun.com/

A big part of the discussion is the shift to software-defined vehicles and zonal automotive Ethernet. ProvenRun’s protocol-break approach fully deconstructs and reconstructs traffic between exposed domains and safety-critical zones, rather than relying only on segmentation. That matters for in-vehicle infotainment, connectivity modems and ADAS paths, where 1GbE and faster links now carry far more critical traffic than older in-car networks ever did.
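The protocol-break idea can be sketched in a few lines: instead of forwarding raw bytes between zones, the boundary parses each frame into typed fields, validates them, and emits a freshly constructed frame, so malformed traffic is dropped rather than passed through. The frame layout, message IDs and limits below are hypothetical, chosen only to illustrate the pattern; ProvenRun's actual implementation runs on ProvenCore and is not shown here.

```python
import struct

# Hypothetical frame layout for illustration: 1-byte message ID,
# 2-byte big-endian payload length, then the payload itself.
HEADER = struct.Struct(">BH")
ALLOWED_IDS = {0x01, 0x02}      # messages the critical zone may receive
MAX_PAYLOAD = 64                # upper bound enforced at the break

def protocol_break(frame: bytes):
    """Parse an inbound frame into fields, validate them, then re-emit
    a freshly built frame. Raw bytes never cross the boundary, so
    malformed, unknown or oversized input is rejected, not forwarded."""
    if len(frame) < HEADER.size:
        return None
    msg_id, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    if msg_id not in ALLOWED_IDS or len(payload) != length or length > MAX_PAYLOAD:
        return None                                  # drop instead of forward
    return HEADER.pack(msg_id, length) + payload     # reconstruct from parsed fields
```

A well-formed frame comes out rebuilt field by field, while a frame with an unknown message ID or inconsistent length simply never reaches the protected zone.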

The technical differentiator is formal methods. ProvenRun says ProvenCore remains the only operating system certified at Common Criteria EAL7, and that foundation is then reused for trusted applications such as secure storage, cryptography, PKCS#11, VPN, network stacks, secure firmware update and protocol filtering. The company also highlights compatibility with standard embedded security ecosystems including GlobalPlatform, PSA-style APIs, Android trusted applications and post-quantum cryptography work with CryptoNext.

The interview also touches the microcontroller side, where ProvenCore-M is positioned as a secure RTOS and TEE for Armv8-M class devices, including ST deployments around STM32 security architectures. That gives developers a pre-certified route to TrustZone-based isolation, secure services and easier product evaluation without having to design every security primitive from scratch. Filmed at Embedded World 2026 in Nuremberg, the demo shows how that same security-by-design philosophy is now being stretched from MCU roots into automotive gateways and trusted edge compute.


On the cloud side, ProvenRun is pushing ProvenHSM and ProvenBox as remotely manageable hardware-backed trust anchors for key management, crypto services and customizable secure applications. The interesting angle is not just HSM throughput, but compositional certification, cloud-native administration, FPGA-assisted crypto acceleration and a roadmap that includes PQC readiness. Overall, this is a useful look at how embedded cybersecurity is moving toward verifiable isolation, certifiable trusted execution and longer-term lifecycle assurance across both edge and data center scale.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=Cmz3ENmAYPs

eSOL eMCOS POSIX RTOS, ROS Middleware, Multicore ARM Cortex and RISC-V Embedded Full Stack

Posted by – March 16, 2026
Category: Exclusive videos

eSOL positions itself as a full-stack embedded software partner rather than a vendor selling only one RTOS layer. The core message in this interview is integration: a production-ready platform that combines the eMCOS real-time operating system, a POSIX-compliant profile, middleware for networking and robotics-oriented workflows, plus engineering services that extend from bring-up to certification. That matters for teams trying to reduce supplier fragmentation and keep one accountable path from hardware integration to deployed code. https://www.esol.com/

A key theme is the gap between prototype software and certifiable production systems. The demo points to ROS and model-based toolchains as part of the ecosystem, but the argument from eSOL is that open robotics frameworks alone are not always enough once determinism, safety, and real-time behavior become mandatory. In that context, eMCOS POSIX is presented as a way to preserve familiar POSIX development models while moving toward tighter scheduling control, certification targets, and system-level integration for embedded products.

What makes the platform interesting technically is scalability across compute classes. In the demo, the same runtime approach spans ARM Cortex-M, ARM Cortex-R, ARM Cortex-A and also RISC-V, reflecting eSOL’s long-standing focus on multi-core and many-core embedded architectures. That gives the interview a broader angle than a simple RTOS pitch: it is really about one software foundation that can move from small microcontrollers to larger heterogeneous SoCs without forcing a complete tooling reset or a redesign of the application stack at every step.

Recent eSOL direction adds useful context to what is shown here. The company has been expanding its Full Stack Engineering model in Europe, and its eMCOS POSIX profile gained ISO 26262 ASIL D compliance in 2025, which reinforces the interview’s emphasis on automotive-grade real-time software. eSOL has also been showing eMCOS in software-defined vehicle workflows, including virtual-platform work around Renesas R-Car, so the message here fits a wider industry push toward software-first development, safety partitioning, and faster validation at scale.

Overall, this is less about Linux replacement rhetoric and more about where a deterministic POSIX RTOS fits when embedded teams need predictable latency, certification support, multicore scaling, and one engineering interface across the stack. The interview was filmed at Embedded World 2026 in Nuremberg, and it frames eSOL as a company targeting automotive, robotics, industrial and medical designs where middleware compatibility, long-term support, and integration ownership are often worth as much as raw kernel features in practice.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=iEaaI6PVweQ

ATGBICS at Embedded World 2026: Compatible Transceivers, Legacy Optics, 800G QSFP, DAC and AOC

Posted by – March 16, 2026
Category: Exclusive videos

ATGBICS is positioning itself as a practical supplier for industrial network connectivity rather than just another optics reseller. The main story here is compatibility at scale: transceivers for more than 300 vendor ecosystems, support for legacy and current modules, and a business model built around keeping networks running when original OEM parts have gone end-of-life. That matters in embedded and industrial systems where redesigning around a discontinued optical part can be far more expensive than the module itself. https://atgbics.com/

A big part of the discussion is obsolescence management. ATGBICS describes a process where the bill of materials is locked, prototype samples can be validated against a customer’s hardware, and repeat orders can be built with the same chipset, laser, and configuration that was previously qualified. For industrial Ethernet, long-lived automation platforms, transport systems, and ruggedized infrastructure, that kind of traceability can be more important than chasing the newest data rate.

The interview also makes clear that this is not only about old through-hole optics from the 1990s. The portfolio shown moves from 1×9 and 2×5 legacy transceivers to 1G and 10G workhorse SFP-class modules, then all the way up to high-bandwidth QSFP and direct attach cable options used in data center and AI networking. The interesting angle is that the same company is covering both ends of the market: replacement parts for installed industrial gear and compatible modules for newer high-density switching environments.

What gives the video some depth is the manufacturing and customization side. ATGBICS talks about working with factory partners in Taiwan and China, offering certificates of conformity, custom firmware, private labeling, and barcode-level branding for OEMs building their own switch, router, or PoE product lines. Filmed at Embedded World 2026 in Nuremberg, the interview shows how optical connectivity is increasingly tied to supply-chain resilience, second-source qualification, and lifecycle planning, not just raw bandwidth.

The result is a useful look at a part of embedded infrastructure that usually stays in the background. Instead of focusing on headline silicon, this conversation is about pluggable optics, DACs, AOCs, OEM-compatible coding, industrial temperature requirements, and the economics of keeping deployed systems alive for years longer than the original vendor may support. That makes the video relevant for engineers, sourcing teams, EMS partners, and network equipment makers dealing with both legacy maintenance and forward migration.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=h5iWToDbxh4

Weebit Nano ReRAM for Edge AI, Embedded NVM, Near-Memory Compute and SoC Integration

Posted by – March 16, 2026
Category: Exclusive videos

Weebit Nano is positioning ReRAM as an embedded non-volatile memory alternative to flash for SoCs that need faster writes, lower power, better endurance, and easier scaling below 28 nm. In this interview, CEO Coby Hanoch explains why the company focuses on embedded NVM rather than bulk storage: the target is firmware, security keys, calibration data, AI coefficients, and instant-on system behavior integrated directly on the same die as compute and control logic. https://www.weebit-nano.com/

The key technical point is that Weebit’s ReRAM is a back-end-of-line technology, built between metal layers rather than in the silicon substrate. That matters for mixed-signal and analog-heavy designs, because it avoids many of the layout and process compromises associated with embedded flash. Hanoch describes the cell in simple terms: voltage moves ions to form or break a conductive path, switching between low and high resistance states that represent stored data.
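As a toy illustration of the resistance-coded storage Hanoch describes (the resistances, read voltage and threshold below are illustrative values only, not Weebit cell parameters): a read senses current at a small fixed voltage and compares it against a threshold sitting between the two resistance states.

```python
# Toy model of a ReRAM bit-cell read, for illustration only.
LOW_R, HIGH_R = 10e3, 1e6      # illustrative "set" / "reset" resistances (ohms)
READ_V = 0.2                   # small read voltage that does not disturb the cell

def read_bit(resistance_ohms, threshold_amps=1e-6):
    """Sense current through the cell at READ_V and map it to a bit:
    the low-resistance (formed filament) state conducts well and reads 1,
    the high-resistance (broken filament) state reads 0."""
    current = READ_V / resistance_ohms
    return 1 if current > threshold_amps else 0
```

With these numbers the low-resistance state draws 20 µA and reads as 1, while the high-resistance state draws 0.2 µA and reads as 0; real sense amplifiers do the same comparison in analog circuitry.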

For edge AI, the pitch is especially clear. If model coefficients can live in embedded non-volatile memory on the AI chip, designers can avoid a separate external flash device, reduce board cost, shorten boot time, cut power draw, and remove a security exposure created when weights are copied at startup. That fits near-memory compute, and it also points toward in-memory compute, where analog-style ReRAM arrays may eventually support more efficient AI inference for gesture recognition, sensor workloads, and always-on edge devices.

The interview also shows why this matters beyond AI. Embedded ReRAM is relevant for power management ICs, MCUs, IoT nodes, automotive electronics, and aerospace-oriented designs that need retention without power, robust endurance, and tolerance for harsh conditions. Weebit highlights qualification work for automotive temperature ranges, radiation immunity as a useful characteristic, and the benefit of integrating memory without disturbing the optimal analog portion of a chip.

Filmed at Embedded World 2026 in Nuremberg, the discussion captures a memory company moving from R&D into commercialization. Weebit already talks about customers such as onsemi and Texas Instruments, growing capacity targets in the embedded range, and a roadmap that connects embedded NVM with future AI architectures. The result is not “more storage” in the consumer sense, but a more integrated memory block for edge silicon where power, cost, area, boot latency, and security all matter at once.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=dn82VxEX4aI

Tektronix IsoVu TIVP, TICP and 7 Series DPO for SiC, GaN and power integrity

Posted by – March 16, 2026
Category: Exclusive videos

Tektronix focuses here on one of the harder measurement problems in modern power electronics: capturing fast, high-voltage switching behavior without corrupting the waveform through probe loading, ground noise, or isolation limits. The interview centers on the second-generation IsoVu isolated voltage probe, where optical power delivery over glass fiber lets the probe head stay electrically isolated while still measuring very small and very fast events. That matters for SiC and GaN power stages, where dv/dt, common-mode noise, and switching transients quickly expose the limits of conventional probing. https://www.tektronix.com/

A key point in the demo is flexibility at the probe tip. The discussion mentions interchangeable tips spanning low-voltage work up to kilovolt-class measurements, which fits the broader need to move between gate-drive, shunt, switch-node, and bus measurements without rebuilding the whole setup. Tektronix also highlights its isolated current probing, including an RF link architecture with no direct physical connection inside the probe path, aimed at very high common-mode rejection. In practice, this is the kind of tooling engineers need for double-pulse test setups, power integrity analysis, wide-bandgap converter design, and validation of fast-switching inverter stages.

What makes the video interesting is that it is less about headline specs and more about measurement credibility. The screen demo compares a reference voltage with current captured through the isolated current probe, showing how Tektronix is positioning these probes as part of a complete power integrity workflow rather than as standalone accessories. That fits a broader shift in lab instrumentation, where probe architecture, tip ecosystem, connection standards, and noise rejection are becoming just as important as oscilloscope bandwidth. The clip was filmed at Embedded World 2026 in Nuremberg, where this kind of test and measurement detail is especially relevant for embedded power, automotive, industrial control, and energy conversion teams.

The booth tour also briefly points to Tektronix’s wider high-speed instrumentation stack, including the 7 Series DPO at up to 25 GHz and 125 GS/s, plus the DPO70000SX platform, which Tektronix lists up to 70 GHz and 200 GS/s for very high-speed serial, PCIe, memory, and signal-integrity work. So the story here is really two layers of debug: precision isolated probing for power devices such as SiC and GaN MOSFETs, and high-bandwidth scope platforms for the digital and interconnect side of the same system.

source https://www.youtube.com/watch?v=kev976LKlLg

RED Semiconductor VISC edge AI matrix math IP, RISC-V coprocessor for vision, crypto

Posted by – March 16, 2026
Category: Exclusive videos

RED Semiconductor describes an edge AI approach built around matrix math rather than a conventional CPU-first design. The pitch here is a licensable processor IP block that combines a small RISC-V front end with a dedicated math engine, aiming to reduce data movement, power draw, and latency for workloads that need fast local inference rather than cloud-scale throughput. That makes the discussion relevant for embedded vision, cryptography, sensor processing, and tightly bounded real-time edge AI work. https://redsemiconductor.com/

The architecture, called VISC, is presented as a coprocessor rather than a full standalone compute platform. In practical terms, RED is targeting the part of an SoC where matrix multiply, matrix-vector operations, and other repetitive mathematical kernels dominate execution time. The company’s message is that GPUs bring graphics-era overhead, while a conventional NPU may still be too large or too fixed for some deeply embedded deployments, so VISC is meant to sit closer to the math-heavy bottleneck at lower silicon cost.
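For context, the hot loop RED is describing is the multiply-accumulate kernel at the heart of matrix-vector work; a naive version in plain Python makes clear why such kernels dominate execution time and why dedicated hardware helps (this is a generic sketch of the workload class, not VISC code):

```python
def matvec(m, v):
    """Naive matrix-vector product: for an n x n matrix this performs
    n*n multiply-accumulate steps, the repetitive dense arithmetic that
    dominates many inference and crypto kernels and that a math
    coprocessor would take off the host CPU."""
    assert all(len(row) == len(v) for row in m)
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# matvec([[1, 2], [3, 4]], [5, 6]) -> [17, 39]
```

The point of an accelerator is that this entire nested loop becomes one offloaded operation, with the data staying close to the math units instead of shuttling through the CPU's register file.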

A key part of the story is software compatibility. RED uses RISC-V as the entry point into toolchains and developer workflows, but the engine itself is not tied only to RISC-V systems and can be integrated alongside Arm or other heterogeneous processor mixes. The company also stresses firmware-level customization, so an OEM can tune the accelerator for a specific vision model, cryptographic routine, or algorithmic pipeline instead of treating AI acceleration as a generic black-box block in the stack.

What stands out in the interview is the emphasis on edge-specific constraints: low power, low memory traffic, fast startup, and deterministic response. RED talks less about large language models and more about vision inference, medical imaging style search, secure compute, and sensor-driven applications where milliseconds, energy budget, and local autonomy matter more than raw datacenter-class scale. That focus fits the broader Embedded World conversation around RISC-V, edge inference, and domain-specific acceleration in Nuremberg during 2026.

The company positions the IP as tileable, licensable, and suitable for inclusion in a broader SoC that may already contain CPUs, vector processors, or other accelerators. RED has also been framing VISC publicly around edge AI, cryptography, and secure processing, with recent company updates pointing to an expanding RISC-V and edge AI roadmap. This video gives a useful look at how RED wants to differentiate: not by replacing every processor in a design, but by offloading the dense mathematical core that defines many embedded AI workloads.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=xYVgQoCru_4

Canonical Robotics: Ubuntu Core, ROS, real-time control and fleet observability

Posted by – March 15, 2026
Category: Exclusive videos

Canonical is positioning Ubuntu as infrastructure for robotics rather than just a general Linux distro. In this demo, the focus is on a real-time stack where Ubuntu’s real-time kernel drives a vision-guided pick-and-place flow: AI detects shapes on a moving conveyor, a 3D scene mirrors the process, and the arm adapts with a safety slowdown when a hand enters the zone. It is a useful example of how deterministic control, perception, and simulation can be tied together in one deployment without turning the OS itself into a separate engineering project. https://canonical.com/

A second thread is the Bosch Rexroth integration around ctrlX AUTOMATION, which builds on Ubuntu Core. That matters because Ubuntu Core brings an immutable design, transactional over-the-air updates, rollback, and snap-based packaging with strict confinement. For industrial robotics and machine control, that combination is increasingly relevant: vendors want modular application delivery, cleaner lifecycle management, and a clearer path to compliance and long-term maintenance instead of carrying a custom Linux platform on their own.

The most forward-looking part of the interview is Canonical’s push toward fleet observability and deployable AI components. The planned open-source platform connects device fleets to dashboards and telemetry pipelines using Grafana, Loki, Prometheus, Juju, and charms, which fits the reality of robotics deployments where logs, metrics, and remote supervision matter as much as the robot demo itself. Canonical also points to inference snaps, making it easier to package and run models such as Gemma 3 or Nemotron on local compute for edge AI and physical AI workflows.

What comes through clearly is that Canonical wants to reduce the hidden platform burden in robotics: patching, OTA infrastructure, application distribution, security hardening, ROS integration, and operations across a fleet. That is especially relevant as robotics companies move from prototype to product and face stricter requirements around uptime, software supply chain control, and regulations such as the Cyber Resilience Act. The pitch is not that Ubuntu builds the robot for you, but that it removes a large amount of undifferentiated platform work so teams can focus on the actual use case and ROI.

The discussion also touches on where the sector is heading. Humanoids are acknowledged as promising but still short of the broad, versatile efficiency often implied by the hype, while simpler mobile manipulation systems appear closer to practical value today. Filmed at Embedded World 2026 in Nuremberg, this interview is really about the software foundation under modern robotics: real-time Linux, ROS, immutable edge systems, secure app delivery, observability, and local AI inference coming together as a production stack rather than a lab demo.

source https://www.youtube.com/watch?v=aeVh5Z3tQcQ

Golioth is acquired by Canonical: Secure Bluetooth OTA, LakeDB and Indirect IoT Device Management

Posted by – March 15, 2026
Category: Exclusive videos

Golioth’s latest demo shows how a non-IP Bluetooth endpoint can be managed through a Bluetooth-to-cellular gateway while staying end-to-end encrypted all the way to the cloud. The gateway forwards traffic, but it cannot inspect payloads or own the security domain, which is a strong fit for industrial sensing, remote peripherals, and indirectly connected devices that still need fleet management, telemetry, and OTA workflows. The broader platform positions this around one control plane for connectivity, data routing, settings, and device lifecycle management. https://golioth.io/

What stands out in the interview is the combination of certificate-based onboarding, cloud-managed settings, streamed sensor data, and firmware rollout to Bluetooth devices that may roam across multiple gateways. In the demo, an accelerometer event is sent upstream, settings are pulled back down from the cloud, and the same path can be used for over-the-air updates. That maps well to real deployments where the endpoint is resource-constrained, intermittently connected, or dependent on another node for backhaul.

The Canonical angle makes the story more important than a single booth demo. Golioth announced on March 3, 2026 that it is now part of Canonical, which helps explain the focus on secure infrastructure, developer tooling, on-prem deployments, and data-sovereignty requirements alongside the managed cloud path. Filmed at Embedded World 2026 in Nuremberg, the discussion gives a practical look at how this stack could sit beside Ubuntu, open-source edge software, and enterprise IoT operations rather than acting as a narrow point product.

There is also a useful architectural point here: Golioth is not limited to Bluetooth. The interview frames Bluetooth as the first implementation of an indirectly connected device model, with the same management pattern extending to CAN, serial, Linux-class hardware, MCU targets, and potentially mesh-capable transports such as OpenThread. That makes the value less about a single radio and more about abstracting the transport layer while keeping a consistent API surface for updates, settings, observability, and device orchestration.

For teams building connected products, this is really a video about secure fleet operations at scale: using CI/CD to publish firmware, targeting subsets of deployed devices through management APIs, validating rollout status, and relying on mechanisms such as MCUboot for image integrity and rollback safety. The result is a clearer picture of how Bluetooth and other non-IP devices can be brought into a modern cloud workflow without giving up security boundaries or developer ergonomics.

source https://www.youtube.com/watch?v=JNguONmVpco

Mobilint ARIES and REGULUS edge AI, MLA400 LLM inference and multi-camera vision

Posted by – March 15, 2026
Category: Exclusive videos

Mobilint frames its edge AI story around efficiency rather than headline TOPS alone. In this booth conversation, the focus is on local inference, cost per watt, and practical deployment formats: USB devices, standalone edge boxes, low-profile PCIe cards, MXM modules, and SoC-class hardware for embedded designs. That fits Mobilint’s broader product stack around the ARIES NPU family, the REGULUS low-power SoC line, and the SDK qb software flow for model conversion and deployment. https://www.mobilint.com/

The demo is really about what edge AI looks like when it is treated as an appliance instead of a cloud extension. Mobilint shows multi-stream computer vision running fully offline, with real-time inference on several video feeds and no dependency on a datacenter link. That makes the pitch relevant for AI security, industrial monitoring, smart city analytics, and other latency-sensitive workloads where privacy, bandwidth, and predictable operating cost matter at the edge.

A big part of the discussion is about scaling from vision to LLM workloads. The speaker describes an MLA400-class configuration built from four accelerators, aimed at running multiple small language models concurrently and pushing into the roughly 35 to 36 billion parameter range with quantization. That lines up with Mobilint’s current direction: the MLA100 card is positioned around 80 TOPS with 16 GB LPDDR4X and 25 W TDP, while the upcoming MLA400 is presented as a quad-ARIES architecture for higher-throughput workstation and on-prem inference. In that context, the video is less about raw benchmark theater and more about usable local AI for mixed vision and language workloads.
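The parameter figure is easy to sanity-check with a back-of-envelope memory model. Assuming roughly one byte per parameter at int8 and 16 GB per accelerator across four cards, and reserving a fraction of memory for activations and runtime buffers (the overhead value below is an assumption for illustration, not a Mobilint figure):

```python
def fits_in_memory(params_billions, bytes_per_param, mem_gb_per_card, cards,
                   overhead=0.2):
    """Back-of-envelope check: do quantized model weights fit across the
    pooled memory of several accelerator cards? `overhead` reserves a
    fraction of memory for activations, KV cache and runtime buffers."""
    weights_gb = params_billions * bytes_per_param   # 1e9 params ~ 1 GB at 1 B/param
    usable_gb = mem_gb_per_card * cards * (1.0 - overhead)
    return weights_gb <= usable_gb

# Four 16 GB cards at int8: a ~36B model fits in the ~51 GB left after
# overhead, which is consistent with the quoted 35-36B range, while a
# 60B model would not.
```

The same arithmetic explains why quantization is the enabling step: at fp16 (two bytes per parameter) the same model would need roughly double the memory and no longer fit.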

What makes the booth interesting is the software angle behind the hardware. Mobilint keeps coming back to quantization, compiler tooling, runtime integration, and model adaptation, because edge NPUs live or die by how well they map real models rather than synthetic demos. Its SDK qb is built around framework support for PyTorch, TensorFlow, TFLite and ONNX, with optimization and Int8-oriented deployment aimed at preserving model accuracy while fitting tighter memory and power budgets. That is the practical layer that turns AI silicon into deployable embedded compute.

There is also a broader roadmap underneath the interview. Mobilint has recently been talking about both the ARIES and REGULUS NPU families, with REGULUS targeting compact on-device AI at about 10 TOPS under 3 W and support for 4K video pipelines, while products such as MLX-A1 package the accelerator into a more complete edge box. Seen from Embedded World 2026 in Nuremberg, the message is clear: Mobilint wants to compete where offline inference, multi-camera analytics, quantized LLMs, and power-aware embedded deployment matter more than a brute-force datacenter silicon roadmap.

source https://www.youtube.com/watch?v=ylvPT1Mlv_g

Toradex Leno, OSM, Verdin i.MX95 and Aquila AM69 edge AI modules

Posted by – March 15, 2026
Category: Exclusive videos

Toradex is positioning its 2026 lineup around a wider spread of system-on-modules, from very small Leno and OSM designs up to higher-performance Aquila and Verdin families. The key message here is scalability: compact modules for cost-sensitive, high-volume products, and larger pin-compatible platforms for projects that need more I/O, compute, graphics, networking or edge AI. That makes the portfolio relevant for gateways, HMIs, robotics and machine-vision devices, while Toradex keeps leaning on software, documentation and long product life as part of the pitch. https://www.toradex.com/

A big part of the story is the move toward smaller solderable form factors. The 30×30 mm Leno and OSM modules shown here are aimed at designs where pick-and-place assembly, vibration resistance and BOM control matter as much as raw performance. In practice, that means customers can start with a compact module for volume production, while still staying close to the Toradex ecosystem instead of rebuilding everything around a custom board too early.

Further up the stack, Toradex is expanding around NXP’s i.MX 95 and TI’s AM69/TDA4 class of processors. That opens the door to more demanding embedded Linux workloads such as multi-camera vision, industrial control, visual inspection, people counting, robotics and autonomous mobile platforms. In that part of the range, the attraction is not just CPU performance but also integrated NPU, ISP, TSN-capable Ethernet, CAN FD, display pipelines and the kind of mixed real-time plus application processing that industrial OEMs increasingly want at the edge.

The demo also points to how Toradex wants customers to move from module to full platform. Carrier boards such as Clover for Aquila target dense vision and robotics use cases, while industrial gateway products extend the company further into ready-to-deploy edge infrastructure rather than only selling compute modules. That is where the value proposition becomes more complete: SOM, carrier board, BSP, Linux distribution, OTA updates, container workflow and cloud fleet management all tied together in one development path.

What makes the pitch credible is that it is less about a single chip and more about a migration strategy across form factors and price points. The video was filmed at Embedded World 2026 in Nuremberg, and the theme throughout is clear: tiny modules that can still expose Ethernet, display and CAN, midrange platforms built around i.MX 95, and higher-end edge AI with Aquila AM69, all anchored by Torizon OS and Toradex support. The result is a portfolio aimed at companies that need to prototype quickly, then scale without having to rework their software foundations.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=gvqJLv8yPLM

Blumind AMPL Analog AI at 60 Microwatts for Always-On Audio, Edge Wearables and Vision

Posted by – March 15, 2026
Category: Exclusive videos

Blumind is positioning analog AI as a far-edge compute architecture rather than another digital accelerator story. In this interview, the company outlines how its AMPL platform and BM110 direction target always-on audio inference with extremely low system power, low latency and a direct analog signal path that avoids the usual ADC, DAC and high-speed clock overhead of conventional embedded AI. That makes the pitch especially relevant for wearables, smart glasses, earbuds, remotes and other battery-limited devices where keyword spotting has to stay active all day without burning through the cell. https://blumind.ai/

The key technical claim here is not raw TOPS but energy per inference. Blumind describes a total always-on audio solution around 50 to 60 microwatts, with the chip itself at roughly 20 microamps at 1.8 volts and an analog microphone adding about 20 microamps at 1 volt. In practical terms, that shifts edge AI from “can it run” to “can it remain on continuously” for wake-word detection and other audio-triggered interfaces, which is where always-listening products live or die.
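Those quoted figures are internally consistent, since power is simply voltage times current:

```python
# Sanity-check the always-on power budget from the quoted figures.
chip_uw = 20e-6 * 1.8 * 1e6   # chip: 20 uA at 1.8 V -> 36 microwatts
mic_uw = 20e-6 * 1.0 * 1e6    # analog mic: 20 uA at 1.0 V -> 20 microwatts
total_uw = chip_uw + mic_uw   # ~56 uW, inside the quoted 50-60 uW band
```

For scale, a 100 mAh earbud cell at 3.7 V holds roughly 0.37 Wh, so a ~56 µW always-on path is a small fraction of the daily budget rather than its dominant consumer.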

What makes the approach interesting is that the neural network is implemented as dedicated analog hardware rather than as software running on an MCU, CPU, RISC-V or Arm core. The company frames this as a fall-through analog compute network optimized for robustness across process, voltage and temperature variation, while keeping latency low and silicon efficiency high. For embedded engineers, that means a very different design trade-off from standard DSP-plus-microcontroller voice pipelines, especially when standby budget is more important than programmability.

The roadmap goes beyond keyword spotting. Blumind says the same analog architecture can scale from RNN-style audio and time-series workloads toward CNN-based vision tasks and eventually smaller attention or transformer-class models running locally on edge devices. That lines up with the company’s broader messaging around all-analog neural processing in standard CMOS and its push to make the technology available not only as its own ASSP silicon but also as licensable IP for future SoCs and microcontrollers. Filmed at Embedded World 2026 in Nuremberg, this is really a look at how analog inference could carve out a specific role inside next-generation edge AI stacks.

source https://www.youtube.com/watch?v=JWvze2MhVsc

Edge AI Foundation Global Edge AI Community, San Diego 2026, 60+ Partners

Posted by – March 15, 2026
Category: Exclusive videos

Edge AI Foundation is presented here less as a single company than as a coordination layer for the wider edge AI ecosystem: silicon vendors, module makers, toolchains, embedded OEMs, startups, researchers, and system builders working around on-device inference, AIoT, computer vision, sensor fusion, and low-latency AI deployment. The interview frames the foundation as a place where competitors still collaborate, which is a useful way to understand today’s market: edge AI is moving too fast for isolated roadmaps, so shared events, workshops, and cross-vendor discussion have become part of the engineering stack. https://www.edgeaifoundation.org/

What stands out is the mix of audiences and technologies. This is not only for executives or keynote speakers, but also for engineers, program managers, researchers, and developers dealing with real deployment issues such as model optimization, embedded Linux, MCU and MPU design choices, heterogeneous compute, NPU roadmaps, power efficiency, industrial vision, and the tradeoff between cloud AI and local inference. The point is not just to talk about AI in general, but to connect practical embedded workflows with current edge AI architectures.

The discussion also highlights how the foundation’s calendar reflects the speed of the sector. The upcoming San Diego event is described as a three-day meeting point with partner exhibition tables, workshops, keynote sessions, and a research track, which fits the broader shift toward tighter interaction between commercial edge AI platforms and academia. That matters because edge AI is now shaped as much by deployment constraints like thermals, bandwidth, privacy, deterministic response, and cost per watt as by raw model capability.

Another useful detail is the partner network itself. The transcript references a community spanning large established players and newer entrants, and that is increasingly where edge AI momentum is coming from: partnerships between silicon companies, board vendors, software ecosystems, and vertical solution providers. Filmed at Embedded World 2026 in Nuremberg, the interview captures that industry mood well, with the foundation positioning itself as a neutral meeting ground for the people building the next generation of embedded AI systems.

source https://www.youtube.com/watch?v=R7_x6TAypg0

Geniatech Edge AI and ePaper at Embedded World 2026: i.MX95, RK3588, Kinara, Hailo

Posted by – March 15, 2026
Category: Exclusive videos

Geniatech presents a broad ARM-based embedded portfolio built around edge AI hardware, BSP-level software work, and customization services rather than a single demo board. The video focuses on how the company combines SoMs, SBCs, gateways, AI boxes and ePaper platforms with kernel, SDK and API support, so customers can move from evaluation to deployment without rebuilding the whole stack. The central theme is local inference on compact ARM systems, where Geniatech positions quantized and compressed LLMs and VLMs as practical on-device workloads instead of cloud-only tasks. https://www.geniatech.com/

A key part of that story is heterogeneous edge AI acceleration. In the booth tour, Geniatech shows NXP- and Rockchip-based platforms paired with M.2 AI modules and explains the split between computer-vision accelerators and LLM-oriented parts. That maps well to the company’s current platform direction: i.MX95 systems with optional M.2 expansion, RK3588 designs, and accelerator options such as Kinara for transformer-style workloads or Hailo for CNN-heavy vision pipelines. The interesting angle here is not just raw TOPS, but memory footprint, quantization, driver porting, and how much of the model can realistically stay on the device.
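The memory-footprint point is easy to make concrete. A minimal sketch of weight memory versus quantization level — the parameter counts and bit widths are illustrative assumptions, not Geniatech figures, and the estimate covers weights only:

```python
# Rough weight-memory estimate for quantized models on an edge box.
# Parameter counts and bit widths are illustrative assumptions, not
# figures from Geniatech.

def weight_mem_gib(params_billion: float, bits_per_weight: int) -> float:
    """Weights only; excludes KV cache, activations and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for params in (1.0, 3.0, 7.0):
    for bits in (16, 8, 4):
        print(f"{params:.0f}B @ {bits}-bit: {weight_mem_gib(params, bits):.2f} GiB")
```

A 7B model drops from roughly 13 GiB at 16-bit to about 3.3 GiB at 4-bit, which is the difference between impossible and plausible on a compact ARM system with a few gigabytes of RAM.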

The demo of a local multimodal assistant makes that concrete. A camera-equipped edge box identifies who is in front of it, feeds selected prompts into a locally deployed model, and returns results every few seconds without a cloud round trip. That matters for privacy, latency, and deterministic deployment in retail, kiosks, transport, and industrial settings. Geniatech’s role in this stack is mostly the infrastructure layer: stable ARM hardware, Linux BSP work, accelerator integration, conversion toolchains, NPU APIs, and support for customers training or adapting their own models.

The second half of the video shifts to ePaper, and this is where Geniatech looks unusually vertically integrated. Instead of treating ePaper as just a panel sourcing business, the company talks about its own TCON and software optimization, faster refresh behavior, and end-to-end system design for signage. The bus-stop example, multi-panel drive capability, indoor-light energy harvesting concepts, and wide-temperature operation point to transport and outdoor display use cases where low power draw matters as much as color or refresh performance.

Filmed at Embedded World 2026 in Nuremberg, the booth tour shows Geniatech as a company trying to connect two markets that are starting to overlap: edge AI compute and ultra-low-power visual interfaces. On one side, there is ARM edge hardware with i.MX95, RK3588, AI modules, local LLM support and carrier-board customization. On the other, there are Spectra 6 style color ePaper and alternative reflective display approaches for signage, pricing, and information systems. Put together, it is a practical embedded roadmap for devices that need local intelligence, low power, industrial design flexibility, and long lifecycle support.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=xwuZf8M2k_E

Microchip Booth Tour at Embedded World 2026: Edge AI, 10BASE-T1S, RISC-V, ADAS, Security

Posted by – March 14, 2026
Category: Exclusive videos

Microchip’s booth tour is less about a single flagship chip and more about how the company is stitching together the embedded stack: edge AI, industrial networking, automotive camera links, HMI, security and power electronics. The demos show Microchip positioning itself as a broad platform vendor, not just a microcontroller supplier, with current emphasis on AIoT, 10BASE-T1S, TSN, Zephyr, Linux, secure MCUs and MPUs, and reference designs that shorten evaluation cycles for OEMs. https://www.microchip.com/en-us/about/events-info/embedded-world

The access-control and cockpit demos reflect two themes that now run through a lot of embedded design: local inference and human-machine interaction. Facial recognition with liveness detection, round-display touch interfaces, and color-sorting machine vision are shown here not as isolated gimmicks but as edge workloads that need low latency, deterministic control and a practical HMI layer. That also fits with Microchip’s current demo lineup around graphics, touch, camera systems and AI at the edge.

A stronger technical thread in the video is networking. The shop-floor setup points to Single Pair Ethernet, especially 10BASE-T1S, as a path away from older fieldbus designs toward IP-based industrial systems with simpler wiring, real-time behavior and easier IT/OT integration. Microchip is explicitly framing this around industrial Ethernet migration, TSN-capable architectures, open-source software stacks and modular evaluation hardware built around boards that can be quickly reconfigured for demos or first customer trials.

Security is treated here as infrastructure rather than a feature checkbox. The tour touches secure boot, secure firmware update, key provisioning, post-quantum cryptography and Cyber Resilience Act readiness, including Microchip’s security portfolio and its work with Kudelski IoT keySTREAM for device provisioning and update workflows. In practice, that makes the video relevant to anyone designing industrial or edge products that now need lifecycle security, not just network connectivity and compute.

The automotive and high-performance pieces round out the picture: ASA-ML serializer/deserializer links for ADAS camera paths into Qualcomm Ride platforms, FPGA-based sensor fusion around AI accelerators, MICROSAR IO with Vector for compact ECUs, and a RISC-V story spanning PolarFire SoC FPGA and the newer PIC64 family. Taken together, the booth shows Microchip pushing toward distributed intelligence where control, networking, security and inference sit closer to the machine, a message delivered from the company’s stand at Embedded World 2026 in Nuremberg.

source https://www.youtube.com/watch?v=2bXmkl934mI

JetBrains Embedded Development with CLion, AI Agents, ESP32, ST, Zephyr, Local AI

Posted by – March 14, 2026
Category: Exclusive videos

JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea is a consistent developer experience across targets such as Espressif and STMicroelectronics, with support for frameworks like Zephyr and modern build flows around CMake, so firmware work can happen inside one environment instead of being spread across multiple disconnected tools. https://www.jetbrains.com/clion/embedded/

A big part of that story is AI, but in a practical embedded context rather than as a generic chatbot layer. JetBrains shows agent support directly inside the IDE, including Junie, external agents, MCP connectivity and bring-your-own-key workflows, with the emphasis on tool grounding and agent orchestration rather than just the raw model. That matters for firmware teams because the useful part is not only code generation, but being able to trigger project-aware actions such as rebuilds, refreshes, navigation and other IDE-native operations in a controlled way.

The interview also points to a broader shift in embedded engineering: local and on-premises AI is becoming relevant for teams that cannot send code or design data to public cloud services. JetBrains is clearly leaning into that requirement, showing local AI running on NVIDIA hardware and discussing private deployment models for LLM-backed development. For regulated sectors and larger product teams, that makes the IDE part of a secure internal toolchain rather than a thin client to an external service.

What makes the booth discussion interesting is that it connects classic embedded pain points with current software trends. CLion is presented as a bridge between microcontroller and SoC projects, vendor ecosystems, RTOS-oriented work and newer AI-assisted flows, while keeping the core promise around productivity, code intelligence and debugging. Filmed at Embedded World 2026 in Nuremberg, the video captures how JetBrains is positioning embedded work alongside mainstream software development instead of treating it as a separate niche.

The result is a view of embedded development where the IDE becomes the integration layer for toolchains, frameworks, AI agents and secure deployment options. Rather than chasing a single board demo, JetBrains is making the case that teams at automotive and industrial OEMs need a stable, extensible workspace that can handle Zephyr, ESP-IDF, STM32-class projects, CMake-based builds, Rust support and agentic coding in the same place. That makes this less about one feature and more about how firmware teams may want to structure their workflow over the next few years.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=cYpC1drBfqg

TeleCANesis at Embedded World 2026: Hub for CAN, Modbus, I2C, Cloud, HMI and AI Data Routing

Posted by – March 14, 2026
Category: Exclusive videos

TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a signal layout changes. In practice, the value is less about “moving data” in the abstract and more about preserving engineering time for product logic, analytics and HMI work. https://telecanesis.com/

What stands out in this demo is the workflow refinement inside the web-based Hub. Codecs are becoming system-wide rather than tied to a single capsule, which makes reuse much cleaner across a blueprint. The new imports flow also looks more practical for DBC-driven design: engineers can ingest a file once, label it, selectively pull only the required messages into each capsule, and later re-import changed definitions instead of rebuilding the whole route map. That is a meaningful shift for teams dealing with evolving vehicle, battery or industrial bus definitions over time.
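What a DBC import ultimately gives a capsule is a raw-to-physical mapping per signal: a start bit, a length, and a factor/offset pair. A minimal sketch of that decode step — the signal layout below is a hypothetical example for illustration, not a TeleCANesis artifact:

```python
# Minimal sketch of the raw-to-physical mapping a DBC signal defines.
# The layout below (start bit, length, scale, offset) is a hypothetical
# example, not taken from TeleCANesis.

def decode_signal(payload: bytes, start_bit: int, length: int,
                  scale: float, offset: float) -> float:
    """Little-endian (Intel byte order) unsigned signal extraction."""
    raw = int.from_bytes(payload, "little")       # frame as one integer
    raw = (raw >> start_bit) & ((1 << length) - 1)
    return raw * scale + offset

# e.g. a 16-bit pack-voltage signal at start bit 8, 0.1 V per bit
frame = bytes([0x00, 0x10, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00])
print(decode_signal(frame, start_bit=8, length=16, scale=0.1, offset=0.0))
# 360.0 (volts)
```

Hand-maintaining dozens of these mappings across firmware, cloud and HMI code is exactly the glue work the Hub's re-importable, system-wide codecs are meant to eliminate.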

The use case described here is a good fit for battery systems, domain controllers and other heterogeneous embedded environments where one internal data model has to feed cloud services, databases, HMIs and mobile apps in different formats. Rather than expose every raw signal upstream, TeleCANesis lets developers normalize data internally and publish only the subset that matters to customers or backend services. Filmed at Embedded World 2026 in Nuremberg, the demo also hints at where the product is moving next, with broader plug-in support, updated ingestion in the coming 1.1 release, and recent additions such as CANopen and serial connector plug-ins.

There is also a practical deployment story behind it. The runtime is presented as largely platform-agnostic, with only a thin OS and compiler abstraction layer needing adaptation, which makes ports to new ARM or MCU targets much faster than a typical middleware stack. The company points to support around QNX, Raspberry Pi 4 and 5, Yocto Scarthgap, and integration paths toward HMI frameworks such as Qt, Slint, GL Studio and Unity. That combination makes the tool relevant not only for automotive-style gateways but also for industrial control, robotics and connected equipment.

The AI angle is still early, but the direction makes sense: use AI to inspect an existing project, identify protocols and messages, and pre-build the TeleCANesis blueprint so engineers start from a working draft instead of a blank canvas. For teams building software-defined machines, cloud-connected controllers or AI-assisted products, that could make TeleCANesis a useful bridge between fieldbus data, application logic and agent workflows. The core idea is straightforward: stop hand-coding translation layers every time the system grows, and treat connectivity as a configurable part of the architecture instead of a recurring rewrite.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=MvX0zdWJ0fY

Makat AI Electronics Procurement, BoM Analysis, Real-Time Pricing, Component Sourcing

Posted by – March 14, 2026
Category: Exclusive videos

Makat is pitching a more data-driven version of open-market component buying: instead of opaque broker calls and manual quote chasing, the platform is built around real-time pricing, availability checks, supplier scoring, and transaction workflows that let a buyer move from BoM analysis to PO placement inside one digital flow. The company frames this as AI-powered independent distribution for OEMs and CMs, with emphasis on shortage management, cost reduction, excess inventory handling, and transparent markup rather than black-box brokering. https://www.makat.ai/

What stands out in this interview is the attempt to turn tactical procurement into something more strategic. The demo revolves around board-level electronics sourcing, where Makat says it can highlight risk, identify alternate distributors, benchmark pricing across multiple supply channels, and show where a customer may be overpaying or exposed to supply disruption. That matters in electronics manufacturing, where line stoppages, allocation pressure, NCNR exposure, and fragmented broker networks still make spot buys expensive and slow to execute.

The AI angle here is not presented as a generic chatbot layer, but as a sourcing and procurement engine: benchmarking supplier quotes, ranking vendors, analyzing stock positions, and automating parts of supplier communication and decision support. In practice, that places the platform somewhere between electronics distribution, supply-chain intelligence, and procurement workflow automation. The interesting claim is not only visibility, but transactability: Makat says it acts as vendor of record, taking ownership of sourcing, logistics, and delivery rather than only recommending where to buy.

Filmed at Embedded World 2026 in Nuremberg, the conversation shows how much the electronics supply chain is shifting toward digital procurement infrastructure. Makat’s message is that the future of component sourcing is less about informal broker relationships and more about comparison analytics, supplier data, workflow automation, and accountable execution. For manufacturers dealing with shortages, alternates, price volatility, and multi-distributor sourcing, that is a relevant change in how component purchasing gets done today.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=SncbMKIVCtA

Edge Impulse Intelligent Factory at Embedded World 2026: Edge AI, YOLO-Pro, Digital Twin, Local LLM

Posted by – March 14, 2026
Category: Exclusive videos

Edge Impulse frames this demo around a practical factory problem: too many data streams, too little time to turn them into action. The setup combines multi-line visual inspection, model inference, and operator-facing summaries into one edge pipeline, with object detection separating good parts from faulty ones and feeding decisions such as rework, scrap, or continued flow. The point is not AI as a cloud dashboard, but AI as a control layer sitting close to the machine. https://www.edgeimpulse.com/

What stands out is the way several workloads run side by side: four simulated production lines, defect detection, a digital-twin view of the floor, and a local language model interface for querying what is happening in real time. That makes the demo less about a single neural network and more about orchestration across computer vision, telemetry, and human-machine interaction, where latency and determinism matter more than headline model size.

The industrial case is clear. In manufacturing, stoppages are expensive, and even a small delay in inspection or triage can ripple through yield, throughput, and maintenance planning. Running inference on the edge helps keep response times predictable, keeps proprietary production data on premises, and avoids depending on a round trip to the cloud for every decision. That is especially relevant for defect detection, anomaly screening, and line monitoring where reliability has to be built into the stack.

Filmed at Embedded World 2026 in Nuremberg, the demo also shows how edge AI is moving beyond isolated vision nodes toward richer factory software. Edge Impulse positions its YOLO-Pro workflow around embedded industrial vision, while the local LLM layer points to a new operator model where staff can query live plant data in plain language instead of navigating separate dashboards. The result is a compact view of where industrial edge systems are headed: vision, digital twin, and natural-language analytics running together on site.

source https://www.youtube.com/watch?v=Aun0kQt-hH8

Grinn Edge AI SOMs with GenioSOM-360, AstraSOM-261x and ReneSOM-V2H at Embedded World

Posted by – March 14, 2026
Category: Exclusive videos

Grinn presents itself here less as a single-board vendor and more as a rapid productization partner for embedded AI. The core idea is consistent across the booth: take a complex SoC, turn it into a compact system-on-module, add the carrier design and software stack around it, and let customers focus on the actual device instead of rebuilding the low-level platform from zero. That comes through in the PCB inspection robot, the camera modules, and the industrial carrier boards shown in the demo. https://grinn-global.com/

The strongest thread in the video is practical edge vision. One demo uses robot vision and onboard AI to monitor PCB production, while another shows real-time hand-gesture tracking aimed at robotics and human-machine interaction. Rather than presenting AI as a cloud service, Grinn is framing it as local inference on embedded Linux hardware, where latency, power budget, camera input, and I/O integration matter as much as raw TOPS.

The hardware story is also broader than one chipset family. The booth includes a MediaTek-based GenioSOM platform, a Synaptics SL2610-based module shown in camera and industrial formats, and a newly announced GenioSOM-360 positioned as an extremely small module for edge AI designs. That makes the video relevant for developers looking at SOM-based designs for industrial vision, smart cameras, robotics, compact HMI devices, and other products where Ethernet, HDMI, MIPI camera interfaces, and software portability all have to come together on a tight schedule.

Another useful angle is how Grinn uses partner booths to validate its role in the ecosystem. The company’s modules and demos are spread across Synaptics, MediaTek, Würth Elektronik, RS and other stands, which says something important: Grinn is not only shipping modules, but also helping silicon vendors and distributors show real deployable use cases. Filmed at Embedded World 2026 in Nuremberg, the interview captures that middle layer of the embedded market where reference design, carrier integration, BSP work, and fast customization often decide whether an AI concept becomes a shipping product.

Overall, this is a good snapshot of where embedded AI is heading in 2026: smaller SOMs, stronger local vision processing, faster path from evaluation kit to product, and more emphasis on software support alongside hardware. The interesting part is not just the silicon names, but the integration model behind them. Grinn is showing how MediaTek-, Synaptics- and Renesas-class processors can be turned into compact, application-ready platforms for machine vision, gesture recognition, industrial inspection and robotics at the edge today.

source https://www.youtube.com/watch?v=SRkLbeRIfzo

RECOM Low-Voltage High-Current Power Modules from 25A for AI, FPGA, DDR to 150A Multiphase Rails

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V and 0.5V depending on the part. That fills a gap between intermediate bus conversion and the final high-current core rail, where size, efficiency and layout matter most. https://recom-power.com/

The key theme here is what happens when SoCs, FPGAs and AI accelerators keep adding compute density while core voltages keep dropping. Lower core voltage keeps switching losses and heat density manageable, but at the same power level it pushes current sharply upward, so the power stage has to deliver tens or even hundreds of amps in a very small footprint. RECOM positions these modules as scalable building blocks: 25A per unit, 50A with two devices, and up to 150A through multiphase paralleling, aimed at robotics, machine vision, automotive compute and other embedded platforms with fast load steps.

A major technical point in the interview is transient response. Modern processors can jump from sleep to full activity extremely fast, so the regulator has to react before the rail drifts out of tolerance. RECOM’s adaptive constant-on-time control is presented as a way to respond faster than a conventional clock-cycle-limited loop, while also allowing lower output capacitance. That matters because less capacitance can reduce board area, BOM cost and stored energy on the rail, all while keeping the supply stable during aggressive current swings.
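The link between loop speed and output capacitance follows from simple charge balance: until the regulator reacts, the load step is supplied entirely from the output capacitors, so the minimum capacitance scales with reaction time. A sketch with illustrative numbers (not RECOM specifications):

```python
# During the regulator's reaction time a load step is supplied from the
# output capacitors alone, so C >= I_step * t_response / dV_allowed.
# The step size, droop budget and loop times below are illustrative,
# not RECOM specifications.

def min_cap_uF(i_step_A: float, t_response_s: float, dv_allowed_V: float) -> float:
    return i_step_A * t_response_s / dv_allowed_V * 1e6

# 25 A load step, 25 mV allowed droop on a low-voltage core rail
for t_ns in (1000, 200):          # slower vs faster loop reaction
    print(f"t = {t_ns} ns -> {min_cap_uF(25, t_ns * 1e-9, 0.025):.0f} uF minimum")
```

Cutting the reaction time from 1 µs to 200 ns cuts the required capacitance five-fold in this model, which is where the claimed savings in board area and BOM cost come from.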

Another important layer is programmability. With PMBus telemetry and control, the module is not just a fixed converter but part of the system architecture. Output voltage can be trimmed very accurately, operating behavior can be tuned for different modes, and voltage margining can match the needs of individual processors characterized at the factory. In practice, that means the rail can be optimized for performance, efficiency and reliability instead of treating power as a static afterthought. The video was filmed at Embedded World 2026 in Nuremberg, where this kind of low-voltage, high-current power delivery is becoming central to embedded AI and high-density compute.

The broader context also matters. RECOM highlights a portfolio that runs from tiny isolated converters to high-power systems, and its latest public messaging around Embedded World 2026 also points to discrete power IC and transformer options alongside PoL modules. That makes this launch interesting not just as one new regulator, but as part of a wider push toward configurable, modular power design. For engineers working on next-generation FPGA, SoC and edge AI hardware, the real takeaway is simple: power delivery is now an active design domain, with telemetry, programmability, interleaving, EMI behavior and transient control all shaping what the processor can actually do.

source https://www.youtube.com/watch?v=L91dBTq3rK8