Golioth Acquired by Canonical: Secure Bluetooth OTA, LakeDB and Indirect IoT Device Management

Posted by – March 15, 2026
Category: Exclusive videos

Golioth’s latest demo shows how a non-IP Bluetooth endpoint can be managed through a Bluetooth-to-cellular gateway while staying end-to-end encrypted all the way to the cloud. The gateway forwards traffic, but it cannot inspect payloads or own the security domain, which is a strong fit for industrial sensing, remote peripherals, and indirectly connected devices that still need fleet management, telemetry, and OTA workflows. More broadly, the platform frames this as one control plane for connectivity, data routing, settings, and device lifecycle management. https://golioth.io/

What stands out in the interview is the combination of certificate-based onboarding, cloud-managed settings, streamed sensor data, and firmware rollout to Bluetooth devices that may roam across multiple gateways. In the demo, an accelerometer event is sent upstream, settings are pulled back down from the cloud, and the same path can be used for over-the-air updates. That maps well to real deployments where the endpoint is resource-constrained, intermittently connected, or dependent on another node for backhaul.

The Canonical angle makes the story more important than a single booth demo. Golioth announced on March 3, 2026 that it is now part of Canonical, which helps explain the focus on secure infrastructure, developer tooling, on-prem deployments, and data-sovereignty requirements alongside the managed cloud path. Filmed at Embedded World 2026 in Nuremberg, the discussion gives a practical look at how this stack could sit beside Ubuntu, open-source edge software, and enterprise IoT operations rather than acting as a narrow point product.

There is also a useful architectural point here: Golioth is not limited to Bluetooth. The interview frames Bluetooth as the first implementation of an indirectly connected device model, with the same management pattern extending to CAN, serial, Linux-class hardware, MCU targets, and potentially mesh-capable transports such as OpenThread. That makes the value less about a single radio and more about abstracting the transport layer while keeping a consistent API surface for updates, settings, observability, and device orchestration.

For teams building connected products, this is really a video about secure fleet operations at scale: using CI/CD to publish firmware, targeting subsets of deployed devices through management APIs, validating rollout status, and relying on mechanisms such as MCUboot for image integrity and rollback safety. The result is a clearer picture of how Bluetooth and other non-IP devices can be brought into a modern cloud workflow without giving up security boundaries or developer ergonomics.
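The fleet-targeting mechanics described above can be sketched in a few lines. This is a hypothetical illustration of staged rollout planning, not Golioth's actual management API: the device records, tags, and cohort logic are all invented for the example.

```python
# Hypothetical sketch of staged firmware rollout planning against a
# device fleet (illustrative only -- not Golioth's real API surface).

def plan_rollout(devices, target_tag, canary_fraction=0.1):
    """Select a tagged subset of the fleet and split it into a canary
    cohort and the remainder, mimicking a staged OTA release."""
    eligible = [d for d in devices if target_tag in d["tags"]]
    n_canary = max(1, int(len(eligible) * canary_fraction))
    return {"canary": eligible[:n_canary], "rest": eligible[n_canary:]}

fleet = [
    {"id": "dev-1", "tags": ["ble", "eu"]},
    {"id": "dev-2", "tags": ["ble", "us"]},
    {"id": "dev-3", "tags": ["cellular"]},
]

# Roll out to half of the Bluetooth devices first, then the rest once
# the canary cohort reports a healthy update status.
plan = plan_rollout(fleet, "ble", canary_fraction=0.5)
```

In a real deployment the "canary" and "rest" cohorts would map to release targets in the management API, with MCUboot on the endpoint handling image verification and rollback if the canary batch fails.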

source https://www.youtube.com/watch?v=JNguONmVpco

Mobilint ARIES and REGULUS edge AI, MLA400 LLM inference and multi-camera vision

Posted by – March 15, 2026
Category: Exclusive videos

Mobilint frames its edge AI story around efficiency rather than headline TOPS alone. In this booth conversation, the focus is on local inference, cost per watt, and practical deployment formats: USB devices, standalone edge boxes, low-profile PCIe cards, MXM modules, and SoC-class hardware for embedded designs. That fits Mobilint’s broader product stack around the ARIES NPU family, the REGULUS low-power SoC line, and the SDK qb software flow for model conversion and deployment. https://www.mobilint.com/

The demo is really about what edge AI looks like when it is treated as an appliance instead of a cloud extension. Mobilint shows multi-stream computer vision running fully offline, with real-time inference on several video feeds and no dependency on a datacenter link. That makes the pitch relevant for AI security, industrial monitoring, smart city analytics, and other latency-sensitive workloads where privacy, bandwidth, and predictable operating cost matter at the edge.

A big part of the discussion is about scaling from vision to LLM workloads. The speaker describes an MLA400-class configuration built from four accelerators, aimed at running multiple small language models concurrently and pushing into the roughly 35 to 36 billion parameter range with quantization. That lines up with Mobilint’s current direction: the MLA100 card is positioned around 80 TOPS with 16 GB LPDDR4X and 25 W TDP, while the upcoming MLA400 is presented as a quad-ARIES architecture for higher-throughput workstation and on-prem inference. In that context, the video is less about raw benchmark theater and more about usable local AI for mixed vision and language workloads.
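The parameter-count claim is easy to sanity-check with weight-storage arithmetic. Assuming a quad-accelerator box with 16 GB per card (an extrapolation from the MLA100 figure above, not a published MLA400 spec) and counting model weights only:

```python
# Rough memory budget for a quantized ~36B-parameter model on a
# hypothetical quad-accelerator box (weight storage only; KV cache
# and activations add real-world overhead on top of this).

def model_memory_gb(params_billion, bits_per_weight):
    """Approximate weight storage in GB for a dense model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

cards = 4
per_card_gb = 16                 # assumed: MLA100-class 16 GB LPDDR4X per card
total_gb = cards * per_card_gb   # 64 GB aggregate

fp16 = model_memory_gb(36, 16)   # 72 GB: does not fit unquantized
int8 = model_memory_gb(36, 8)    # 36 GB: fits with headroom
int4 = model_memory_gb(36, 4)    # 18 GB: room for several smaller models too
```

This is why the 35-36B range only becomes practical "with quantization": at FP16 the weights alone would exceed the assumed 64 GB, while Int8 or Int4 leaves capacity for runtime state or additional concurrent models.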

What makes the booth interesting is the software angle behind the hardware. Mobilint keeps coming back to quantization, compiler tooling, runtime integration, and model adaptation, because edge NPUs live or die by how well they map real models rather than synthetic demos. Its SDK qb is built around framework support for PyTorch, TensorFlow, TFLite and ONNX, with optimization and Int8-oriented deployment aimed at preserving model accuracy while fitting tighter memory and power budgets. That is the practical layer that turns AI silicon into deployable embedded compute.
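As a rough illustration of what Int8-oriented deployment involves, here is a minimal symmetric post-training quantization sketch. Real toolchains such as SDK qb use far more sophisticated per-channel, calibration-based schemes; this just shows the core float-to-int8 mapping and why accuracy preservation is the hard part.

```python
# Minimal symmetric int8 post-training quantization sketch
# (simplified per-tensor scheme, for illustration only).

def quantize_int8(weights):
    """Map float weights onto [-127, 127] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02, 1.27]
q, s = quantize_int8(w)          # scale = 1.27 / 127 = 0.01
restored = dequantize(q, s)      # close to the original weights
```

The quantization error is what the compiler and calibration tooling work to minimize, since every weight is forced onto a 255-step grid sized by the largest value in the tensor.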

There is also a broader roadmap underneath the interview. Mobilint has recently been talking about both the ARIES and REGULUS NPU families, with REGULUS targeting compact on-device AI at about 10 TOPS under 3 W and support for 4K video pipelines, while products such as MLX-A1 package the accelerator into a more complete edge box. Seen from Embedded World 2026 in Nuremberg, the message is clear: Mobilint wants to compete where offline inference, multi-camera analytics, quantized LLMs, and power-aware embedded deployment matter more than a brute-force datacenter silicon roadmap.

source https://www.youtube.com/watch?v=ylvPT1Mlv_g

Toradex Leno, OSM, Verdin i.MX95 and Aquila AM69 edge AI modules

Posted by – March 15, 2026
Category: Exclusive videos

Toradex is positioning its 2026 lineup around a wider spread of system-on-modules, from very small Leno and OSM designs up to higher-performance Aquila and Verdin families. The key message here is scalability: compact modules for cost-sensitive, high-volume products, and larger pin-compatible platforms for projects that need more I/O, compute, graphics, networking or edge AI. That makes the portfolio relevant for gateways, HMIs, robotics and machine-vision devices, while Toradex keeps leaning on software, documentation and long product life as part of the pitch. https://www.toradex.com/

A big part of the story is the move toward smaller solderable form factors. The 30×30 mm Leno and OSM modules shown here are aimed at designs where pick-and-place assembly, vibration resistance and BOM control matter as much as raw performance. In practice, that means customers can start with a compact module for volume production, while still staying close to the Toradex ecosystem instead of rebuilding everything around a custom board too early.

Further up the stack, Toradex is expanding around NXP’s i.MX 95 and TI’s AM69/TDA4 class of processors. That opens the door to more demanding embedded Linux workloads such as multi-camera vision, industrial control, visual inspection, people counting, robotics and autonomous mobile platforms. In that part of the range, the attraction is not just CPU performance but also integrated NPU, ISP, TSN-capable Ethernet, CAN FD, display pipelines and the kind of mixed real-time plus application processing that industrial OEMs increasingly want at the edge.

The demo also points to how Toradex wants customers to move from module to full platform. Carrier boards such as Clover for Aquila target dense vision and robotics use cases, while industrial gateway products extend the company further into ready-to-deploy edge infrastructure rather than only selling compute modules. That is where the value proposition becomes more complete: SOM, carrier board, BSP, Linux distribution, OTA updates, container workflow and cloud fleet management all tied together in one development path.

What makes the pitch credible is that it is less about a single chip and more about a migration strategy across form factors and price points. The video was filmed at Embedded World 2026 in Nuremberg, and the theme throughout is clear: tiny modules that can still expose Ethernet, display and CAN, midrange platforms built around i.MX 95, and higher-end edge AI with Aquila AM69, all anchored by Torizon OS and Toradex support. The result is a portfolio aimed at companies that need to prototype quickly, then scale without overhauling their software foundations.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=gvqJLv8yPLM

Blumind AMPL Analog AI at 60 Microwatts for Always-On Audio, Edge Wearables and Vision

Posted by – March 15, 2026
Category: Exclusive videos

Blumind is positioning analog AI as a far-edge compute architecture rather than another digital accelerator story. In this interview, the company outlines how its AMPL platform and BM110 direction target always-on audio inference with extremely low system power, low latency and a direct analog signal path that avoids the usual ADC, DAC and high-speed clock overhead of conventional embedded AI. That makes the pitch especially relevant for wearables, smart glasses, earbuds, remotes and other battery-limited devices where keyword spotting has to stay active all day without burning through the cell. https://blumind.ai/

The key technical claim here is not raw TOPS but energy per inference. Blumind describes a total always-on audio solution around 50 to 60 microwatts, with the chip itself at roughly 20 microamps at 1.8 volts and an analog microphone adding about 20 microamps at 1 volt. In practical terms, that shifts edge AI from “can it run” to “can it remain on continuously” for wake-word detection and other audio-triggered interfaces, which is where always-listening products live or die.
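The figures quoted above are straightforward to verify with P = I × V. The coin-cell runtime at the end is a hypothetical example to put the numbers in context, not a Blumind specification:

```python
# Back-of-envelope check of the always-on power figures quoted above.

def power_uw(current_ua, voltage_v):
    """P = I * V, with current in microamps, returned in microwatts."""
    return current_ua * voltage_v

chip_uw = power_uw(20, 1.8)    # analog AI chip: 36 uW
mic_uw = power_uw(20, 1.0)     # analog microphone: 20 uW
total_uw = chip_uw + mic_uw    # ~56 uW, inside the quoted 50-60 uW band

# Hypothetical runtime on a small 100 mAh / ~3 V coin cell,
# assuming ideal conversion (illustrative only):
battery_uwh = 100_000 * 3      # 300,000 uWh of stored energy
hours = battery_uwh / total_uw
days = hours / 24              # on the order of seven months
```

That months-of-runtime scale, rather than any single benchmark, is what separates "can it run" from "can it remain on continuously."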

What makes the approach interesting is that the neural network is implemented as dedicated analog hardware rather than as software running on an MCU, CPU, RISC-V or Arm core. The company frames this as a fall-through analog compute network optimized for robustness across process, voltage and temperature variation, while keeping latency low and silicon efficiency high. For embedded engineers, that means a very different design trade-off from standard DSP-plus-microcontroller voice pipelines, especially when standby budget is more important than programmability.

The roadmap goes beyond keyword spotting. Blumind says the same analog architecture can scale from RNN-style audio and time-series workloads toward CNN-based vision tasks and eventually smaller attention or transformer-class models running locally on edge devices. That lines up with the company’s broader messaging around all-analog neural processing in standard CMOS and its push to make the technology available not only as its own ASSP silicon but also as licensable IP for future SoCs and microcontrollers. Filmed at Embedded World 2026 in Nuremberg, this is really a look at how analog inference could carve out a specific role inside next-generation edge AI stacks.

source https://www.youtube.com/watch?v=JWvze2MhVsc

Edge AI Foundation Global Edge AI Community, San Diego 2026, 60+ Partners

Posted by – March 15, 2026
Category: Exclusive videos

Edge AI Foundation is presented here less as a single company than as a coordination layer for the wider edge AI ecosystem: silicon vendors, module makers, toolchains, embedded OEMs, startups, researchers, and system builders working around on-device inference, AIoT, computer vision, sensor fusion, and low-latency AI deployment. The interview frames the foundation as a place where competitors still collaborate, which is a useful way to understand today’s market: edge AI is moving too fast for isolated roadmaps, so shared events, workshops, and cross-vendor discussion have become part of the engineering stack. https://www.edgeaifoundation.org/

What stands out is the mix of audiences and technologies. This is not only for executives or keynote speakers, but also for engineers, program managers, researchers, and developers dealing with real deployment issues such as model optimization, embedded Linux, MCU and MPU design choices, heterogeneous compute, NPU roadmaps, power efficiency, industrial vision, and the tradeoff between cloud AI and local inference. The point is not just to talk about AI in general, but to connect practical embedded workflows with current edge AI architectures.

The discussion also highlights how the foundation’s calendar reflects the speed of the sector. The upcoming San Diego event is described as a three-day meeting point with partner exhibition tables, workshops, keynote sessions, and a research track, which fits the broader shift toward tighter interaction between commercial edge AI platforms and academia. That matters because edge AI is now shaped as much by deployment constraints like thermals, bandwidth, privacy, deterministic response, and cost per watt as by raw model capability.

Another useful detail is the partner network itself. The transcript references a community spanning large established players and newer entrants, and that is increasingly where edge AI momentum is coming from: partnerships between silicon companies, board vendors, software ecosystems, and vertical solution providers. Filmed at Embedded World 2026 in Nuremberg, the interview captures that industry mood well, with the foundation positioning itself as a neutral meeting ground for the people building the next generation of embedded AI systems.

source https://www.youtube.com/watch?v=R7_x6TAypg0

Geniatech Edge AI and ePaper at Embedded World 2026: i.MX95, RK3588, Kinara, Hailo

Posted by – March 15, 2026
Category: Exclusive videos

Geniatech presents a broad ARM-based embedded portfolio built around edge AI hardware, BSP-level software work, and customization services rather than a single demo board. The video focuses on how the company combines SoMs, SBCs, gateways, AI boxes and ePaper platforms with kernel, SDK and API support, so customers can move from evaluation to deployment without rebuilding the whole stack. The central theme is local inference on compact ARM systems, where Geniatech positions quantized and compressed LLM and VLM models as practical on-device workloads instead of cloud-only tasks. https://www.geniatech.com/

A key part of that story is heterogeneous edge AI acceleration. In the booth tour, Geniatech shows NXP and Rockchip based platforms paired with M.2 AI modules and explains the split between computer-vision accelerators and LLM-oriented parts. That maps well to the company’s current platform direction: i.MX95 systems with optional M.2 expansion, RK3588 designs, and accelerator options such as Kinara for transformer-style workloads or Hailo for CNN-heavy vision pipelines. The interesting angle here is not just raw TOPS, but memory footprint, quantization, driver porting, and how much of the model can realistically stay on the device.

The demo of a local multimodal assistant makes that concrete. A camera-equipped edge box estimates who is in front of it, feeds selected prompts into a locally deployed model, and returns results every few seconds without a cloud round trip. That matters for privacy, latency, and deterministic deployment in retail, kiosks, transport, and industrial settings. Geniatech’s role in this stack is mostly the infrastructure layer: stable ARM hardware, Linux BSP work, accelerator integration, conversion toolchains, NPU APIs, and support for customers training or adapting their own models.

The second half of the video shifts to ePaper, and this is where Geniatech looks unusually vertically integrated. Instead of treating ePaper as just a panel sourcing business, the company talks about its own TCON and software optimization, faster refresh behavior, and end-to-end system design for signage. The bus-stop example, multi-panel drive capability, indoor-light energy harvesting concepts, and wide-temperature operation point to transport and outdoor display use cases where low power draw matters as much as color or refresh performance.

Filmed at Embedded World 2026 in Nuremberg, the booth tour shows Geniatech as a company trying to connect two markets that are starting to overlap: edge AI compute and ultra-low-power visual interfaces. On one side, there is ARM edge hardware with i.MX95, RK3588, AI modules, local LLM support and carrier-board customization. On the other, there are Spectra 6 style color ePaper and alternative reflective display approaches for signage, pricing, and information systems. Put together, it is a practical embedded roadmap for devices that need local intelligence, low power, industrial design flexibility, and long lifecycle support.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=xwuZf8M2k_E

Microchip Booth Tour at Embedded World 2026: Edge AI, 10BASE-T1S, RISC-V, ADAS, Security

Posted by – March 14, 2026
Category: Exclusive videos

Microchip’s booth tour is less about a single flagship chip and more about how the company is stitching together the embedded stack: edge AI, industrial networking, automotive camera links, HMI, security and power electronics. The demos show Microchip positioning itself as a broad platform vendor, not just a microcontroller supplier, with current emphasis on AIoT, 10BASE-T1S, TSN, Zephyr, Linux, secure MCUs and MPUs, and reference designs that shorten evaluation cycles for OEMs. https://www.microchip.com/en-us/about/events-info/embedded-world

The access-control and cockpit demos reflect two themes that now run through a lot of embedded design: local inference and human-machine interaction. Facial recognition with liveness detection, round-display touch interfaces, and color-sorting machine vision are shown here not as isolated gimmicks but as edge workloads that need low latency, deterministic control and a practical HMI layer. That also fits with Microchip’s current demo lineup around graphics, touch, camera systems and AI at the edge.

A stronger technical thread in the video is networking. The shop-floor setup points to Single Pair Ethernet, especially 10BASE-T1S, as a path away from older fieldbus designs toward IP-based industrial systems with simpler wiring, real-time behavior and easier IT/OT integration. Microchip is explicitly framing this around industrial Ethernet migration, TSN-capable architectures, open-source software stacks and modular evaluation hardware built around boards that can be quickly reconfigured for demos or first customer trials.

Security is treated here as infrastructure rather than a feature checkbox. The tour touches on secure boot, secure firmware update, key provisioning, post-quantum cryptography and Cyber Resilience Act readiness, including Microchip’s security portfolio and its work with Kudelski IoT keySTREAM for device provisioning and update workflows. In practice, that makes the video relevant to anyone designing industrial or edge products that now need lifecycle security, not just network connectivity and compute.

The automotive and high-performance pieces round out the picture: ASA-ML serializer/deserializer links for ADAS camera paths into Qualcomm Ride platforms, FPGA-based sensor fusion around AI accelerators, MICROSAR IO with Vector for compact ECUs, and a RISC-V story spanning PolarFire SoC FPGA and the newer PIC64 family. Taken together, the booth shows Microchip pushing toward distributed intelligence where control, networking, security and inference sit closer to the machine, a message delivered from the company’s stand at Embedded World 2026 in Nuremberg.

source https://www.youtube.com/watch?v=2bXmkl934mI

JetBrains Embedded Development with CLion, AI Agents, ESP32, ST, Zephyr, Local AI

Posted by – March 14, 2026
Category: Exclusive videos

JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea is a consistent developer experience across targets such as Espressif and STMicroelectronics, with support for frameworks like Zephyr and modern build flows around CMake, so firmware work can happen inside one environment instead of being spread across multiple disconnected tools. https://www.jetbrains.com/clion/embedded/

A big part of that story is AI, but in a practical embedded context rather than as a generic chatbot layer. JetBrains shows agent support directly inside the IDE, including Junie, external agents, MCP connectivity and bring-your-own-key workflows, with the emphasis on tool grounding and agent orchestration rather than just the raw model. That matters for firmware teams because the useful part is not only code generation, but being able to trigger project-aware actions such as rebuilds, refreshes, navigation and other IDE-native operations in a controlled way.

The interview also points to a broader shift in embedded engineering: local and on-premises AI is becoming relevant for teams that cannot send code or design data to public cloud services. JetBrains is clearly leaning into that requirement, showing local AI running on NVIDIA hardware and discussing private deployment models for LLM-backed development. For regulated sectors and larger product teams, that makes the IDE part of a secure internal toolchain rather than a thin client to an external service.

What makes the booth discussion interesting is that it connects classic embedded pain points with current software trends. CLion is presented as a bridge between microcontroller and SoC projects, vendor ecosystems, RTOS-oriented work and newer AI-assisted flows, while keeping the core promise around productivity, code intelligence and debugging. Filmed at Embedded World 2026 in Nuremberg, the video captures how JetBrains is positioning embedded work alongside mainstream software development instead of treating it as a separate niche.

The result is a view of embedded development where the IDE becomes the integration layer for toolchains, frameworks, AI agents and secure deployment options. Rather than chasing a single board demo, JetBrains is making the case that teams at automotive and industrial OEMs need a stable, extensible workspace that can handle Zephyr, ESP-IDF, STM32-class projects, CMake-based builds, Rust support and agentic coding in the same place. That makes this less about one feature and more about how firmware teams may want to structure their workflow over the next few years.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=cYpC1drBfqg

TeleCANesis at Embedded World 2026: Hub for CAN, Modbus, I2C, Cloud, HMI and AI Data Routing

Posted by – March 14, 2026
Category: Exclusive videos

TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a signal layout changes. In practice, the value is less about “moving data” in the abstract and more about preserving engineering time for product logic, analytics and HMI work. https://telecanesis.com/

What stands out in this demo is the workflow refinement inside the web-based Hub. Codecs are becoming system-wide rather than tied to a single capsule, which makes reuse much cleaner across a blueprint. The new imports flow also looks more practical for DBC-driven design: engineers can ingest a file once, label it, selectively pull only the required messages into each capsule, and later re-import changed definitions instead of rebuilding the whole route map. That is a meaningful shift for teams dealing with evolving vehicle, battery or industrial bus definitions over time.
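To make the DBC-driven codec idea concrete, here is what decoding a single CAN signal looks like by hand. The frame layout and scaling are invented for illustration and are not TeleCANesis output; the point is that a DBC definition reduces every signal to a start bit, length, scale and offset, which is exactly the translation layer the Hub generates instead of engineers hand-coding it.

```python
# Hand-decoding one CAN signal the way a DBC-driven codec would.
# The frame layout here is invented for illustration.

def decode_signal(data, start_bit, length, scale, offset, little_endian=True):
    """Extract a raw bitfield from an 8-byte CAN payload and apply the
    DBC-style linear transform: physical = raw * scale + offset."""
    raw_int = int.from_bytes(data, "little" if little_endian else "big")
    raw = (raw_int >> start_bit) & ((1 << length) - 1)
    return raw * scale + offset

# Hypothetical battery-pack frame: 16-bit pack voltage at bit 0,
# 0.01 V per bit, no offset.
frame = bytes([0x10, 0x27, 0, 0, 0, 0, 0, 0])  # raw value 0x2710 = 10000
volts = decode_signal(frame, start_bit=0, length=16, scale=0.01, offset=0.0)
# volts ≈ 100.0
```

When the DBC file changes, every such hand-written decoder has to be revisited, which is the maintenance burden the re-import workflow described above is designed to remove.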

The use case described here is a good fit for battery systems, domain controllers and other heterogeneous embedded environments where one internal data model has to feed cloud services, databases, HMIs and mobile apps in different formats. Rather than expose every raw signal upstream, TeleCANesis lets developers normalize data internally and publish only the subset that matters to customers or backend services. Filmed at Embedded World 2026 in Nuremberg, the demo also hints at where the product is moving next, with broader plug-in support, updated ingestion in the coming 1.1 release, and recent additions such as CANopen and serial connector plug-ins.

There is also a practical deployment story behind it. The runtime is presented as largely platform-agnostic, with only a thin OS and compiler abstraction layer needing adaptation, which makes ports to new ARM or MCU targets much faster than a typical middleware stack. The company points to support around QNX, Raspberry Pi 4 and 5, Yocto Scarthgap, and integration paths toward HMI frameworks such as Qt, Slint, GL Studio and Unity. That combination makes the tool relevant not only for automotive-style gateways but also for industrial control, robotics and connected equipment.

The AI angle is still early, but the direction makes sense: use AI to inspect an existing project, identify protocols and messages, and pre-build the TeleCANesis blueprint so engineers start from a working draft instead of a blank canvas. For teams building software-defined machines, cloud-connected controllers or AI-assisted products, that could make TeleCANesis a useful bridge between fieldbus data, application logic and agent workflows. The core idea is straightforward: stop hand-coding translation layers every time the system grows, and treat connectivity as a configurable part of the architecture instead of a recurring rewrite.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=MvX0zdWJ0fY

Makat AI Electronics Procurement, BoM Analysis, Real-Time Pricing, Component Sourcing

Posted by – March 14, 2026
Category: Exclusive videos

Makat is pitching a more data-driven version of open-market component buying: instead of opaque broker calls and manual quote chasing, the platform is built around real-time pricing, availability checks, supplier scoring, and transaction workflows that let a buyer move from BoM analysis to PO placement inside one digital flow. The company frames this as AI-powered independent distribution for OEMs and CMs, with emphasis on shortage management, cost reduction, excess inventory handling, and transparent markup rather than black-box brokering. https://www.makat.ai/

What stands out in this interview is the attempt to turn tactical procurement into something more strategic. The demo revolves around board-level electronics sourcing, where Makat says it can highlight risk, identify alternate distributors, benchmark pricing across multiple supply channels, and show where a customer may be overpaying or exposed to supply disruption. That matters in electronics manufacturing, where line stoppages, allocation pressure, NCNR exposure, and fragmented broker networks still make spot buys expensive and slow to execute.

The AI angle here is not presented as a generic chatbot layer, but as a sourcing and procurement engine: benchmarking supplier quotes, ranking vendors, analyzing stock positions, and automating parts of supplier communication and decision support. In practice, that places the platform somewhere between electronics distribution, supply-chain intelligence, and procurement workflow automation. The interesting claim is not only visibility, but transactability: Makat says it acts as vendor of record, taking ownership of sourcing, logistics, and delivery rather than only recommending where to buy.

Filmed at Embedded World 2026 in Nuremberg, the conversation shows how much the electronics supply chain is shifting toward digital procurement infrastructure. Makat’s message is that the future of component sourcing is less about informal broker relationships and more about comparison analytics, supplier data, workflow automation, and accountable execution. For manufacturers dealing with shortages, alternates, price volatility, and multi-distributor sourcing, that is a relevant change in how component purchasing gets done today.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=SncbMKIVCtA

Edge Impulse Intelligent Factory at Embedded World 2026: Edge AI, YOLO-Pro, Digital Twin, Local LLM

Posted by – March 14, 2026
Category: Exclusive videos

Edge Impulse frames this demo around a practical factory problem: too many data streams, too little time to turn them into action. The setup combines multi-line visual inspection, model inference, and operator-facing summaries into one edge pipeline, with object detection separating good parts from faulty ones and feeding decisions such as rework, scrap, or continued flow. The point is not AI as a cloud dashboard, but AI as a control layer sitting close to the machine. https://www.edgeimpulse.com/

What stands out is the way several workloads run side by side: four simulated production lines, defect detection, a digital-twin view of the floor, and a local language model interface for querying what is happening in real time. That makes the demo less about a single neural network and more about orchestration across computer vision, telemetry, and human-machine interaction, where latency and determinism matter more than headline model size.

The industrial case is clear. In manufacturing, stoppages are expensive, and even a small delay in inspection or triage can ripple through yield, throughput, and maintenance planning. Running inference at the edge helps keep response times predictable, keeps proprietary production data on premises, and avoids depending on a round trip to the cloud for every decision. That is especially relevant for defect detection, anomaly screening, and line monitoring where reliability has to be built into the stack.

Filmed at Embedded World 2026 in Nuremberg, the demo also shows how edge AI is moving beyond isolated vision nodes toward richer factory software. Edge Impulse positions its YOLO-Pro workflow around embedded industrial vision, while the local LLM layer points to a new operator model where staff can query live plant data in plain language instead of navigating separate dashboards. The result is a compact view of where industrial edge systems are headed: vision, digital twin, and natural-language analytics running together on site.

source https://www.youtube.com/watch?v=Aun0kQt-hH8

Grinn Edge AI SOMs with GenioSOM-360, AstraSOM-261x and ReneSOM-V2H at Embedded World

Posted by – March 14, 2026
Category: Exclusive videos

Grinn presents itself here less as a single-board vendor and more as a rapid productization partner for embedded AI. The core idea is consistent across the booth: take a complex SoC, turn it into a compact system-on-module, add the carrier design and software stack around it, and let customers focus on the actual device instead of rebuilding the low-level platform from zero. That comes through in the PCB inspection robot, the camera modules, and the industrial carrier boards shown in the demo. https://grinn-global.com/

The strongest thread in the video is practical edge vision. One demo uses robot vision and onboard AI to monitor PCB production, while another shows real-time hand-gesture tracking aimed at robotics and human-machine interaction. Rather than presenting AI as a cloud service, Grinn is framing it as local inference on embedded Linux hardware, where latency, power budget, camera input, and I/O integration matter as much as raw TOPS.

The hardware story is also broader than one chipset family. The booth includes a MediaTek-based GenioSOM platform, a Synaptics SL2610-based module shown in camera and industrial formats, and the newly announced GenioSOM-360, positioned as an extremely small module for edge AI designs. That makes the video relevant for developers looking at SOM-based designs for industrial vision, smart cameras, robotics, compact HMI devices, and other products where Ethernet, HDMI, MIPI camera interfaces, and software portability all have to come together on a tight schedule.

Another useful angle is how Grinn uses partner booths to validate its role in the ecosystem. The company’s modules and demos are spread across Synaptics, MediaTek, Würth Elektronik, RS and other stands, which says something important: Grinn is not only shipping modules, but also helping silicon vendors and distributors show real deployable use cases. Filmed at Embedded World 2026 in Nuremberg, the interview captures that middle layer of the embedded market where reference design, carrier integration, BSP work, and fast customization often decide whether an AI concept becomes a shipping product.

Overall, this is a good snapshot of where embedded AI is heading in 2026: smaller SOMs, stronger local vision processing, a faster path from evaluation kit to product, and more emphasis on software support alongside hardware. The interesting part is not just the silicon names, but the integration model behind them. Grinn is showing how MediaTek-, Synaptics- and Renesas-class processors can be turned into compact, application-ready platforms for machine vision, gesture recognition, industrial inspection and robotics at the edge today.

source https://www.youtube.com/watch?v=SRkLbeRIfzo

RECOM Low-Voltage High-Current Power Modules from 25A for AI, FPGA, DDR to 150A Multiphase Rails

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V and 0.5V depending on the part. That fills a gap between intermediate bus conversion and the final high-current core rail, where size, efficiency and layout matter most. https://recom-power.com/

The key theme here is what happens when SoCs, FPGAs and AI accelerators keep adding compute density while core voltages keep dropping. Lower voltage helps switching speed, but it pushes current sharply upward, so the power stage has to deliver tens or even hundreds of amps in a very small footprint. RECOM positions these modules as scalable building blocks: 25A per unit, 50A with two devices, and up to 150A through multiphase paralleling, aimed at robotics, machine vision, automotive compute and other embedded platforms with fast load steps.

A major technical point in the interview is transient response. Modern processors can jump from sleep to full activity extremely fast, so the regulator has to react before the rail drifts out of tolerance. RECOM’s adaptive constant-on-time control is presented as a way to respond faster than a conventional clock-cycle-limited loop, while also allowing lower output capacitance. That matters because less capacitance can reduce board area, BOM cost and stored energy on the rail, all while keeping the supply stable during aggressive current swings.
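The link between loop speed and output capacitance can be put in first-order numbers: during a load step ΔI, the regulator's effective response time Δt and the allowed droop ΔV set the minimum bulk capacitance, C ≥ ΔI·Δt/ΔV. A back-of-the-envelope sketch with hypothetical values (ESR, ESL and inductor slew limits are ignored here):

```python
def min_output_capacitance(delta_i_a, response_time_s, max_droop_v):
    """First-order estimate of the bulk output capacitance (in farads)
    needed to hold a rail during a load step: C >= dI * dt / dV.
    ESR, ESL and inductor current slew limits are ignored in this sketch."""
    return delta_i_a * response_time_s / max_droop_v

# Hypothetical numbers: a 50 A load step, 2 us effective loop response, 50 mV droop
c_slow = min_output_capacitance(50.0, 2e-6, 0.05)  # ~2000 uF
# Halving the effective response time halves the required capacitance
c_fast = min_output_capacitance(50.0, 1e-6, 0.05)  # ~1000 uF
```

This is the mechanism behind the capacitance claim: a faster control response shrinks Δt, which directly shrinks the required C, and with it board area and BOM cost.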

Another important layer is programmability. With PMBus telemetry and control, the module is not just a fixed converter but part of the system architecture. Output voltage can be trimmed very accurately, operating behavior can be tuned for different modes, and voltage margining can match the needs of individual processors characterized at the factory. In practice, that means the rail can be optimized for performance, efficiency and reliability instead of treating power as a static afterthought. The video was filmed at Embedded World 2026 in Nuremberg, where this kind of low-voltage, high-current power delivery is becoming central to embedded AI and high-density compute.
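For readers curious what PMBus telemetry looks like at the byte level, the decoding is compact: most readings (current, temperature) use the standard LINEAR11 format, while READ_VOUT uses LINEAR16 with an exponent published in the VOUT_MODE register. A minimal, module-agnostic sketch of the standard decode (the example register values are hypothetical):

```python
def _twos(val, bits):
    # Interpret val as a two's-complement signed integer of width `bits`.
    return val - (1 << bits) if val & (1 << (bits - 1)) else val

def decode_linear11(word):
    """Decode a PMBus LINEAR11 word (e.g. READ_IOUT, READ_TEMPERATURE_1):
    5-bit signed exponent in the top bits, 11-bit signed mantissa below."""
    exponent = _twos(word >> 11, 5)
    mantissa = _twos(word & 0x7FF, 11)
    return mantissa * 2.0 ** exponent

def decode_linear16(word, vout_mode):
    """Decode a PMBus LINEAR16 word (READ_VOUT): 16-bit mantissa, with the
    5-bit signed exponent taken from the VOUT_MODE register."""
    exponent = _twos(vout_mode & 0x1F, 5)
    return word * 2.0 ** exponent

# Hypothetical raw values: VOUT_MODE = 0x14 (exponent -12), READ_VOUT = 0x0B33
vout = decode_linear16(0x0B33, 0x14)  # ~0.70 V, a plausible core rail
iout = decode_linear11(0xE190)        # 25.0 A
```

The same registers run in the other direction: writing VOUT_COMMAND in LINEAR16 is how a host trims or margins the rail, which is the programmability the interview describes.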

The broader context also matters. RECOM highlights a portfolio that runs from tiny isolated converters to high-power systems, and its latest public messaging around Embedded World 2026 also points to discrete power IC and transformer options alongside PoL modules. That makes this launch interesting not just as one new regulator, but as part of a wider push toward configurable, modular power design. For engineers working on next-generation FPGA, SoC and edge AI hardware, the real takeaway is simple: power delivery is now an active design domain, with telemetry, programmability, interleaving, EMI behavior and transient control all shaping what the processor can actually do.

source https://www.youtube.com/watch?v=L91dBTq3rK8

RECOM 65W GaN AC/DC, 1200W Fanless PMBus PSU, 2U DIN Rail Power

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is showing how far compact AC/DC design has moved when mechanical compatibility stays fixed but output power climbs sharply. The headline part here is the new 65W PCB-mount AC/DC family, presented in the same footprint and pinout as an earlier 30W generation, so designers can scale power without rerouting the board or redesigning the front end. The move to GaN switching is central: faster switching, higher efficiency, smaller magnetics and better power density all show up directly in the module size, transformer reduction and lower material use. https://recom-power.com/

What makes that interesting is not only density, but migration path. A pin-compatible upgrade from lower power to 65W is useful for products that start with one load profile and later need more headroom, whether that is for industrial control, embedded compute, test equipment or medical electronics. The open-frame variant shown in the interview pushes the same platform into chassis-mount use, with integrated surge handling and common-mode filtering aimed at installations where grounding, EMI and earth-loop behavior matter more than in a floating-output board design.

The bigger power story is the fanless 1200W class. RECOM’s RACM1200-V platform is built around baseplate cooling, up to 1000W continuous fanless output with 1200W boost, PMBus visibility, and digital control for monitoring, fault handling and application-specific behavior. That makes it relevant for medical, industrial and automation systems where acoustics, reliability and service life often matter more than adding a fan. The interview also touches on firmware tuning, power limiting and protection strategy, which is increasingly where power supplies become part of the system architecture rather than just a power brick.

Another practical angle is cabinet density. RECOM’s newer ultra-slim DIN-rail family uses a 2U step-shape format for 30W, 60W and 90W versions, keeping the same width while pushing higher output into flat distribution panels and home or building automation cabinets. The 90W version is especially notable because RECOM positions it against wider conventional alternatives, with high efficiency, push-in terminals, audible-noise suppression and tighter panel utilization. Filmed at Embedded World 2026 in Nuremberg, the discussion ties together GaN, thermal design, EMC filtering, PMBus telemetry and mechanical standardization in a way that feels very relevant to current embedded power design.

Overall, this is less about one isolated launch and more about RECOM’s broader direction: higher power density where GaN makes sense, digital control at higher wattage, and space-efficient AC/DC form factors for embedded and automation installs. The useful takeaway is that smaller magnetics, slimmer DIN-rail geometry, conduction-cooled kilowatt supplies and drop-in board upgrades are all converging toward the same goal: more power in less volume, with fewer compromises in certification, thermal behavior and integration effort.

source https://www.youtube.com/watch?v=-hISqLa3kmg

Thistle Technologies Edge AI Security, Secure Boot, OTA Updates, Model Signing

Posted by – March 13, 2026
Category: Exclusive videos

Thistle Technologies is tackling a familiar embedded problem: the industry knows what strong security should look like, but secure boot, signed firmware, encrypted updates, hardware root of trust integration, and key handling still take too much board-specific work for most teams. This interview explains how Thistle is trying to compress that effort from months into hours by giving device makers one platform for secure boot enablement, OTA orchestration, firmware signing, release control, and now protected Edge AI model deployment. https://thistle.tech/product

A key point here is that AI models on embedded devices now need the same trust chain as firmware. Thistle’s approach is to sign, encrypt, version, and verify models back to hardware so the device can confirm it is running the intended model rather than an injected or tampered payload. That matters for Edge AI pipelines where models change frequently, but provenance, integrity, and anti-extraction controls have to stay intact across deployment and update cycles. Embedded Computing Design’s 2026 Best in Show coverage frames this as hardware-anchored trust, model signing, provenance tracking, and protected delivery for Edge AI systems.

The demos make that concrete across very different hardware classes: small MCU-scale targets, Linux systems, Qualcomm platforms, MediaTek designs, and boards using Infineon OPTIGA Trust M. What stands out is the unified control plane: one backend for secure OTA, encrypted firmware bundles, model rollout, and version management across heterogeneous fleets. Thistle’s own product material also highlights CI/CD-oriented release tooling and Cloud KMS-backed signing flows, which fits well with what is shown in the interview about practical key management instead of passing secrets around on laptops or USB sticks.

Another layer in the discussion is regulation. The video was filmed at Embedded World 2026 in Nuremberg, where security and lifecycle maintenance were major themes, and Thistle explicitly connects its stack to Europe’s Cyber Resilience Act. That alignment makes sense: CRA preparation is pushing manufacturers toward secure-by-design architectures, authenticated updates, vulnerability handling, and long-term maintenance for connected products. In that context, the value here is not a vague “security platform” pitch but a workflow that ties silicon security features, software release discipline, and field update reliability into one operational path.

The most interesting part of the conversation is also the most realistic one: nobody claims 100% security. Instead, the argument is that embedded systems controlling physical processes, infrastructure, robotics, and safety-relevant equipment can no longer accept weak boot chains, ad hoc signing, or unsecured model refresh. For teams shipping connected products with Edge AI, this is really about reducing attack surface while keeping deployment practical: secure boot, encrypted OTA, hardware-backed key custody, model verification, and fleet-wide update management brought into a single repeatable flow.

source https://www.youtube.com/watch?v=dbkKcFbHaOw

RECOM discrete DC/DC solutions, isolated power ICs and SMD transformers explained

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is broadening its power portfolio beyond classic modules and into discrete isolated DC/DC building blocks, giving design teams a more flexible path from concept to production. The key idea in this interview is not just component availability, but a structured design flow built around matched power ICs, SMD transformers, and ready-made discrete reference solutions. Instead of forcing engineers to choose between a fully integrated module and a fully custom analog design from scratch, RECOM is positioning itself in the middle with pre-matched combinations that remove much of the uncertainty from isolated power design. https://recom-power.com/

What makes the concept interesting is the “your design, your choice” approach. An engineer can start with only the IC, select an IC plus a validated matching transformer, or order a complete discrete low-power isolated DC/DC implementation prepared by RECOM. That matters because transformer-driver matching is often where discrete converter design becomes slow and risky, especially when magnetics, topology, isolation constraints, and board-level integration all have to line up at once.

The technical focus is clearly on low-power isolated DC/DC conversion, where the interplay between the controller IC and the transformer largely defines whether the design behaves properly. RECOM highlights very small ICs, compact SMD transformers, and board-level discrete solutions that can be tested directly in an application. This gives developers a way to evaluate isolated converter behavior, tune system requirements, and decide whether a modular converter, a semi-custom discrete stage, or individual discrete parts is the better fit for cost, layout, and product differentiation.

The main value proposition here is speed. RECOM says it can deliver a ready discrete solution within 20 days, which shifts the conversation from pure component sourcing to design acceleration and faster time to market. For embedded developers working on industrial, communications, automation, or edge electronics, that can be more important than squeezing out a marginal efficiency gain, because the real bottleneck is often engineering time, validation effort, and getting hardware into the field quickly. The video was filmed at Embedded World 2026 in Nuremberg, where this launch was presented as a bridge between RECOM’s established module business and a new discrete power strategy.

Overall, the story is about giving engineers more control without pushing all the risk back onto them. RECOM is using the know-how it built through years of DC/DC module design and exposing part of that expertise through matched IC-transformer pairs and pre-built discrete solutions. That turns isolated power from a slow, magnetics-heavy design exercise into something closer to a configurable platform, which is a notable shift for teams that need isolation, compact SMD implementation, and faster prototyping without abandoning the option of deeper customization later on.

source https://www.youtube.com/watch?v=f6SsrygbdEk

Renesas RH850/U2B at Embedded World 2026, Motor Control, FFT, Zonal Controller

Posted by – March 13, 2026
Category: Exclusive videos

Renesas is showing a very practical side of the RH850/U2B here: how an automotive MCU can tackle a noisy BLDC motor with visible torque ripple, vibration, and cogging, then smooth it out with a dedicated compensation algorithm. Instead of framing motor control as an abstract benchmark, this demo makes the effect easy to hear, feel, and measure through the FFT view and the before/after response of the system. https://www.renesas.com/en/products/rh850-u2b

The key technical point is hardware offload. In this setup, the compensation workload runs on the RH850/U2B embedded hardware accelerator rather than relying only on the main CPU cores, which cuts the control cycle time from roughly 15.4 microseconds to about 5 microseconds. That kind of latency reduction matters in inverter and motor-control loops because it improves response, reduces ripple, and helps push precision further at low speed where cogging effects are easy to notice.

What makes the demo more relevant than a simple motor-control board is where Renesas positions the device. RH850/U2B is part of its cross-domain automotive MCU family, aimed at zonal controllers and unified ECU designs where motor control, safety, security, and real-time processing increasingly need to coexist on one device. The discussion around ASIL certification, EVITA Full capability, multi-core processing, and lockstep support places this clearly in the context of modern vehicle E/E architecture rather than a standalone industrial drive demo.

Filmed at Embedded World 2026 in Nuremberg, the demo is a good example of how Renesas is linking motor-control quality to broader automotive compute trends: hardware acceleration, deterministic timing, functional safety, cybersecurity, and domain integration. The result shown here is simple but meaningful: lower acoustic noise, lower vibration, faster execution, and a more efficient control path for EV, HEV, actuator, and zonal automotive applications.

source https://www.youtube.com/watch?v=7-LnA57KlGo

Yocto Project at Embedded World 2026: LTS, SBOM, BitBake, RISC-V, Embedded Linux

Posted by – March 13, 2026
Category: Exclusive videos

This conversation frames Yocto less as a single distro and more as the infrastructure layer many embedded Linux teams eventually need once products move beyond quick demos. The interview highlights why developers keep coming back to it: reproducible builds, minimal images, board bring-up, source mirroring, A/B update workflows, and a build system that only pulls in what the target actually needs. That matters for performance, maintenance, and attack surface, especially when long-lived devices are deployed in volume. https://www.yoctoproject.org/

A big theme here is maintainability over time. The speakers point to the next Yocto LTS cycle, with four years of support, as a practical answer for product teams facing long qualification windows and regulatory pressure. Security is presented in a very concrete way: SBOM generation, vulnerability scanning, CVE tracking, and the ability to rebuild images quickly when fixes land. That makes Yocto relevant not just for BSP work and image creation, but for Cyber Resilience Act readiness and ongoing fleet maintenance in the field.

What also comes through is how much of Yocto’s value sits in BitBake and the surrounding workflow rather than in any single package set. The discussion around bitbake-setup, shared sstate cache, layer configuration, and reusable board support shows why experienced engineers see it as a build framework rather than just another embedded Linux option. First builds may take time, but incremental rebuilds, cache reuse across projects, and structured metadata make the system much more scalable once teams are juggling multiple products, branches, and hardware targets at once.
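Much of that workflow comes down to a handful of configuration lines. A hedged local.conf sketch showing SBOM generation, CVE checking, a shared state cache, and source mirroring (paths are hypothetical; class names as in recent Yocto releases):

```conf
# conf/local.conf (fragment)
MACHINE = "qemux86-64"

# SBOM: emit SPDX documents alongside each image
INHERIT += "create-spdx"

# CVE scanning of recipes during builds
INHERIT += "cve-check"

# Shared state cache reused across projects and branches (path hypothetical)
SSTATE_DIR = "/srv/yocto/sstate-cache"

# Mirror sources locally for reproducible, offline-capable builds
DL_DIR = "/srv/yocto/downloads"
BB_GENERATE_MIRROR_TARBALLS = "1"
```

Pointing several build trees at the same SSTATE_DIR is what makes incremental rebuilds cheap across products, which is the cache-reuse point made in the interview.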

The interview also gives a useful view of Yocto’s hardware reach. ARM is treated as routine, cross-compilation is normal, and RISC-V now feels more strategic than experimental, with community layers, board support, and stronger testing infrastructure getting more attention. There is also an interesting hint that Yocto thinking may spread beyond classic embedded targets, especially through meta-virtualization, container image construction, multi-architecture builds, and ultra-small deployable runtimes where provenance and SBOM detail matter a lot.

Just as important, this is a story about community process. The speakers are candid about what works well and what still needs refinement, from mailing-list driven contribution flow to newer GitHub-style expectations, and from volunteer patch flow to paid maintainers, release management, and LTS coordination funded by members. Filmed at Embedded World 2026 in Nuremberg, the video ends up showing Yocto as a mature, open, vendor-neutral build ecosystem for embedded Linux, where security, reproducibility, board enablement, and long-term support are all tied together in one stack.

source https://www.youtube.com/watch?v=YPjoayYbosQ

Renesas RZ/V2H and RZ/V2N Robotics Demo, Gesture AI, Voice Control, ROS 2

Posted by – March 12, 2026
Category: Exclusive videos

Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup combines an off-the-shelf dexterous hand with an RZ/V2H board, where a camera tracks human hand gestures, runs local inference, and maps the result to motors and axes so the robot hand mirrors the operator in real time. It is a practical example of embedded vision, gesture recognition, motor control, and low-latency human-machine interaction coming together on one platform. https://www.renesas.com/en
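The gesture-to-actuator path can be sketched abstractly: the vision model emits normalized joint angles, and a thin mapping layer turns them into actuator commands. The demo's actual interface is not shown, so names and ranges here are hypothetical:

```python
def angles_to_servo_us(joint_angles, min_us=1000, max_us=2000):
    """Map normalized joint angles (0.0..1.0, e.g. one value per finger
    joint from a hand-tracking model) to servo pulse widths in
    microseconds. Ranges and scaling are purely illustrative."""
    def clamp(x):
        return max(0.0, min(1.0, x))
    return [round(min_us + clamp(a) * (max_us - min_us)) for a in joint_angles]

print(angles_to_servo_us([0.0, 0.5, 1.0]))  # [1000, 1500, 2000]
```

Keeping this mapping on the same device as the inference is what makes the mirroring feel real-time: there is no network hop between the camera frame and the motor command.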

What makes the RZ/V2H part interesting here is not just raw AI throughput, but the system balance behind it. Renesas positions it for robotics and vision AI with multicore processing, DRP-AI acceleration, image-processing capability, and support for multiple camera streams, which fits workloads such as hand tracking, perception fusion, and coordinated motion. In this context the demo is less about a robotic hand alone and more about how sensor input, inference, and actuator control can be collapsed into a compact edge robotics design.

The second demo shifts toward collaborative robotics and tool assistance. Here, a robotic arm based on the RZ/V2N platform accepts both voice commands and hand gestures, running in a ROS 2 architecture to identify a requested tool, move to the right position, and present it to the operator. That makes the story broader than vision AI: it becomes a multimodal interface problem involving speech, gesture, robot middleware, task flow, and safe human-robot collaboration on the edge.

MXT’s role adds another useful layer, because this is not only a silicon story but also an ecosystem story. As a Renesas preferred partner, MXT has worked with Renesas across modules, evaluation kits, and custom boards, and the board shown here is described as a Raspberry Pi form factor design that can work with existing expansion hardware. That matters for faster prototyping, easier integration, and lower friction when developers want to move from proof of concept to a more product-like robotics platform.

Seen from Embedded World 2026 in Nuremberg, these demos reflect where industrial and service robotics are heading: more cameras, more AI models, more joints, more natural interfaces, and tighter integration between Linux, ROS 2, vision pipelines, and motor control. The most useful takeaway is not hype around humanoids, but the way Renesas is stacking practical building blocks for gesture-controlled manipulators, voice-driven cobots, and embedded robot perception where latency, power, and system cost still matter.

source https://www.youtube.com/watch?v=-9ba3hnz_ek

Renesas Robotics Sensor Tech at Embedded World 2026, Edge AI, Force Sensing, Predictive Maintenance

Posted by – March 12, 2026
Category: Exclusive videos

Renesas frames this demo around sensing as a core building block for edge AI, robotics, mobility, and industrial automation. The focus is not on one isolated component but on how force sensing, position sensing, impedance sensing, and low-footprint embedded intelligence can be combined into compact actuator and HMI designs that are precise, robust, and realistic to scale in production. https://www.renesas.com/IPS

The robotic hand is a good example of that direction. Instead of simple fingertip touch, the demo shows full-finger force measurement, so grip strength and the force curve over time can be tracked as the grasp develops. That matters for dexterous manipulation, safe human-robot interaction, and more natural motion control, where the system must regulate pressure finely enough to hold fragile objects without instability or slip.

A second theme is robotic joint feedback. Renesas positions inductive, magnet-free sensing as a practical fit for humanoid and industrial robot joints because it can deliver absolute position information, high resolution, immunity to stray magnetic fields, and better robustness against moisture, vibration, dust, and electromagnetic disturbance. That lines up with the company’s newer inductive position sensor push, including parts such as the RAA2P3226 for robotic joints, where compact integration, low latency, and tight angular accuracy are critical for servo control and coordinated motion.

The mobility demo extends that sensing approach into the human-machine interface. The scooter handle detects whether both hands are present using impedance sensing rather than conventional capacitive touch, which improves operation with gloves and in humid or wet conditions. Renesas is also emphasizing more complete reference algorithms around these sensors, so OEMs can tune sensitivity and recognition behavior in software without starting from scratch, which is often what product teams need when time-to-design is tight.

The final part of the video is about edge intelligence in a more literal sense: sensor data processed locally on a modest 32-bit microcontroller to infer things that are not directly measured, such as leakage, friction, or load change for predictive maintenance. That is a useful distinction in industrial sensing because it keeps latency, memory demand, power budget, and system cost under control while still enabling condition monitoring. Filmed at Embedded World 2026 in Nuremberg, the demo shows Renesas pushing sensors beyond raw measurement toward embedded perception for robotics, micromobility, and Industry 4.0.
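That style of inference fits comfortably on a small MCU because the features are cheap: for example, tracking the RMS of a vibration or motor-current window and flagging drift against a healthy baseline. A deliberately tiny sketch of the idea (thresholds and signal choice are hypothetical, not Renesas's algorithm):

```python
import math

def rms(samples):
    """Root-mean-square of a sample window (vibration, current, ...)."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def drift_alarm(baseline_rms, window, factor=1.2):
    """Flag a possible friction/leakage/load change when the latest window's
    RMS exceeds the healthy baseline by `factor`. A minimal sketch of
    on-MCU condition monitoring; the threshold is purely illustrative."""
    return rms(window) > factor * baseline_rms
```

The appeal is exactly what the video stresses: a few multiplies and a compare per sample keep latency, memory and power within a modest 32-bit MCU budget while still catching developing faults.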

source https://www.youtube.com/watch?v=qjhmr43MScA