JetBrains Embedded Development with CLion, AI Agents, ESP32, ST, Zephyr, Local AI

Posted by – March 14, 2026
Category: Exclusive videos

JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea is a consistent developer experience across targets such as Espressif and STMicroelectronics, with support for frameworks like Zephyr and modern build flows around CMake, so firmware work can happen inside one environment instead of being spread across multiple disconnected tools. https://www.jetbrains.com/clion/embedded/

A big part of that story is AI, but in a practical embedded context rather than as a generic chatbot layer. JetBrains shows agent support directly inside the IDE, including Junie, external agents, MCP connectivity and bring-your-own-key workflows, with the emphasis on tool grounding and agent orchestration rather than just the raw model. That matters for firmware teams because the useful part is not only code generation, but being able to trigger project-aware actions such as rebuilds, refreshes, navigation and other IDE-native operations in a controlled way.
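
To make the tool-grounding idea more concrete, here is a minimal sketch of how an agent-callable action can be exposed through MCP, using the open-source MCP Python SDK. The rebuild command is a hypothetical stand-in for a project-aware IDE action, not JetBrains' actual integration:

```python
# Minimal MCP tool server sketch using the open-source MCP Python SDK
# (pip install mcp). The rebuild action below is hypothetical; it stands
# in for whatever project-aware operation an agent host is granted.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("firmware-tools")

@mcp.tool()
def rebuild_firmware(target: str = "all") -> str:
    """Run a CMake build for the given target and return the log tail."""
    result = subprocess.run(
        ["cmake", "--build", "build", "--target", target],
        capture_output=True, text=True,
    )
    return (result.stdout + result.stderr)[-2000:]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent host can invoke it
```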

The interview also points to a broader shift in embedded engineering: local and on-premises AI is becoming relevant for teams that cannot send code or design data to public cloud services. JetBrains is clearly leaning into that requirement, showing local AI running on NVIDIA hardware and discussing private deployment models for LLM-backed development. For regulated sectors and larger product teams, that makes the IDE part of a secure internal toolchain rather than a thin client to an external service.

What makes the booth discussion interesting is that it connects classic embedded pain points with current software trends. CLion is presented as a bridge between microcontroller and SoC projects, vendor ecosystems, RTOS-oriented work and newer AI-assisted flows, while keeping the core promise around productivity, code intelligence and debugging. Filmed at Embedded World 2026 in Nuremberg, the video captures how JetBrains is positioning embedded work alongside mainstream software development instead of treating it as a separate niche.

The result is a view of embedded development where the IDE becomes the integration layer for toolchains, frameworks, AI agents and secure deployment options. Rather than chasing a single board demo, JetBrains is making the case that teams at companies such as automotive and industrial OEMs need a stable, extensible workspace that can handle Zephyr, ESP-IDF, STM32-class projects, CMake-based builds, Rust support and agentic coding in the same place. That makes this less about one feature and more about how firmware teams may want to structure their workflow over the next few years.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=cYpC1drBfqg

TeleCANesis at Embedded World 2026: Hub for CAN, Modbus, I2C, Cloud, HMI and AI Data Routing

Posted by – March 14, 2026
Category: Exclusive videos

TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a signal layout changes. In practice, the value is less about “moving data” in the abstract and more about preserving engineering time for product logic, analytics and HMI work. https://telecanesis.com/

What stands out in this demo is the workflow refinement inside the web-based Hub. Codecs are becoming system-wide rather than tied to a single capsule, which makes reuse much cleaner across a blueprint. The new imports flow also looks more practical for DBC-driven design: engineers can ingest a file once, label it, selectively pull only the required messages into each capsule, and later re-import changed definitions instead of rebuilding the whole route map. That is a meaningful shift for teams dealing with evolving vehicle, battery or industrial bus definitions over time.
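
The selective-import workflow is easiest to picture in code. A rough equivalent with the open-source cantools library is sketched below; the DBC file and message names are invented for illustration, and the Hub itself drives this through its web UI rather than an API:

```python
# Sketch of selective DBC ingestion with the open-source cantools library
# (pip install cantools). File and message names here are hypothetical.
import cantools

db = cantools.database.load_file("battery_pack.dbc")  # ingest once

# Pull only the messages a given capsule actually needs, by name.
wanted = {"PackVoltage", "PackCurrent", "CellTempMax"}
selected = [m for m in db.messages if m.name in wanted]

for msg in selected:
    signal_names = [s.name for s in msg.signals]
    print(f"0x{msg.frame_id:X} {msg.name}: {signal_names}")

# A received frame decodes against the imported definition, e.g.:
# decoded = db.decode_message("PackVoltage", raw_payload_bytes)
```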

The use case described here is a good fit for battery systems, domain controllers and other heterogeneous embedded environments where one internal data model has to feed cloud services, databases, HMIs and mobile apps in different formats. Rather than expose every raw signal upstream, TeleCANesis lets developers normalize data internally and publish only the subset that matters to customers or backend services. Filmed at Embedded World 2026 in Nuremberg, the demo also hints at where the product is moving next, with broader plug-in support, updated ingestion in the coming 1.1 release, and recent additions such as CANopen and serial connector plug-ins.

There is also a practical deployment story behind it. The runtime is presented as largely platform-agnostic, with only a thin OS and compiler abstraction layer needing adaptation, which makes ports to new ARM or MCU targets much faster than a typical middleware stack. The company points to support around QNX, Raspberry Pi 4 and 5, Yocto Scarthgap, and integration paths toward HMI frameworks such as Qt, Slint, GL Studio and Unity. That combination makes the tool relevant not only for automotive-style gateways but also for industrial control, robotics and connected equipment.

The AI angle is still early, but the direction makes sense: use AI to inspect an existing project, identify protocols and messages, and pre-build the TeleCANesis blueprint so engineers start from a working draft instead of a blank canvas. For teams building software-defined machines, cloud-connected controllers or AI-assisted products, that could make TeleCANesis a useful bridge between fieldbus data, application logic and agent workflows. The core idea is straightforward: stop hand-coding translation layers every time the system grows, and treat connectivity as a configurable part of the architecture instead of a recurring rewrite.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=MvX0zdWJ0fY

Makat AI Electronics Procurement, BoM Analysis, Real-Time Pricing, Component Sourcing

Posted by – March 14, 2026
Category: Exclusive videos

Makat is pitching a more data-driven version of open-market component buying: instead of opaque broker calls and manual quote chasing, the platform is built around real-time pricing, availability checks, supplier scoring, and transaction workflows that let a buyer move from BoM analysis to PO placement inside one digital flow. The company frames this as AI-powered independent distribution for OEMs and CMs, with emphasis on shortage management, cost reduction, excess inventory handling, and transparent markup rather than black-box brokering. https://www.makat.ai/

What stands out in this interview is the attempt to turn tactical procurement into something more strategic. The demo revolves around board-level electronics sourcing, where Makat says it can highlight risk, identify alternate distributors, benchmark pricing across multiple supply channels, and show where a customer may be overpaying or exposed to supply disruption. That matters in electronics manufacturing, where line stoppages, allocation pressure, NCNR exposure, and fragmented broker networks still make spot buys expensive and slow to execute.

The AI angle here is not presented as a generic chatbot layer, but as a sourcing and procurement engine: benchmarking supplier quotes, ranking vendors, analyzing stock positions, and automating parts of supplier communication and decision support. In practice, that places the platform somewhere between electronics distribution, supply-chain intelligence, and procurement workflow automation. The interesting claim is not only visibility, but transactability: Makat says it acts as vendor of record, taking ownership of sourcing, logistics, and delivery rather than only recommending where to buy.

Filmed at Embedded World 2026 in Nuremberg, the conversation shows how much the electronics supply chain is shifting toward digital procurement infrastructure. Makat’s message is that the future of component sourcing is less about informal broker relationships and more about comparison analytics, supplier data, workflow automation, and accountable execution. For manufacturers dealing with shortages, alternates, price volatility, and multi-distributor sourcing, that is a relevant change in how component purchasing gets done today.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=SncbMKIVCtA

Edge Impulse Intelligent Factory at Embedded World 2026: Edge AI, YOLO-Pro, Digital Twin, Local LLM

Posted by – March 14, 2026
Category: Exclusive videos

Edge Impulse frames this demo around a practical factory problem: too many data streams, too little time to turn them into action. The setup combines multi-line visual inspection, model inference, and operator-facing summaries into one edge pipeline, with object detection separating good parts from faulty ones and feeding decisions such as rework, scrap, or continued flow. The point is not AI as a cloud dashboard, but AI as a control layer sitting close to the machine. https://www.edgeimpulse.com/
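
That control-layer idea reduces to a simple pattern: run the detector locally and map its labels to line actions without a cloud round trip. A generic sketch, with placeholder labels and a stubbed detector rather than Edge Impulse's actual SDK:

```python
# Generic detection-to-decision routing sketch. Labels, actions and the
# detector stub are placeholders, not Edge Impulse's SDK or model output.
from typing import Callable

ACTIONS = {"good": "continue", "scratch": "rework", "crack": "scrap"}

def route_part(detect: Callable[[bytes], str], frame: bytes) -> str:
    label = detect(frame)                  # local inference at the line
    return ACTIONS.get(label, "hold")      # unknown defects stop for triage

# Stubbed example:
print(route_part(lambda frame: "scratch", b"\x00"))  # -> "rework"
```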

What stands out is the way several workloads run side by side: four simulated production lines, defect detection, a digital-twin view of the floor, and a local language model interface for querying what is happening in real time. That makes the demo less about a single neural network and more about orchestration across computer vision, telemetry, and human-machine interaction, where latency and determinism matter more than headline model size.

The industrial case is clear. In manufacturing, stoppages are expensive, and even a small delay in inspection or triage can ripple through yield, throughput, and maintenance planning. Running inference on the edge helps keep response times predictable, keeps proprietary production data on premises, and avoids depending on a round trip to the cloud for every decision. That is especially relevant for defect detection, anomaly screening, and line monitoring where reliability has to be built into the stack.

Filmed at Embedded World 2026 in Nuremberg, the demo also shows how edge AI is moving beyond isolated vision nodes toward richer factory software. Edge Impulse positions its YOLO-Pro workflow around embedded industrial vision, while the local LLM layer points to a new operator model where staff can query live plant data in plain language instead of navigating separate dashboards. The result is a compact view of where industrial edge systems are headed: vision, digital twin, and natural-language analytics running together on site.

source https://www.youtube.com/watch?v=Aun0kQt-hH8

Grinn Edge AI SOMs with GenioSOM-360, AstraSOM-261x and ReneSOM-V2H at Embedded World

Posted by – March 14, 2026
Category: Exclusive videos

Grinn presents itself here less as a single-board vendor and more as a rapid productization partner for embedded AI. The core idea is consistent across the booth: take a complex SoC, turn it into a compact system-on-module, add the carrier design and software stack around it, and let customers focus on the actual device instead of rebuilding the low-level platform from zero. That comes through in the PCB inspection robot, the camera modules, and the industrial carrier boards shown in the demo. https://grinn-global.com/

The strongest thread in the video is practical edge vision. One demo uses robot vision and onboard AI to monitor PCB production, while another shows real-time hand-gesture tracking aimed at robotics and human-machine interaction. Rather than presenting AI as a cloud service, Grinn is framing it as local inference on embedded Linux hardware, where latency, power budget, camera input, and I/O integration matter as much as raw TOPS.

The hardware story is also broader than one chipset family. The booth includes a MediaTek-based GenioSOM platform, a Synaptics SL2610-based module shown in camera and industrial formats, and a newly announced GenioSOM-360 positioned as an extremely small module for edge AI designs. That makes the video relevant for developers looking at SOM-based designs for industrial vision, smart cameras, robotics, compact HMI devices, and other products where Ethernet, HDMI, MIPI camera interfaces, and software portability all have to come together on a tight schedule.

Another useful angle is how Grinn uses partner booths to validate its role in the ecosystem. The company’s modules and demos are spread across Synaptics, MediaTek, Würth Elektronik, RS and other stands, which says something important: Grinn is not only shipping modules, but also helping silicon vendors and distributors show real deployable use cases. Filmed at Embedded World 2026 in Nuremberg, the interview captures that middle layer of the embedded market where reference design, carrier integration, BSP work, and fast customization often decide whether an AI concept becomes a shipping product.

Overall, this is a good snapshot of where embedded AI is heading in 2026: smaller SOMs, stronger local vision processing, faster path from evaluation kit to product, and more emphasis on software support alongside hardware. The interesting part is not just the silicon names, but the integration model behind them. Grinn is showing how MediaTek, Synaptics and Renesas class processors can be turned into compact, application-ready platforms for machine vision, gesture recognition, industrial inspection and robotics at the edge today.

source https://www.youtube.com/watch?v=SRkLbeRIfzo

RECOM Low-Voltage High-Current Power Modules from 25A for AI, FPGA, DDR to 150A Multiphase Rails

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V and 0.5V depending on the part. That fills a gap between intermediate bus conversion and the final high-current core rail, where size, efficiency and layout matter most. https://recom-power.com/

The key theme here is what happens when SoCs, FPGAs and AI accelerators keep adding compute density while core voltages keep dropping. Lower voltage helps switching speed, but it pushes current sharply upward, so the power stage has to deliver tens or even hundreds of amps in a very small footprint. RECOM positions these modules as scalable building blocks: 25A per unit, 50A with two devices, and up to 150A through multiphase paralleling, aimed at robotics, machine vision, automotive compute and other embedded platforms with fast load steps.
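
The scaling claim is simple arithmetic, sketched below with the 25A per-module rating from the interview; the core-voltage figure is illustrative:

```python
# Back-of-envelope view of multiphase scaling: 25 A per module paralleled
# into a shared rail. Only the 25 A rating comes from the interview.
import math

PHASE_CURRENT_A = 25.0

def phases_needed(load_a: float) -> int:
    return math.ceil(load_a / PHASE_CURRENT_A)

v_core = 0.5  # illustrative low-voltage core rail
for load in (25, 50, 150):
    print(f"{load:>3} A @ {v_core} V = {load * v_core:5.1f} W "
          f"-> {phases_needed(load)} phase(s)")
```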

A major technical point in the interview is transient response. Modern processors can jump from sleep to full activity extremely fast, so the regulator has to react before the rail drifts out of tolerance. RECOM’s adaptive constant-on-time control is presented as a way to respond faster than a conventional clock-cycle-limited loop, while also allowing lower output capacitance. That matters because less capacitance can reduce board area, BOM cost and stored energy on the rail, all while keeping the supply stable during aggressive current swings.
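
Why faster response allows less capacitance follows from a first-order droop estimate: until the loop reacts, the output capacitors carry the load step alone. The numbers below are illustrative, not RECOM design data:

```python
# First-order droop estimate: dV = dI * dt / C, so the capacitance needed
# scales with how long the regulator takes to respond. Values illustrative.
dI = 50.0    # load step, A
dV = 0.02    # allowed droop on a low-voltage rail, V

for dt in (2e-6, 0.5e-6):              # slower vs faster control response
    C_required = dI * dt / dV
    print(f"dt = {dt * 1e6:.1f} us -> C >= {C_required * 1e6:.0f} uF")
# Cutting the response time 4x cuts the required output capacitance 4x.
```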

Another important layer is programmability. With PMBus telemetry and control, the module is not just a fixed converter but part of the system architecture. Output voltage can be trimmed very accurately, operating behavior can be tuned for different modes, and voltage margining can match the needs of individual processors characterized at the factory. In practice, that means the rail can be optimized for performance, efficiency and reliability instead of treating power as a static afterthought. The video was filmed at Embedded World 2026 in Nuremberg, where this kind of low-voltage, high-current power delivery is becoming central to embedded AI and high-density compute.
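
A taste of what that telemetry looks like from the host side, sketched with the smbus2 package. The command codes are standard PMBus; the bus number and device address are hypothetical, and real modules may need device-specific configuration first:

```python
# Reading output voltage over PMBus with smbus2 (pip install smbus2).
# VOUT_MODE/READ_VOUT are standard PMBus commands; address is hypothetical.
from smbus2 import SMBus

ADDR = 0x40            # example device address on the bus
VOUT_MODE = 0x20       # exponent for the linear16 VOUT encoding
READ_VOUT = 0x8B

def read_vout(bus: SMBus) -> float:
    mode = bus.read_byte_data(ADDR, VOUT_MODE) & 0x1F
    exp = mode - 32 if mode > 15 else mode    # 5-bit two's complement
    raw = bus.read_word_data(ADDR, READ_VOUT)
    return raw * 2.0 ** exp

with SMBus(1) as bus:                         # I2C bus number varies by host
    print(f"VOUT = {read_vout(bus):.3f} V")
```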

The broader context also matters. RECOM highlights a portfolio that runs from tiny isolated converters to high-power systems, and its latest public messaging around Embedded World 2026 also points to discrete power IC and transformer options alongside PoL modules. That makes this launch interesting not just as one new regulator, but as part of a wider push toward configurable, modular power design. For engineers working on next-generation FPGA, SoC and edge AI hardware, the real takeaway is simple: power delivery is now an active design domain, with telemetry, programmability, interleaving, EMI behavior and transient control all shaping what the processor can actually do.


source https://www.youtube.com/watch?v=L91dBTq3rK8

RECOM 65W GaN AC/DC, 1200W Fanless PMBus PSU, 2U DIN Rail Power

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is showing how far compact AC/DC design has moved when mechanical compatibility stays fixed but output power climbs sharply. The headline part here is the new 65W PCB-mount AC/DC family, presented in the same footprint and pinout as an earlier 30W generation, so designers can scale power without rerouting the board or redesigning the front end. The move to GaN switching is central: faster switching, higher efficiency, smaller magnetics and better power density all show up directly in the module size, transformer reduction and lower material use. https://recom-power.com/

What makes that interesting is not only density, but migration path. A pin-compatible upgrade from lower power to 65W is useful for products that start with one load profile and later need more headroom, whether that is for industrial control, embedded compute, test equipment or medical electronics. The open-frame variant shown in the interview pushes the same platform into chassis-mount use, with integrated surge handling and common-mode filtering aimed at installations where grounding, EMI and earth-loop behavior matter more than in a floating-output board design.

The bigger power story is the fanless 1200W class. RECOM’s RACM1200-V platform is built around baseplate cooling, up to 1000W continuous fanless output with 1200W boost, PMBus visibility, and digital control for monitoring, fault handling and application-specific behavior. That makes it relevant for medical, industrial and automation systems where acoustics, reliability and service life often matter more than adding a fan. The interview also touches on firmware tuning, power limiting and protection strategy, which is increasingly where power supplies become part of the system architecture rather than just a power brick.

Another practical angle is cabinet density. RECOM’s newer ultra-slim DIN-rail family uses a 2U step-shape format for 30W, 60W and 90W versions, keeping the same width while pushing higher output into flat distribution panels and home or building automation cabinets. The 90W version is especially notable because RECOM positions it against wider conventional alternatives, with high efficiency, push-in terminals, audible-noise suppression and tighter panel utilization. Filmed at Embedded World 2026 in Nuremberg, the discussion ties together GaN, thermal design, EMC filtering, PMBus telemetry and mechanical standardization in a way that feels very relevant to current embedded power design.

Overall, this is less about one isolated launch and more about RECOM’s broader direction: higher power density where GaN makes sense, digital control at higher wattage, and space-efficient AC/DC form factors for embedded and automation installs. The useful takeaway is that smaller magnetics, slimmer DIN-rail geometry, conduction-cooled kilowatt supplies and drop-in board upgrades are all converging toward the same goal: more power in less volume, with fewer compromises in certification, thermal behavior and integration effort.

source https://www.youtube.com/watch?v=-hISqLa3kmg

Thistle Technologies Edge AI Security, Secure Boot, OTA Updates, Model Signing

Posted by – March 13, 2026
Category: Exclusive videos

Thistle Technologies is tackling a familiar embedded problem: the industry knows what strong security should look like, but secure boot, signed firmware, encrypted updates, hardware root of trust integration, and key handling still take too much board-specific work for most teams. This interview explains how Thistle is trying to compress that effort from months into hours by giving device makers one platform for secure boot enablement, OTA orchestration, firmware signing, release control, and now protected Edge AI model deployment. https://thistle.tech/product

A key point here is that AI models on embedded devices now need the same trust chain as firmware. Thistle’s approach is to sign, encrypt, version, and verify models back to hardware so the device can confirm it is running the intended model rather than an injected or tampered payload. That matters for Edge AI pipelines where models change frequently, but provenance, integrity, and anti-extraction controls have to stay intact across deployment and update cycles. Embedded Computing Design’s 2026 Best in Show coverage frames this as hardware-anchored trust, model signing, provenance tracking, and protected delivery for Edge AI systems.
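
The core mechanism is the same one used for firmware: sign the artifact at release time, then verify it on-device against a trusted public key before use. A minimal sketch with the cryptography package; a production chain like the one described anchors the public key in hardware rather than in memory:

```python
# Sign-then-verify sketch for a model payload using the cryptography
# package (pip install cryptography). In a real chain the public key is
# provisioned into hardware-backed storage, not generated alongside.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()    # held by the release backend
model_blob = b"...tflite or onnx bytes..."    # placeholder payload

signature = signing_key.sign(model_blob)      # shipped with the OTA bundle

public_key = signing_key.public_key()         # on-device trust anchor
try:
    public_key.verify(signature, model_blob)  # raises if tampered
    print("model verified, safe to load")
except InvalidSignature:
    print("rejecting model update")
```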

The demos make that concrete across very different hardware classes: small MCU-scale targets, Linux systems, Qualcomm platforms, MediaTek designs, and boards using Infineon OPTIGA Trust M. What stands out is the unified control plane: one backend for secure OTA, encrypted firmware bundles, model rollout, and version management across heterogeneous fleets. Thistle’s own product material also highlights CI/CD-oriented release tooling and Cloud KMS-backed signing flows, which fits well with what is shown in the interview about practical key management instead of passing secrets around on laptops or USB sticks.

Another layer in the discussion is regulation. The video was filmed at Embedded World 2026 in Nuremberg, where security and lifecycle maintenance were major themes, and Thistle explicitly connects its stack to Europe’s Cyber Resilience Act. That alignment makes sense: CRA preparation is pushing manufacturers toward secure-by-design architectures, authenticated updates, vulnerability handling, and long-term maintenance for connected products. In that context, the value here is not a vague “security platform” pitch but a workflow that ties silicon security features, software release discipline, and field update reliability into one operational path.

The most interesting part of the conversation is also the most realistic one: nobody claims 100% security. Instead, the argument is that embedded systems controlling physical processes, infrastructure, robotics, and safety-relevant equipment can no longer accept weak boot chains, ad hoc signing, or unsecured model refresh. For teams shipping connected products with Edge AI, this is really about reducing attack surface while keeping deployment practical: secure boot, encrypted OTA, hardware-backed key custody, model verification, and fleet-wide update management brought into a single repeatable flow.

source https://www.youtube.com/watch?v=dbkKcFbHaOw

RECOM discrete DC/DC solutions, isolated power ICs and SMD transformers explained

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is broadening its power portfolio beyond classic modules and into discrete isolated DC/DC building blocks, giving design teams a more flexible path from concept to production. The key idea in this interview is not just component availability, but a structured design flow built around matched power ICs, SMD transformers, and ready-made discrete reference solutions. Instead of forcing engineers to choose between a fully integrated module and a fully custom analog design from scratch, RECOM is positioning itself in the middle with pre-matched combinations that remove much of the uncertainty from isolated power design. https://recom-power.com/

What makes the concept interesting is the “your design, your choice” approach. An engineer can start with only the IC, select an IC plus a validated matching transformer, or order a complete discrete low-power isolated DC/DC implementation prepared by RECOM. That matters because transformer-driver matching is often where discrete converter design becomes slow and risky, especially when magnetics, topology, isolation constraints, and board-level integration all have to line up at once.

The technical focus is clearly on low-power isolated DC/DC conversion, where the interplay between the controller IC and the transformer largely defines whether the design behaves properly. RECOM highlights very small ICs, compact SMD transformers, and board-level discrete solutions that can be tested directly in an application. This gives developers a way to evaluate isolated converter behavior, tune system requirements, and decide whether a modular converter, a semi-custom discrete stage, or individual discrete parts is the better fit for cost, layout, and product differentiation.

The main value proposition here is speed. RECOM says it can deliver a ready discrete solution within 20 days, which shifts the conversation from pure component sourcing to design acceleration and faster time to market. For embedded developers working on industrial, communications, automation, or edge electronics, that can be more important than squeezing out a marginal efficiency gain, because the real bottleneck is often engineering time, validation effort, and getting hardware into the field quickly. The video was filmed at Embedded World 2026 in Nuremberg, where this launch was presented as a bridge between RECOM’s established module business and a new discrete power strategy.

Overall, the story is about giving engineers more control without pushing all the risk back onto them. RECOM is using the know-how it built through years of DC/DC module design and exposing part of that expertise through matched IC-transformer pairs and pre-built discrete solutions. That turns isolated power from a slow, magnetics-heavy design exercise into something closer to a configurable platform, which is a notable shift for teams that need isolation, compact SMD implementation, and faster prototyping without abandoning the option of deeper customization later on.

source https://www.youtube.com/watch?v=f6SsrygbdEk

Renesas RH850/U2B at Embedded World 2026, Motor Control, FFT, Zonal Controller

Posted by – March 13, 2026
Category: Exclusive videos

Renesas is showing a very practical side of the RH850/U2B here: how an automotive MCU can tackle a noisy BLDC motor with visible torque ripple, vibration, and cogging, then smooth it out with a dedicated compensation algorithm. Instead of framing motor control as an abstract benchmark, this demo makes the effect easy to hear, feel, and measure through the FFT view and the before/after response of the system. https://www.renesas.com/en/products/rh850-u2b

The key technical point is hardware offload. In this setup, the compensation workload runs on the RH850/U2B embedded hardware accelerator rather than relying only on the main CPU cores, which cuts the control cycle time from roughly 15.4 microseconds to about 5 microseconds. That kind of latency reduction matters in inverter and motor-control loops because it improves response, reduces ripple, and helps push precision further at low speed where cogging effects are easy to notice.
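
In loop-rate terms, the quoted cycle times work out as follows; the two timings come from the demo, while the framing as loop-rate headroom is a rough reading:

```python
# What the quoted cycle times imply for maximum control-loop rate.
for name, t in (("CPU only", 15.4e-6), ("with accelerator", 5.0e-6)):
    print(f"{name:>17}: {1 / t / 1e3:6.1f} kHz max loop rate")
# ~65 kHz vs ~200 kHz: roughly 3x more headroom for the control loop.
```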

What makes the demo more relevant than a simple motor-control board is where Renesas positions the device. RH850/U2B is part of its cross-domain automotive MCU family, aimed at zonal controllers and unified ECU designs where motor control, safety, security, and real-time processing increasingly need to coexist on one device. The discussion around ASIL certification, EVITA Full capability, multi-core processing, and lockstep support places this clearly in the context of modern vehicle E/E architecture rather than a standalone industrial drive demo.

Filmed at Embedded World 2026 in Nuremberg, the demo is a good example of how Renesas is linking motor-control quality to broader automotive compute trends: hardware acceleration, deterministic timing, functional safety, cybersecurity, and domain integration. The result shown here is simple but meaningful: lower acoustic noise, lower vibration, faster execution, and a more efficient control path for EV, HEV, actuator, and zonal automotive applications.

source https://www.youtube.com/watch?v=7-LnA57KlGo

Yocto Project at Embedded World 2026: LTS, SBOM, BitBake, RISC-V, Embedded Linux

Posted by – March 13, 2026
Category: Exclusive videos

This conversation frames Yocto less as a single distro and more as the infrastructure layer many embedded Linux teams eventually need once products move beyond quick demos. The interview highlights why developers keep coming back to it: reproducible builds, minimal images, board bring-up, source mirroring, A/B update workflows, and a build system that only pulls in what the target actually needs. That matters for performance, maintenance, and attack surface, especially when long-lived devices are deployed in volume. https://www.yoctoproject.org/

A big theme here is maintainability over time. The speakers point to the next Yocto LTS cycle, with four years of support, as a practical answer for product teams facing long qualification windows and regulatory pressure. Security is presented in a very concrete way: SBOM generation, vulnerability scanning, CVE tracking, and the ability to rebuild images quickly when fixes land. That makes Yocto relevant not just for BSP work and image creation, but for Cyber Resilience Act readiness and ongoing fleet maintenance in the field.
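
In practice that SBOM output is machine-readable. A small sketch of consuming it, assuming the SPDX JSON documents Yocto can generate when create-spdx is inherited; the file name is hypothetical:

```python
# Listing image contents from a Yocto-generated SPDX SBOM (enabled with
# INHERIT += "create-spdx" in local.conf). The path here is hypothetical;
# the fields follow the SPDX JSON schema.
import json

with open("core-image-minimal.spdx.json") as f:
    doc = json.load(f)

# What actually went into the image, e.g. as input to a CVE scanner:
for pkg in doc.get("packages", []):
    print(pkg.get("name"), pkg.get("versionInfo", "?"))
```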

What also comes through is how much of Yocto’s value sits in BitBake and the surrounding workflow rather than in any single package set. The discussion around bitbake-setup, shared sstate cache, layer configuration, and reusable board support shows why experienced engineers see it as a build framework rather than just another embedded Linux option. First builds may take time, but incremental rebuilds, cache reuse across projects, and structured metadata make the system much more scalable once teams are juggling multiple products, branches, and hardware targets at once.

The interview also gives a useful view of Yocto’s hardware reach. ARM is treated as routine, cross-compilation is normal, and RISC-V now feels more strategic than experimental, with community layers, board support, and stronger testing infrastructure getting more attention. There is also an interesting hint that Yocto thinking may spread beyond classic embedded targets, especially through meta-virtualization, container image construction, multi-architecture builds, and ultra-small deployable runtimes where provenance and SBOM detail matter a lot.

Just as important, this is a story about community process. The speakers are candid about what works well and what still needs refinement, from mailing-list driven contribution flow to newer GitHub-style expectations, and from volunteer patch flow to paid maintainers, release management, and LTS coordination funded by members. Filmed at Embedded World 2026 in Nuremberg, the video ends up showing Yocto as a mature, open, vendor-neutral build ecosystem for embedded Linux, where security, reproducibility, board enablement, and long-term support are all tied together in one stack.

source https://www.youtube.com/watch?v=YPjoayYbosQ

Renesas RZ/V2H and RZ/V2N Robotics Demo, Gesture AI, Voice Control, ROS 2

Posted by – March 12, 2026
Category: Exclusive videos

Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup combines an off-the-shelf dexterous hand with an RZ/V2H board, where a camera tracks human hand gestures, runs local inference, and maps the result to motors and axes so the robot hand mirrors the operator in real time. It is a practical example of embedded vision, gesture recognition, motor control, and low-latency human-machine interaction coming together on one platform. https://www.renesas.com/en

What makes the RZ/V2H part interesting here is not just raw AI throughput, but the system balance behind it. Renesas positions it for robotics and vision AI with multicore processing, DRP-AI acceleration, image-processing capability, and support for multiple camera streams, which fits workloads such as hand tracking, perception fusion, and coordinated motion. In this context the demo is less about a robotic hand alone and more about how sensor input, inference, and actuator control can be collapsed into a compact edge robotics design.

The second demo shifts toward collaborative robotics and tool assistance. Here, a robotic arm based on the RZ/V2N platform accepts both voice commands and hand gestures, running in a ROS 2 architecture to identify a requested tool, move to the right position, and present it to the operator. That makes the story broader than vision AI: it becomes a multimodal interface problem involving speech, gesture, robot middleware, task flow, and safe human-robot collaboration on the edge.
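
Architecturally, the multimodal front end only has to hand a recognized intent to the robot stack. A minimal sketch of that handoff in ROS 2 terms, using the standard rclpy API with a hypothetical topic name and payload:

```python
# Publishing a recognized tool request into a ROS 2 graph with rclpy.
# The topic and payload are hypothetical; the arm-side task node would
# subscribe to /tool_request and plan the motion.
import rclpy
from std_msgs.msg import String

rclpy.init()
node = rclpy.create_node("tool_request_bridge")
pub = node.create_publisher(String, "/tool_request", 10)

msg = String()
msg.data = "screwdriver"   # output of the voice or gesture recognizer
pub.publish(msg)

node.destroy_node()
rclpy.shutdown()
```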

MXT’s role adds another useful layer, because this is not only a silicon story but also an ecosystem story. As a Renesas preferred partner, MXT has worked with Renesas across modules, evaluation kits, and custom boards, and the board shown here is described as a Raspberry Pi form factor design that can work with existing expansion hardware. That matters for faster prototyping, easier integration, and lower friction when developers want to move from proof of concept to a more product-like robotics platform.

Seen from Embedded World 2026 in Nuremberg, these demos reflect where industrial and service robotics are heading: more cameras, more AI models, more joints, more natural interfaces, and tighter integration between Linux, ROS 2, vision pipelines, and motor control. The most useful takeaway is not hype around humanoids, but the way Renesas is stacking practical building blocks for gesture-controlled manipulators, voice-driven cobots, and embedded robot perception where latency, power, and system cost still matter.

source https://www.youtube.com/watch?v=-9ba3hnz_ek

Renesas Robotics Sensor Tech at Embedded World 2026, Edge AI, Force Sensing, Predictive Maintenance

Posted by – March 12, 2026
Category: Exclusive videos

Renesas frames this demo around sensing as a core building block for edge AI, robotics, mobility, and industrial automation. The focus is not on one isolated component but on how force sensing, position sensing, impedance sensing, and low-footprint embedded intelligence can be combined into compact actuator and HMI designs that are precise, robust, and realistic to scale in production. https://www.renesas.com/IPS

The robotic hand is a good example of that direction. Instead of simple fingertip touch, the demo shows full-finger force measurement, so grip strength and the force curve over time can be tracked as the grasp develops. That matters for dexterous manipulation, safe human-robot interaction, and more natural motion control, where the system must regulate pressure finely enough to hold fragile objects without instability or slip.

A second theme is robotic joint feedback. Renesas positions inductive, magnet-free sensing as a practical fit for humanoid and industrial robot joints because it can deliver absolute position information, high resolution, immunity to stray magnetic fields, and better robustness against moisture, vibration, dust, and electromagnetic disturbance. That lines up with the company’s newer inductive position sensor push, including parts such as the RAA2P3226 for robotic joints, where compact integration, low latency, and tight angular accuracy are critical for servo control and coordinated motion.

The mobility demo extends that sensing approach into the human-machine interface. The scooter handle detects whether both hands are present using impedance sensing rather than conventional capacitive touch, which improves operation with gloves and in humid or wet conditions. Renesas is also emphasizing more complete reference algorithms around these sensors, so OEMs can tune sensitivity and recognition behavior in software without starting from scratch, which is often what product teams need when time-to-design is tight.

The final part of the video is about edge intelligence in a more literal sense: sensor data processed locally on a modest 32-bit microcontroller to infer things that are not directly measured, such as leakage, friction, or load change for predictive maintenance. That is a useful distinction in industrial sensing because it keeps latency, memory demand, power budget, and system cost under control while still enabling condition monitoring. Filmed at Embedded World 2026 in Nuremberg, the demo shows Renesas pushing sensors beyond raw measurement toward embedded perception for robotics, micromobility, and Industry 4.0.
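
One way to picture that kind of inference on a small MCU budget is a smoothed baseline plus a drift check, sketched below; the thresholds and data are illustrative, not the Renesas algorithm:

```python
# Illustrative drift detector: track an exponential moving average of a
# sensed quantity and flag deviation that could indicate leakage or
# rising friction. Not the Renesas algorithm; numbers are invented.
def make_drift_detector(alpha: float = 0.05, limit: float = 0.15):
    baseline = None
    def update(sample: float) -> bool:
        nonlocal baseline
        if baseline is None:
            baseline = sample
        baseline += alpha * (sample - baseline)  # cheap smoothing, O(1) memory
        return abs(sample - baseline) / abs(baseline) > limit
    return update

detect = make_drift_detector()
for force in (10.0, 10.1, 9.9, 10.0, 12.5):      # sudden friction jump
    if detect(force):
        print(f"drift flagged at {force} N")
```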

source https://www.youtube.com/watch?v=qjhmr43MScA

Lantronix Open-M 720G/520G drone AI compute, thermal imaging and Pixhawk integration

Posted by – March 12, 2026
Category: Exclusive videos

Lantronix is showing how a compact edge-AI compute module can turn a drone platform into something closer to an OEM-ready reference design than a simple demo. The focus here is the new Open-M 720G and 520G system-on-modules based on MediaTek Genio 720 and 520, aimed at getting UAV developers from evaluation to flight tests quickly with onboard vision, control and sensor integration in one low-power stack. https://www.lantronix.com/products/open-m-720g-520g-som-system-on-module/

What makes this interesting is not just the module itself, but the system architecture around it. In the demo, Lantronix ties the SOM into a FLIR thermal camera path and a Pixhawk flight controller, creating a practical platform for inspection, surveillance and infrastructure monitoring. That matters because drone makers often need a starting point that already solves camera I/O, flight-control interfacing and edge inference, so they can spend more time on mission logic, autonomy and payload design.

Technically, the Genio 720 and 520 class stands out for delivering up to 10 TOPS of AI performance in a very constrained power envelope. Lantronix positions the platform at roughly 4 to 10 watts for typical usage, which is a meaningful number in UAV design where propulsion already dominates the energy budget. The point is not raw benchmark leadership, but usable on-device AI without the thermal and battery penalties that come with moving to 20, 30 or 40 watt compute tiers. For drones, that tradeoff can decide whether a mission lasts close to an hour or drops toward the 20 to 30 minute range.
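
The endurance arithmetic behind that tradeoff is easy to sketch; all numbers below are illustrative assumptions, not Lantronix figures:

```python
# Rough endurance arithmetic for the compute power envelope. The battery
# and propulsion numbers are illustrative assumptions, not Lantronix data.
battery_wh = 90.0        # mid-size inspection drone pack
propulsion_w = 180.0     # hover power dominates the budget

for compute_w in (5.0, 30.0):
    minutes = battery_wh / (propulsion_w + compute_w) * 60
    print(f"{compute_w:4.0f} W compute -> {minutes:.0f} min endurance")
# The direct draw costs a few minutes; the larger real-world penalty is
# the extra battery or cooling mass a hotter compute tier tends to force.
```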

The 720G and 520G differ mainly in imaging capability rather than core AI class, with the 720G supporting more camera processing through a dual-ISP style configuration while the 520G fits simpler single-ISP designs. That makes the pair relevant for manufacturers building regional alternatives to DJI-style platforms, especially where thermal imaging, multi-camera sensing, operator-assisted autonomy and fleet workflows matter more than consumer drone features. Filmed at Embedded World 2026 in Nuremberg, this interview is really about edge compute efficiency, modular drone design and how low-power AI silicon is becoming a practical foundation for industrial UAVs.

source https://www.youtube.com/watch?v=BBdLp7FBkd4

Innocomm MediaTek Genio 360P Multi-Camera Edge AI, DMS and Gesture Recognition

Posted by – March 12, 2026
Category: Exclusive videos

Innocomm presents a practical edge AI vision platform built around the MediaTek Genio 360P and Genio 360, showing how a system integrator and module maker can turn a reference SoC into a deployable multi-camera product. The demo is less about a single benchmark and more about system balance: camera input, AI pipeline scheduling, thermal behavior, and a usable module strategy for OEM and embedded designs. https://www.innocomm.com/

What stands out in this setup is concurrent inference across four camera streams with six computer-vision workloads running on one device. The applications mentioned in the demo cover driver monitoring, face detection and face matching, pose estimation, fall detection for elderly-care scenarios, gesture recognition, object detection, and missing-item or left-behind-belonging detection. That makes the platform relevant for smart mobility, public-space analytics, safety systems, and AIoT endpoints where several perception tasks need to run in parallel rather than one at a time.

The technical story is also about resource management. On screen, the demo exposes frame rate, compute loading, and temperature while models are enabled or disabled, showing how performance can be redistributed dynamically across workloads. That matters in real deployments, because edge AI products live or die by sustained throughput, memory bandwidth, and thermal envelope, not just peak TOPS figures. Around the Genio 360 family, MediaTek is positioning a 6nm edge AI platform with a hexa-core CPU architecture and integrated NPU capability, while Innocomm extends that into modules and standard products that also span MediaTek Genio 720 and 520 options for broader design scaling.

Rather than presenting AI as a vague feature, this video shows a fairly concrete embedded vision stack: multi-camera input, real-time inference, modular hardware, and deployable use cases with clear commercial logic. Filmed at Embedded World 2026 in Nuremberg, it gives a good look at how MediaTek ecosystem partners such as Innocomm are packaging edge perception into evaluation kits and modules that can move from demo to product with relatively little architectural change.

source https://www.youtube.com/watch?v=Zt8BUChd38E

Linaro CoreCollective at Embedded World 2026, ONEBoot, AMI Meridian, Yocto, Arm firmware lifecycle

Posted by – March 12, 2026
Category: Exclusive videos

Linaro’s demo focuses on something that usually stays invisible until it breaks: firmware lifecycle management on Arm devices. The discussion here is about making BIOS and boot firmware less of a one-time “flash and forget” step and more of a maintained software layer, with repeatable build, test, verification, SBOM tracking, vulnerability management, and long-term updates for devices running either Linux or Windows on Arm. https://www.linaro.org/

A key point is the split between ACPI-based firmware for Windows on Arm and Device Tree based firmware for Linux, and how Linaro and AMI are trying to manage both from one workflow. The demo combines AMI Meridian, Aptio V UEFI, and Linaro ONEBoot on the same ADLINK OSM-IMX93 platform, showing how a single board can boot Windows 11 IoT or Yocto Linux while keeping the firmware path standardized, security-aware, and easier to maintain over time.

That matters because firmware sits below the operating system and carries higher privilege than user space or even the kernel. If the firmware layer is weak, OS hardening only goes so far. The interview makes that practical: CVE monitoring, SBOM generation, software supply chain visibility, and CRA-oriented compliance are no longer just enterprise server topics, but increasingly part of embedded and IoT product maintenance. This video was filmed at Embedded World 2026 in Nuremberg, where that regulatory angle is clearly shaping how vendors present embedded platforms.

The other thread in the video is Linaro’s broader services model around Arm software enablement. Beyond firmware, the booth also covers Yocto build analysis, license and IP compliance, upstream kernel support, virtualization with virtio, and practical pathways for keeping deployed products supportable in the field. The newly launched CoreCollective also comes up as a free-to-join industry forum backed by Arm, intended to gather OEMs, ODMs, silicon vendors, and software stakeholders around shared engineering problems rather than isolated one-off fixes.

The final section on training is also worth noting because it connects theory to real hardware. Linaro is rebuilding its training offering around firmware, TF-A, U-Boot, Linux kernel, and Yocto, with remote lab access through its automation appliance, serial console, remote power control, OTG boot, and camera-monitored boards. That makes the pitch broader than a firmware demo alone: standardized boot flows, upstream-first engineering, CRA readiness, and hands-on enablement for teams building Arm products that need to stay secure and maintainable after shipment.


source https://www.youtube.com/watch?v=aRIs9YZfkH0

Forlinx Edge AI on i.MX 95 and Ara240, RK3588 Multi-Camera Vision, Modular SoMs

Posted by – March 12, 2026
Category: Exclusive videos

Forlinx presents itself here as more than a module vendor. The interview is really about how an embedded hardware company is moving up the stack into edge AI integration, combining SoM design, carrier boards, manufacturing, software enablement, model conversion, and deployment support. The main message is that Forlinx wants to shorten the path from silicon vendor roadmap to a production-ready embedded AI platform, whether the target is industrial vision, smart gateways, robotics, or local multimodal inference. https://www.forlinx.net/

The headline demo pairs an NXP i.MX 95 platform with the Ara240 M.2 AI accelerator, creating a hybrid edge AI system that mixes the i.MX 95’s local vision, graphics, security and low-power processing with an external 40 eTOPS accelerator for larger models. In the discussion, that translates into local image understanding and natural-language analysis without relying on cloud inference, including a 7B-class LLM workflow and token generation around 20 tokens per second. That combination is interesting because it shows a practical split between on-chip NPU inference and a higher-throughput PCIe add-in path for generative AI at the edge.
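
The quoted 20 tokens per second passes a quick sanity check against memory bandwidth, since decoding a dense model is usually memory-bound; the quantization assumptions below are illustrative:

```python
# Bandwidth sanity check for ~20 tokens/s on a 7B-parameter dense model:
# each generated token streams roughly all weights once, so
# bandwidth ~ params * bytes_per_param * tokens_per_second.
params = 7e9
tok_per_s = 20

for bits, label in ((4, "int4"), (8, "int8")):
    gb_per_token = params * bits / 8 / 1e9
    print(f"{label}: ~{gb_per_token:.1f} GB/token -> "
          f"~{gb_per_token * tok_per_s:.0f} GB/s sustained")
```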

A second thread in the video is platform scaling. Forlinx talks about using the i.MX 95’s own NPU for front-end recognition and then handing richer tasks to the accelerator, while also pointing to multi-card configurations for larger parameter counts. That makes the story less about one benchmark and more about architecture: modular edge AI, where compute can be right-sized from compact fanless designs up to multi-accelerator systems, depending on camera count, model size, latency target, and power budget.

The Rockchip side of the booth broadens that picture. RK3588 appears as a mature edge vision platform handling multi-camera workloads, PoE-connected inference pipelines, stitching, and video-centric AI optimization across encode, decode, and NPU execution. There is also a smaller RV1126B face-tracking demo showing how low-power Cortex-A53 class systems with an integrated NPU can still deliver responsive, fanless vision tasks. What stands out is not just chip support, but the engineering work behind BSP tuning, driver maturity, model adaptation, and layer-level optimization for real deployments.

Later in the video, the discussion shifts to pin-compatible module design, ODM work, early access to new SoCs, Linux support, and close collaboration with NXP, Rockchip, TI and Allwinner. That makes this less of a product showcase and more of a view into how embedded AI is being industrialized: standardized compute building blocks, faster bring-up, tighter software-hardware co-design, and a clearer route from demo to mass production. The video was filmed at Embedded World 2026 in Nuremberg, where Forlinx framed edge AI as a system integration problem as much as a silicon one.


source https://www.youtube.com/watch?v=W6M4m0LBciw

Renesas 365 Launched at Embedded World 2026: MCU selection, BSP scaffolding, fleet management

Posted by – March 12, 2026
Category: Exclusive videos

Renesas 365 is presented here as a cloud-native engineering platform that tries to connect system architecture, embedded software, PCB design, and operational lifecycle management inside one continuous workflow. The core idea is not just collaboration in a browser, but persistent digital context: design intent, interface requirements, device choices, and implementation details stay linked instead of being scattered across diagrams, spreadsheets, datasheets, and isolated toolchains. That makes the discussion less about a single MCU and more about how a smart connected product is specified, built, updated, and maintained across its full life cycle. https://www.renesas.com/renesas365

The balancing-robot demo makes that concept concrete. Renesas shows how a product can begin as a system-level model, where interfaces between controller, sensors, connectivity, and peripherals become machine-readable constraints rather than static drawing objects. In the demo, Electronic System Design captures those constraints and feeds them into RA Explorer, which evaluates the RA MCU family at scale, including peripheral allocation, channel mapping, and pin multiplexing. Instead of manually checking hundreds of parts and reconciling conflicts one by one, the platform narrows the candidate list in seconds and regenerates a valid configuration when requirements change, such as adding CAN.

What stands out technically is the handoff from system model to software scaffolding. Once the device configuration is resolved, Renesas 365 can generate the basis of a board support package and assemble the low-level driver stack around the selected peripherals, including connectivity layers such as Wi-Fi. That is the real productivity claim here: not only component discovery, but carrying configuration intent downstream into embedded implementation. For MCU teams dealing with pinmux limits, package variants, and software-stack assembly, that removes a large amount of repetitive engineering work and shifts attention toward architecture, trade-off analysis, and application behavior at the edge.
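
The narrowing step itself is easy to caricature in code: treat peripheral needs as constraints and filter the catalog. The toy entries below are invented; RA Explorer works against Renesas' real device models:

```python
# Toy version of constraint-driven part narrowing. The catalog entries are
# invented; the real RA Explorer evaluates actual device models, pinmux
# conflicts and package variants, not just peripheral counts.
REQUIREMENTS = {"can": 1, "uart": 2, "adc": 4}

CATALOG = [
    {"part": "MCU-A", "can": 0, "uart": 3, "adc": 8},
    {"part": "MCU-B", "can": 1, "uart": 2, "adc": 4},
    {"part": "MCU-C", "can": 2, "uart": 4, "adc": 12},
]

candidates = [
    d["part"] for d in CATALOG
    if all(d.get(p, 0) >= n for p, n in REQUIREMENTS.items())
]
print(candidates)  # adding CAN as a requirement drops MCU-A automatically
```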

The wider roadmap matters just as much as the live demo. Renesas has been positioning Renesas 365, powered by Altium, as a full electronics-system platform spanning silicon, discovery, development, lifecycle, and software, with broader lifecycle services around digital traceability, secure OTA/OTAA infrastructure, and fleet-oriented management. In the interview, that future direction also extends toward behavioral modeling, power and memory budgeting, AI-assisted code generation, debugging, and API-level access for external tools and autonomous agents. Filmed at Embedded World 2026 in Nuremberg, the conversation frames the launch as part of a larger shift from isolated EDA and firmware workflows toward a more platform-based electronics-development stack.

Another important point is openness. Renesas is clearly strongest when modeling its own silicon, but the demo also shows third-party components in the design flow, and the company describes a roadmap where partners can publish hardware, software, and subsystem models into the environment. That makes Renesas 365 less about locking engineers into a single vendor bill of materials and more about giving mixed-vendor embedded teams a shared design surface with traceable context. For anyone building software-defined industrial or IoT products, the interesting question is not whether this replaces every existing tool on day one, but how far it can reduce manual integration friction between architecture, firmware, board design, update infrastructure, and fleet operation at scale.

source https://www.youtube.com/watch?v=62XKBA4x7ts

Looking Glass musubi holographic photo frame converts photos & videos to HLD holograms Kickstarter

Posted by – March 11, 2026
Category: Exclusive videos

musubi is a new holographic photo and video frame developed by Looking Glass that converts ordinary photos and short video clips into holograms with visible depth. The device is designed as a simple consumer product that works with media people already have, including photos from phones or older scanned pictures. Conversion happens locally through a desktop application that reconstructs depth using machine learning and loads the hologram directly onto the frame. It requires no cloud connection or subscription, and all media is stored locally on the device. https://look.glass/musubi

The idea behind musubi is to make holographic displays practical for everyday use at home. Many people store thousands of photos and videos that are rarely revisited once they disappear into phone galleries or cloud folders. By transforming those flat images into holographic scenes with depth, the frame attempts to recreate moments with more spatial presence than traditional digital photo frames. Weddings, family memories, pets, and travel clips can be converted into short holographic scenes that play directly on the display.

The workflow is intentionally simple. Users connect the frame to a Mac or PC using USB-C, select photos or short video clips up to thirty seconds long, and run the conversion tool in the Looking Glass desktop software. The application generates a 3D scene from the original media and loads it into the device storage. Each frame can hold around one thousand holograms and includes a built-in speaker for video playback, allowing clips to run with sound.

The hardware includes a 7-inch Hololuminescent Display with roughly two inches of perceived depth. The frame has an internal rechargeable battery rated for about three hours of operation or can run continuously when powered through USB-C. All playback works offline once the media has been converted and loaded. The device includes simple controls for power, volume, and switching between stored holograms.

For creators and developers there are additional tools available beyond the standard workflow, including support for Gaussian splat imports as well as plugins for Unity, Unreal Engine, and Blender. Motion graphics templates for Adobe Premiere Pro and After Effects can also generate compatible holographic content. This demonstration was filmed at Embedded World 2026 in Nuremberg where Looking Glass presented musubi as a smaller consumer counterpart to its larger holographic displays used in developer and enterprise environments.


source https://www.youtube.com/watch?v=3_ZKcVEi5Yk

Siemens industrial AI hub Booth Tour at SPS 2025 digital twin, copilots and agentic robots

Posted by – March 2, 2026
Category: Exclusive videos

Siemens uses this booth tour to show how its industrial AI strategy connects automation hardware, engineering software and domain-specific copilots into one digital enterprise stack. From the central Industrial AI Hub, Tsvetelina Nikolova explains how manufacturers can merge real-world production assets with a comprehensive digital twin, then run “what-if” scenarios across the entire lifecycle to optimize design, throughput and energy use. The focus is on leveraging Siemens Xcelerator, Industrial Operations X and Industrial Edge to turn heterogeneous shop-floor data into a consistent, AI-ready data fabric that spans OT and IT. https://www.siemens.com/global/en/products/automation/topic-areas/industrial-ai.html

On the design and engineering side, the tour highlights generative AI embedded directly into NX and the new family of Industrial Copilots. Here, engineers can ask natural-language questions about CAD models, get design variants for components like TV wall mounts, or have NX CAM Copilot propose optimized toolpaths for complex parts. The Engineering Copilot TIA, tightly integrated with TIA Portal, lets automation engineers describe intents instead of writing or searching through PLC code, automating configuration tasks and documentation across projects. This reduces repetitive work, accelerates commissioning and makes it easier for new engineers to contribute quickly to established control architectures, improving productivity across the engineering workflow.

In operations, the video zooms in on Insights Hub, Siemens’ industrial IoT platform that aggregates sensor, PLC and MES data and exposes it through dashboards and a built-in copilot. Operators can use conversational queries to check stock levels in the MES, configure machines for short product runs with multiple variants, and orchestrate workflows textually rather than through custom scripts. The same data backbone feeds asset intelligence and predictive maintenance, illustrated by a BlueScope steel case where a digital twin “fingerprint” of critical assets is compared continuously with live data to detect deviations and trigger proactive interventions, avoiding roughly 2,000 hours of unplanned downtime. Together, these examples show how industrial AI copilots move from nice-to-have dashboards to closed-loop decision support that protects throughput and uptime.

The second half of the tour steps into Siemens’ “future” zone, where agentic AI and autonomous production concepts are on display. A robot cell is configured as an example of how autonomous agents could handle configuration, scheduling and execution of tasks, while an orchestrator agent coordinates specialized agents for planning, quality, logistics and energy optimization. Rather than replacing humans, Siemens positions these agentic systems as collaborators that take over low-level reconfiguration work so engineers and operators can focus on high-value problem solving, governance and safety. This aligns with Siemens’ broader push toward industrial foundation models and AI agents that can reason over engineering data, shop-floor events and business constraints across the wider industry.

Filmed in the Siemens hall at SPS 2025 in Nuremberg, the video also touches on how the company extends this experience beyond the physical stand through live talks and a persistent virtual booth. Nikolova stresses that AI-driven factories are still built around human decision-makers, with copilots and agents acting as transparent, explainable tools rather than opaque black boxes. For younger engineers, that means fewer hours on translation, documentation and repetitive configuration, and more time on creative tasks like new machine concepts or process improvements. The result is a glimpse of how industrial AI, digital twins and autonomous agents may reshape factory work over the coming years, while keeping human expertise firmly at the center.

source https://www.youtube.com/watch?v=FpkAXAdHaEI