Advantech Robotics Demo: RK3588 AMR controller, Intel Core Ultra, GMSL vision

Posted by – March 21, 2026
Category: Exclusive videos

Advantech is showing how an AMR compute stack comes together around multi-camera perception, edge AI and integration support rather than just raw processor specs. The demo centers on the AFRS-761, a Rockchip RK3588-based controller connected to four GMSL cameras, building a 360-degree view for person detection, obstacle awareness and autonomous navigation in warehouse or factory robots. The point is clear: perception is the front end of robot intelligence, and the computer has to ingest, synchronize and process several camera streams in real time. https://www.advantech.com/
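The hard part that paragraph hints at is keeping four independent camera clocks aligned. As a rough, hypothetical sketch of that synchronization step (this is not Advantech's actual pipeline; the camera names, timestamps and tolerance below are invented for illustration):

```python
from bisect import bisect_left

def nearest_frame(timestamps, t):
    """Return the timestamp in a sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda ts: abs(ts - t))

def align_cameras(streams, tolerance_ms=5.0):
    """Group frames from several cameras into synchronized sets.

    streams: dict of camera_id -> sorted list of frame timestamps (ms).
    The first camera acts as the reference clock; a set is kept only
    if every camera has a frame within tolerance_ms of the reference.
    """
    ref_id, *other_ids = streams.keys()
    groups = []
    for t in streams[ref_id]:
        group = {ref_id: t}
        for cam in other_ids:
            match = nearest_frame(streams[cam], t)
            if abs(match - t) <= tolerance_ms:
                group[cam] = match
        if len(group) == len(streams):
            groups.append(group)
    return groups

# Four cameras at ~30 fps with slightly skewed clocks (timestamps in ms)
streams = {
    "cam0": [0, 33, 66],
    "cam1": [1, 34, 70],
    "cam2": [2, 33, 66],
    "cam3": [0, 40, 66],
}
groups = align_cameras(streams)  # the t=33 set is dropped: cam3 drifted too far
```

One reason GMSL helps here in practice is that the serializer/deserializer pair can carry a shared trigger or clock alongside the video, so alignment like this happens in hardware rather than in software after the fact.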

What makes this interesting is the emphasis on practical sensor integration. In the demo, GMSL is presented as a preferred interface for current robotics deployments because it simplifies multi-camera wiring across mobile platforms, while Advantech also supports other camera options including MIPI-CSI. That matters for AMRs, forklifts and mobile service robots where reliability, cabling distance, ruggedness and low-latency video all affect how safely a machine can move through a busy environment.

The broader message is that robotics perception is no longer a single-board story. Advantech positions the compute module as the robot brain, the cameras as the eyes, and the motion stack as the actuators behind wheels, arms or other mechanisms. Alongside the Arm platform, the company also highlights an Intel Core Ultra based AMR controller, showing that robotics developers increasingly want a choice of CPU and AI architectures depending on power budget, software stack and workload mix, from object detection to depth processing and scene understanding.

Software is a big part of the pitch as well. Advantech’s robotics approach combines hardware with integration work, driver support, ROS 2 oriented tooling and partner software for fleet management, navigation and deployment. In this setup, Node Robotics provides the higher-level AMR software visible in the demo, while Advantech focuses on making sensor and compute combinations easier to bring into real projects. That is often the hard part in robotics: not proving a concept once, but making perception pipelines stable enough for deployment.

Filmed at Embedded World 2026 in Nuremberg, this interview gives a useful snapshot of where industrial robotics is heading: closer coupling between cameras and edge compute, more multi-sensor perception at the vehicle level, and more modular ecosystems for AMRs, AGVs and warehouse automation. The small robot on the booth is the simple visual example, but the real topic is scalable perception architecture for robots that need to see people, avoid obstacles and keep moving reliably in dynamic spaces.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=V0vkzAmaiSA

Würth Elektronik at Embedded World 2026 Power Modules, Wireless Power, AEC-Q200 Inductors and more

Posted by – March 21, 2026
Category: Exclusive videos

Würth Elektronik gives a broad view of how a passive-component supplier moves up the stack into practical power design. The interview centers on compact DC/DC power modules that integrate the inductor, capacitors and key support circuitry, so engineers can build a regulated supply with minimal external parts and much less layout effort. That makes the story less about single components and more about power architecture, EMI behavior, thermal paths and time-to-design in embedded hardware. https://www.we-online.com/
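One way to see why integrating the inductor matters: the module vendor fixes the switching frequency and inductance internally, which pins down the ripple current the external capacitors must absorb. A back-of-the-envelope sketch using the standard ideal-buck ripple formula (the numeric values are illustrative, not Würth Elektronik specifications):

```python
def buck_inductor_ripple(v_in, v_out, f_sw, inductance):
    """Peak-to-peak inductor ripple current for an ideal buck converter.

    delta_I = V_out * (1 - V_out / V_in) / (f_sw * L)

    A power module fixes L and f_sw internally, so the remaining
    external design work largely collapses to input/output capacitor
    selection and layout of the high-current loop.
    """
    duty = v_out / v_in
    return v_out * (1 - duty) / (f_sw * inductance)

# Example: 12 V in, 5 V out, 1 MHz switching, 4.7 uH integrated inductor
ripple = buck_inductor_ripple(12.0, 5.0, 1e6, 4.7e-6)  # ~0.62 A peak-to-peak
```

The same equation also explains the later point about molded, shielded inductors: the ripple current circulating in that loop is what radiates, so containing its magnetic field is what cleans up the EMI picture.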

The most interesting angle is how the portfolio connects discrete magnetics, capacitors, quartz crystals and oscillators with module-level building blocks such as the MagI3C family. In real designs, that means one vendor can cover timing, filtering, galvanic isolation and point-of-load conversion across industrial and compute boards. The video also touches on wireless power, where Würth Elektronik’s coil and transformer know-how feeds transmitter and receiver designs similar to split-transformer architectures used in inductive charging, from consumer devices up to higher-power transfer.

Automotive qualification is another key theme. The company highlights parts built for stricter reliability targets, including AEC-Q200 qualified components for harsher electrical and thermal environments. That matters in body electronics, infotainment, motor control and power conversion, where low loss, stable magnetic behavior and controlled EMC can matter as much as raw current rating. The discussion around molded inductors is especially relevant here, because shielded constructions help reduce stray magnetic fields and support cleaner high-efficiency converter layouts.

Seen in the context of Embedded World 2026 in Nuremberg, the demo is really about breadth: passive components, power modules, optoelectronics, LED control, wireless power and application examples with partners such as STMicroelectronics and Analog Devices. The closing focus on efficiency and thermal management is the right one, because embedded systems now span everything from nanoamp energy-harvesting nodes to high-current rails for GaN-based power stages, and both ends of that range depend on better magnetics, lower losses and tighter integration today.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=wBCMA45Vd3I

Advantech Multi-OS on Arm: Ubuntu Pro on NXP i.MX 8M, Qualcomm YOLOv8 Edge AI

Posted by – March 20, 2026
Category: Exclusive videos

Advantech says Ubuntu Pro for Devices brings 10 years of LTS support, expanded security maintenance, and management tooling, while its Qualcomm QCS6490-based AOM-2721 platform supports Yocto, Windows, and Ubuntu across edge AI scenarios.

Advantech is showing how software support can be just as important as silicon in modern Arm-based edge systems. One part of the demo focuses on Ubuntu Pro running on NXP i.MX 8M, aimed at developers who want a more complete Linux environment for industrial IoT, gateways, robotics and embedded AI without spending time rebuilding drivers, kernel support and interface validation from scratch. The point is not just that Ubuntu boots on Arm, but that the platform is prepared for deployment with long-term maintenance, security updates and a usable BSP from day one. https://www.advantech.com/

The discussion also highlights why this matters for real products. On embedded platforms, OS readiness, driver coverage, graphics support, wireless connectivity and patch management often decide how quickly a team can move from evaluation to shipping hardware. Here the value proposition is a development-ready stack around NXP and Canonical, where Ubuntu Pro adds 10-year lifecycle support, expanded CVE maintenance and large-scale device management options that fit industrial environments better than a minimal custom Linux image.

The second demo moves to Qualcomm and a more explicitly AI-focused workflow. Advantech shows a small OSM-based edge platform running live object detection with YOLOv8, using the SoC’s heterogeneous compute resources rather than pushing everything onto the CPU. That is the real multi-OS story in this video: Yocto, Ubuntu and Windows support on Arm platforms where CPU, GPU and dedicated AI acceleration can be balanced depending on latency, power budget, camera pipeline and application needs.

What makes the conversation interesting is the practical emphasis on optimization. The interview keeps coming back to a familiar edge AI issue: strong hardware alone does not guarantee good results if the software stack is not tuned to the accelerator, memory bandwidth and available drivers. Advantech positions itself as the layer between silicon vendors and product teams, helping customers understand whether a workload belongs on CPU, GPU or NPU, and what software dependencies come with that decision.
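That CPU/GPU/NPU placement question can be reduced to a toy decision rule: profile the model on each engine, then pick the cheapest engine that still meets the budgets. The numbers below are invented for illustration; in practice they come from profiling the workload on the target SoC, and this is a sketch of the reasoning, not Advantech's tooling:

```python
# Hypothetical per-accelerator characteristics for one model on one SoC.
# Real figures come from on-target profiling, not from a datasheet.
ACCELERATORS = {
    "cpu": {"latency_ms": 85.0, "power_w": 6.0},
    "gpu": {"latency_ms": 22.0, "power_w": 9.0},
    "npu": {"latency_ms": 9.0,  "power_w": 2.5},
}

def place_workload(latency_budget_ms, power_budget_w, accelerators=ACCELERATORS):
    """Pick the lowest-power accelerator that meets both budgets."""
    feasible = {
        name: spec for name, spec in accelerators.items()
        if spec["latency_ms"] <= latency_budget_ms
        and spec["power_w"] <= power_budget_w
    }
    if not feasible:
        return None  # the workload must be split, quantized, or redesigned
    return min(feasible, key=lambda n: feasible[n]["power_w"])

# A 30 ms camera-pipeline budget inside a 5 W envelope lands on the NPU
choice = place_workload(latency_budget_ms=30.0, power_budget_w=5.0)
```

The interesting failure mode is the `None` branch: when no single engine fits, the software dependencies the interview mentions (quantization toolchains, delegate drivers, memory layout) are exactly what decides whether the workload can be reshaped to fit.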

This makes the video less about one benchmark and more about reducing engineering friction in embedded AI. The combination of long-term OS support on NXP, multi-OS enablement on Qualcomm, containerized AI workflows and board-level software integration reflects where many Arm deployments are heading now. Filmed at Embedded World 2026 in Nuremberg, it captures a shift from raw edge AI hardware announcements toward the harder question of how to make these platforms maintainable, secure and actually usable in production.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=FWlD2-CJulU

Octavo Systems SiP with AM62, STM32MP2, ROS 2, Edge AI, ADAS and Guitar Audio DSP

Posted by – March 20, 2026
Category: Exclusive videos

Octavo Systems focuses on System-in-Package design: taking a microprocessor, DDR memory, power management and key passives, then collapsing them into a molded BGA that removes much of the hardest board-level integration. In this interview, that idea is shown not as an abstract packaging story but as a practical way to reduce layout risk, shrink PCB area, and accelerate bring-up for embedded Linux, edge AI, industrial control and audio products. https://octavosystems.com/

The most memorable demo is a Chaos Audio multi-effects guitar pedal, where the SiP handles the real-time audio DSP while a phone or tablet acts mainly as the control surface over Bluetooth. The point is not just miniaturization; it is deterministic local processing with effectively no audible latency, which is exactly what musicians need when switching tones, stacking effects, or building a digital pedalboard that still feels immediate under the fingers.
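"Effectively no audible latency" ultimately comes down to buffer-size arithmetic. A minimal sketch, assuming hypothetical block sizes rather than anything Chaos Audio has published:

```python
def roundtrip_latency_ms(buffer_frames, sample_rate_hz, stages=2):
    """Worst-case block-processing latency for a real-time audio path:
    one buffer of delay per stage (e.g. input DMA + output DMA)."""
    return 1000.0 * buffer_frames * stages / sample_rate_hz

# 64-frame buffers at 48 kHz, one buffer in and one out: ~2.7 ms
latency = roundtrip_latency_ms(64, 48_000)
```

A few milliseconds is comfortably below the roughly 10 ms threshold often cited as perceptible to players, which is why the DSP stays on the SiP and the phone is relegated to control duties over Bluetooth, where latency would be far worse.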

From there the discussion moves to Octavo’s newer processor-module direction, including the TI AM62-based OSD62PM and the STM32MP2-based OSD32MP2-PM reference platform. The pitch is very specific: processor plus DDR4 in a package roughly the size of the DRAM footprint, with major savings in area and routing complexity compared with a discrete MPU-and-memory design. Camera interfaces, DSI, LVDS, PCIe, Ethernet and built-in AI capability make these parts relevant for HMI, vision, smart gateways, robotics and compact edge compute gear.

What makes the booth tour useful is the range of deployed examples. Octavo shows SiP designs inside ROS 2 robotics modules, a retail people counter, a programmable smart torque drill for manufacturing, a compact SOM, an AMD Zynq UltraScale+ MPSoC platform for ADAS-style video inference, and an industrial automation controller with RS485, CAN and cloud connectivity. That broad spread makes the technology easier to understand: SiP is not a single market play, but a packaging and productization strategy that fits many embedded workloads.

A recurring theme is that SiP is not only about size. Octavo argues that pre-validating the processor-to-memory subsystem removes non-differentiating engineering work, reduces design spins, and in some cases can even compete on BOM cost when compared with sourcing the processor and DRAM separately. Filmed at Embedded World 2026 in Nuremberg, this is a grounded look at how integration, thermals, Linux-class processing and edge AI are being pushed into much smaller hardware footprints without turning every product into a custom high-risk board design.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=C9RZC3o2RHE

HS Devices Atronx microQ7 COM, Octavo SiP, Linux web terminal and modular I/O

Posted by – March 20, 2026
Category: Exclusive videos

HS Devices presents Atronx as a compact embedded development platform built around a microQ7 computer-on-module and an Octavo System-in-Package approach, aimed at teams that need a ready hardware base for custom products rather than a finished end device. The pitch is clear: shorten hardware bring-up, keep software portable, and give engineers a practical Linux-based platform they can adapt for industrial control, monitoring, and edge-connected systems. https://www.hsdevices.com/

What stands out in the demo is the browser-based terminal and device management layer. Instead of treating the module as a black box, HS Devices shows telemetry, module details, CPU load, memory usage, temperature, and direct command-line access in one web interface. That makes the board feel less like a static eval kit and more like a remotely manageable embedded node, which is useful for prototyping field devices, service access, scripted deployment, and debugging across distributed installations.

The hardware story is about modularity and reuse. Atronx is positioned so developers can keep the same carrier or development board and swap compute modules depending on the target: more multithreaded Linux performance, lower power operation, or stronger real-time behavior. That kind of separation between carrier design and compute module is valuable in embedded product design because it reduces redesign cycles, preserves I/O investment, and lets teams move faster when requirements change late in development.

There is also an interesting small-company angle here. HS Devices is a startup from Niš, Serbia, focused on PCB design, circuit design, and embedded hardware engineering, and this product reflects that mindset: practical board-level integration, standardized software foundations, and attention to communication interfaces. Filmed at Embedded World 2026 in Nuremberg, the interview shows a company trying to turn board design expertise into a flexible embedded platform that engineers can actually build on, not just evaluate.

source https://www.youtube.com/watch?v=SIz9Ln5vz68

RISC-V CEO Interview on ISO Standardization, AI Matrix Extensions, Automotive and Edge Compute

Posted by – March 20, 2026
Category: Exclusive videos

RISC-V CEO Andrea Gallo presents here not as a single chip vendor story, but as the governance layer behind an open instruction set architecture that lets semiconductor firms, IP providers, tool vendors and device makers build to the same ISA while keeping their own implementation details proprietary. The interview explains why that distinction matters: RISC-V is an open standard, not an open-source core, so the shared asset is the specification itself and the portability it enables across software stacks, supply chains and long product cycles. https://riscv.org/

A central theme is standardization. Andrea Gallo frames the 2025 milestone of RISC-V International becoming an ISO/IEC JTC 1 PAS submitter as more than a badge: it is a path toward formal international recognition for the ISA, which can matter in procurement, compliance, functional safety and regulated industrial design. That is especially relevant as RISC-V moves further from microcontrollers into application processors, automotive platforms, security architectures and compute infrastructure where interoperability and long-term governance carry real weight.

The conversation also gets into how technical consensus is built without turning the ISA into bloat. New extensions are expected to solve real multi-company problems, not one-off requests, and that discipline is what keeps the architecture coherent while still expanding into vectors, matrix processing and AI-friendly data handling. The software point is critical: vendors may differentiate in silicon, microarchitecture and performance, but developers still want one PyTorch, one TensorFlow backend, one toolchain target and a stable compliance model rather than fragmented ports.

Another useful insight is how RISC-V is organizing itself around both horizontal technologies and industry verticals. Alongside the core technical groups, the ecosystem is pulling in requirements from automotive, safety, data center, space, intelligent edge and robotics so that recommendations can map real workloads to the right ISA profiles, extensions and software expectations. That makes the story less about ideology and more about practical system design: where the standard should stop, where vendors should compete, and how to keep portability from compiler to firmware to OS and AI runtime.

What comes through most clearly is that RISC-V is no longer just a university-origin ISA associated with embedded experimentation. It is becoming a neutral coordination point for global compute development, backed by formal process, public technical review and a growing base of engineers, researchers and students who are treating the architecture as production infrastructure. Filmed at Embedded World 2026 in Nuremberg, this interview captures that transition well: from open ISA theory to the harder work of profiles, extensions, safety, matrix acceleration, ecosystem alignment and real deployment at scale.

source https://www.youtube.com/watch?v=4IoVgheSB2o

Espressif ESP32-P4 Edge AI Robot Arm, ESP32-H4 LE Audio, ESP32-E22 Wi-Fi 6E

Posted by – March 20, 2026
Category: Exclusive videos

Espressif’s booth video is really about how far the ESP32 family has moved beyond basic IoT nodes into embedded vision, motion control, touch UI, wireless audio, and higher-bandwidth connectivity. The main demo centers on an ESP32-P4 robotic arm using on-device computer vision to detect colored blocks and trigger pick-and-place motion, which is a good fit for the P4’s dual-core RISC-V architecture, AI instruction extensions, MIPI camera/display support, hardware pixel processing, and H.264-capable multimedia pipeline. https://www.espressif.com/

What makes the robotic arm section interesting is that it combines local inference with networked control instead of treating edge AI and cloud AI as opposites. In the demo, OpenCV-style vision runs directly on the chip for offline detection, while wireless connectivity is used for function-call style interaction and remote control. That fits Espressif’s broader direction for the P4 platform: richer HMI, camera-based edge computing, and low-cost embedded systems that can still expose modern interfaces and automation logic. The handheld controller also points to ESP-NOW as a practical low-latency device-to-device control layer for responsive robotics and peripherals.

The middle part of the video broadens that story with touch and audio demos rather than staying narrowly focused on robotics. The piano example shows how Espressif is positioning capacitive touch as a stable UI input method for compact devices, while the small talking character demo shifts attention to voice interaction, directional audio capture, and sensor-driven movement. That combination matters because Espressif is increasingly covering the full edge stack: sensing, local processing, audio I/O, display control, and wireless backhaul, all in platforms that stay closer to MCU economics than full application-processor designs.

Another useful part of the booth tour is the segmentation across chips. The ESP32-H4 appears in the BLE audio and touch-control demos, which lines up with its role as a low-power dual-core RISC-V SoC for Bluetooth 5.4 LE, IEEE 802.15.4, LE Audio, PAwR, direction finding, and battery-powered devices with an integrated DC-DC converter. The sensor shuttle concept then shows Espressif’s modular approach to quick prototyping, where IMU, magnetic, environmental, display, lighting, microphone, speaker, and battery functions can be mixed around a compact controller rather than rebuilt for each proof of concept.

Filmed at Embedded World 2026 in Nuremberg, the last stretch of the video gives a glimpse of where Espressif is expanding next: not just low-power 2.4 GHz IoT, but also stronger wireless transport. The ESP32-C5, which reached mass production in 2025, brings dual-band Wi-Fi 6 plus Bluetooth LE and 802.15.4, while the newer ESP32-E22 adds tri-band Wi-Fi 6E as a connectivity co-processor across 2.4, 5, and 6 GHz. Put together, the booth is less about a single hero demo and more about Espressif building a ladder from simple sensors to edge AI vision, LE Audio, robotics, and higher-throughput connected devices.

source https://www.youtube.com/watch?v=21dSHwdn7pQ

Espressif Booth Tour at Embedded World 2026 ESP32-P4 HMI, ESP32-C6 Low Power, ESP32-E22 Wi-Fi 6E

Posted by – March 19, 2026
Category: Exclusive videos

Espressif uses this demo to show how far its MCU roadmap has moved beyond classic sensor nodes and simple connectivity. The centerpiece is the ESP32-P4, a dual-core RISC-V MCU aimed at richer HMI, multimedia and lightweight edge vision, paired here with a Riverdi 12.1-inch 1280×800 high-brightness industrial touch display. What stands out is not raw headline performance alone, but the fact that this class of GUI can run in an MCU environment with ESP-IDF and LVGL rather than requiring a heavier application processor. https://www.espressif.com/en/products/socs/esp32-p4

The discussion makes clear that Espressif is positioning the P4 as a serious display and interface device: MIPI support, camera input, vector instructions, pixel-processing acceleration, and a software stack that stays accessible to embedded developers. That creates an interesting middle ground between traditional microcontrollers and Linux-class SoCs. For product teams building control panels, industrial terminals, smart appliances, medical interfaces or compact vision-enabled devices, that balance of cost, power envelope and graphics capability is likely the real point of interest.

Another theme is software portability and ecosystem depth. The demo moves between ESP-IDF, LVGL, Embedded Wizard and Slint, while also touching on Rust support and open-source inference examples. Espressif’s approach remains closely tied to accessible tooling, broad community adoption and low barrier to entry, which is one reason the ESP32 family continues to show up in both commercial products and fast prototyping. The partner angle with Riverdi also matters, because industrial display vendors can turn a reference platform into something closer to a deployable subassembly.

Power management is the other major thread. The ESP32-C6 demo highlights Espressif’s split between high-power and low-power cores, showing how software design affects current draw far more than many teams initially expect. That is especially relevant for battery devices, wireless panels and always-on IoT endpoints. Filmed at Embedded World 2026 in Nuremberg, the booth tour also gives a useful snapshot of how Espressif now spans makers, industrial users and HMI developers rather than sitting in only one of those camps.
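The point about software design dominating current draw can be made concrete with the usual duty-cycle arithmetic. The figures below are illustrative, not ESP32-C6 datasheet values:

```python
def average_current_ua(active_ma, active_ms, sleep_ua, period_ms):
    """Duty-cycled average current: one active burst per period,
    low-power sleep for the remainder."""
    sleep_ms = period_ms - active_ms
    return (active_ma * 1000.0 * active_ms + sleep_ua * sleep_ms) / period_ms

# Wake for 50 ms at 80 mA every 10 s; sleep at 15 uA otherwise
avg = average_current_ua(80.0, 50.0, 15.0, 10_000.0)   # ~415 uA average
battery_hours = 500.0 * 1000.0 / avg                   # 500 mAh cell, in hours
```

The lesson hiding in the formula is that the active term dominates: halving the wake window roughly halves the average current, while shaving a few microamps off sleep barely moves it. That is exactly the software-shapes-power argument the demo makes.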

The wider portfolio shown at the booth reinforces that trajectory. Alongside P4-based HMI and camera demos, Espressif points to C-series RISC-V parts tailored for different wireless and memory requirements, plus the newly announced ESP32-E22 as a tri-band Wi-Fi 6E connectivity co-processor covering 2.4, 5 and 6 GHz. Put together, the story here is about modular architecture: compute where you need it, radio where you need it, and a path from compact MCU designs to more display-heavy and connected embedded products without abandoning the familiar ESP development model.

source https://www.youtube.com/watch?v=uVr0JxrOTfU

N-iX Embedded Engineering, IoT Prototyping, Nordic Low-Power Devices & Robotics

Posted by – March 19, 2026
Category: Exclusive videos

This interview frames N-iX as a broad engineering partner rather than a narrow outsourcing vendor. The key point is its one-stop model: embedded software, hardware design, mechanical engineering, connectivity, cloud, and data work can be combined into a single product-development flow, which matters when companies need faster prototyping, tighter hardware-software integration, and fewer handoffs across suppliers. https://www.n-ix.com/

The embedded team describes the practical side of that model well. Instead of focusing only on firmware, they talk about building real devices end to end, including enclosures, electronics, and product-level design decisions. That makes this less about coding capacity and more about full-cycle embedded engineering, where board design, RTOS or Linux software, wireless connectivity, mechanical constraints, validation, and manufacturability all need to line up.

A useful detail in the conversation is how N-iX uses platforms such as Raspberry Pi and Arduino. These are presented mainly as prototyping tools, but also as fast paths for proof-of-concept work where teams need to validate sensing, control logic, motion, and obstacle avoidance before moving to a more production-oriented architecture. The robotic arm demo fits that pattern: rapid iteration around edge control, object handling, and system behavior, with the prototype acting as a bridge between concept and deployable product.

The mention of Nordic Semiconductor also points to a more specific technical direction: low-power connected devices. That usually means Bluetooth Low Energy, battery-optimized wearables, asset trackers, sensor nodes, and other designs where power budgeting, radio performance, firmware efficiency, and long maintenance cycles matter as much as raw compute. Seen that way, the video is really about how an engineering services company positions itself across the full embedded stack, from early prototype hardware to connected edge and IoT product development. The interview was filmed at Embedded World 2026 in Nuremberg.

source https://www.youtube.com/watch?v=aZFiVUrnBLw

CTRL+N Railway RTLS Wearables AI Multimeter Embedded Safety

Posted by – March 19, 2026
Category: Exclusive videos

CTRL+N presents itself here as a Serbian engineering startup building both hardware and software around embedded systems, with a clear focus on IoT, RTLS, wearables and AI-enabled digital platforms for industrial use. In this interview, the company frames its value around practical field devices rather than generic demos, showing how sensing, positioning and human-machine interaction can be combined into compact products for real deployments. https://ctrln.tech/

The strongest use case in the video is railway safety. CTRL+N shows a digital signalling and worker-safety concept built on embedded electronics, wireless connectivity and precise location awareness, so field personnel can be tracked relative to infrastructure and hazards. That points to a broader architecture built around RTLS, low-power radios such as Bluetooth Low Energy, edge sensing and alert logic, where worker position, status and alarm conditions can be fed into a supervision layer rather than handled as isolated devices.

The wearable element is especially relevant because it turns the system into something operational at track level. A wrist-worn or body-worn node that can vibrate, flash alarms and report location or basic vital-state data is a practical embedded design problem: power budget, ruggedization, wireless reliability, latency and usability all matter more than consumer-style features. In that sense, the video is less about a gadget and more about occupational safety infrastructure built from embedded hardware, firmware and connected software.
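The alerting logic such a wearable feeds is conceptually simple; the hard engineering lives in the radio link and power budget. A toy sketch of the supervision-side geofence check (thresholds, positions and function names are invented, not CTRL+N's implementation):

```python
from math import hypot

def hazard_alerts(workers, hazards, warn_m=10.0, alarm_m=5.0):
    """Classify each worker against the nearest hazard point.

    workers: dict id -> (x, y) position in metres (e.g. from an RTLS fix)
    hazards: list of (x, y) hazard centre points
    Returns id -> "alarm" | "warn" | "clear".
    """
    status = {}
    for wid, (wx, wy) in workers.items():
        d = min(hypot(wx - hx, wy - hy) for hx, hy in hazards)
        if d <= alarm_m:
            status[wid] = "alarm"     # trigger vibration and flashing on the node
        elif d <= warn_m:
            status[wid] = "warn"
        else:
            status[wid] = "clear"
    return status

status = hazard_alerts(
    {"w1": (0.0, 0.0), "w2": (7.0, 0.0), "w3": (30.0, 0.0)},
    hazards=[(0.0, 3.0)],
)
```

In a real system the hazard set would be dynamic (an approaching train is a moving point), and the interesting design question is how much of this check runs on the wrist node versus the supervision layer, given BLE latency and battery constraints.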

Another interesting detail is the AI-assisted multimeter concept. Instead of treating a measurement tool as a passive instrument, CTRL+N describes a compact tester with a chatbot-style interface that helps technicians investigate rail-track faults and interpret readings locally. That suggests a direction where field diagnostics blend measurement electronics, embedded UI, contextual guidance and AI support, giving junior engineers faster troubleshooting workflows while reducing dependence on constant access to senior staff. The interview was filmed at Embedded World 2026 in Nuremberg, where that mix of rail tech, wearables, RTLS and AI-backed maintenance made CTRL+N stand out as a systems-oriented engineering company rather than a single-product vendor.

source https://www.youtube.com/watch?v=16uPvs8tFE8

Altium Octopart Discover system design search, BOM sourcing, reference designs, CAD workflow

Posted by – March 19, 2026
Category: Exclusive videos

Altium positions Octopart Discover as a step beyond classic component lookup, turning Octopart from a parts search engine into a system-level discovery workflow. The core idea in this interview is persistent design intent: engineers can start with requirements, narrow options by context such as power, performance, lifecycle status or sourcing constraints, and carry those decisions through architecture, PCB design and procurement instead of losing that reasoning between tools. https://octopart.com/octopart-discover

What stands out is the shift from part-centric filtering to electronics system design. Rather than only searching for a specific IC or passive, the platform is shown handling reference designs, functional blocks, simulation assets, CAD data, lifecycle flags, alternates and distributor availability in one flow. That makes the tool relevant not just for component engineers but also for embedded software teams, hardware architects, sourcing specialists and manufacturing teams trying to converge earlier on a viable BOM.

The demo also suggests a more interactive reference design workflow. Users can inspect schematics and PCB context, view board layers and 3D geometry, drill into component properties, compare alternates, and keep a record of the technical questions raised with field application engineers around a given design choice. That is important because many embedded projects fail less on raw part search than on handoff friction: why a device was chosen, whether it remains recommended for new design, and what constraints shaped the original decision.

On the Octopart side, the scale still matters. The demo references a component database in the tens of millions, live pricing and stock visibility, distributor and manufacturer normalization, and BOM-level purchasing flows that can move from architecture to preferred sourcing channels with fewer spreadsheet exports. For engineers dealing with second-source strategy, compliance, availability windows, regional supply conditions or cost-down work, that combination of technical metadata and sourcing context is where the platform becomes more than search.
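The alternate-selection step described here can be sketched as a simple filter over part metadata. The field names and values below are illustrative only and do not reflect the actual Octopart API schema:

```python
def viable_alternates(parts, min_stock=1000):
    """Keep parts that are safe for new designs and actually purchasable,
    cheapest first. Field names are hypothetical, not Octopart's schema."""
    ok = [
        p for p in parts
        if p["lifecycle"] == "active"   # exclude NRND / EOL parts
        and p["stock"] >= min_stock     # require real distributor stock
    ]
    return sorted(ok, key=lambda p: p["price"])

parts = [
    {"mpn": "X1-A", "lifecycle": "active", "stock": 12000, "price": 0.42},
    {"mpn": "X1-B", "lifecycle": "nrnd",   "stock": 50000, "price": 0.31},
    {"mpn": "X1-C", "lifecycle": "active", "stock": 800,   "price": 0.28},
    {"mpn": "X1-D", "lifecycle": "active", "stock": 4000,  "price": 0.35},
]
choices = viable_alternates(parts)  # cheapest compliant parts survive
```

Note what the filter throws away: the two cheapest parts lose on lifecycle and stock respectively. Capturing *why* they were rejected, alongside the BOM itself, is the "persistent design intent" argument the interview keeps returning to.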

Filmed at Embedded World 2026 in Nuremberg, this conversation is really about how EDA, supply chain data and early system architecture are starting to merge. Octopart Discover is presented not as a closed CAD feature but as an open, cross-ecosystem layer that can connect reference designs, component intelligence, distributor data and downstream implementation tools. If Altium executes on that open workflow, it could make early-stage embedded design more traceable, more procurement-aware and much faster to move from concept to production.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=SL3y-r2sSuM

SiliconAuto XMotiv M3, ZF Interface Chip, ADAS Pre-Processing, ADB Lighting and Digital Twin

Posted by – March 19, 2026
Category: Exclusive videos

SiliconAuto is positioning itself as a new automotive semiconductor player focused on the control layer that sits between high-level compute and real-time vehicle behavior. In this interview, the company frames its first XMotiv M3 microcontroller as part of a broader move toward automotive HPC, MCU and high-speed interconnect architectures built for low latency, functional safety and deterministic motion control rather than consumer-style compute alone. https://www.siliconautotech.com/

The technical story is really about partitioning. Instead of forcing a central SoC to absorb every sensor, control and housekeeping task, SiliconAuto argues for distributing work across a safety-oriented MCU and companion devices that handle timing-critical jobs closer to the edge. That matters in ADAS and automated driving, where sensor fusion, bounded latency, power limits and fail-operational behavior all shape the system architecture more than raw TOPS figures do.

The XMotiv M3 itself is described as a TSMC 28 nm automotive MCU built around an Arm Cortex-M33 at 160 MHz, with TrustZone, HSM, random-number generation, CAN FD, SPI, I2C, UART and a large GPIO budget. In the demo, it drives an adaptive driving beam reference design with matrix LED control, regional dimming, steering-linked light shaping and welcome-animation features. The interesting angle is not just the headlamp demo, but the attempt to bring premium lighting control, reference code and faster integration paths to more mainstream vehicle programs.
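
The core of an adaptive-driving-beam controller is simple to sketch: compute a per-segment duty cycle across the matrix, shift the pattern with steering angle, and dim only the segments that would glare an oncoming driver. The sketch below is purely illustrative; the segment count, field of view and masking rules are invented for the example, not taken from SiliconAuto's reference design.

```python
# Illustrative adaptive-driving-beam logic: per-segment PWM duty cycles for a
# matrix LED headlamp. Segment layout, angles and masking rules are
# hypothetical, not SiliconAuto's actual firmware.

NUM_SEGMENTS = 16          # hypothetical matrix columns across the beam
FOV_DEG = 40.0             # hypothetical horizontal field of the lamp

def segment_angle(i: int) -> float:
    """Center angle in degrees of segment i, with 0 = straight ahead."""
    step = FOV_DEG / NUM_SEGMENTS
    return -FOV_DEG / 2 + step * (i + 0.5)

def adb_duty_cycles(steering_deg: float,
                    glare_zones: list[tuple[float, float]]) -> list[int]:
    """Return PWM duty (0-100) per segment: follow steering, dim glare zones."""
    duties = []
    for i in range(NUM_SEGMENTS):
        angle = segment_angle(i) + steering_deg * 0.5  # beam follows steering
        duty = 100
        for lo, hi in glare_zones:                     # e.g. an oncoming car
            if lo <= angle <= hi:
                duty = 5                               # dim, don't fully blank
        duties.append(duty)
    return duties

# Straight ahead, one oncoming vehicle reported between -10 and -4 degrees:
pattern = adb_duty_cycles(0.0, [(-10.0, -4.0)])
```

The same shape of loop, driven by camera detections and steering data, is what a matrix-LED reference design ultimately has to run every frame within a hard latency budget.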

A second thread in the video is SiliconAuto’s work with ZF on an award-winning I/O architecture shown at Embedded World 2026 in Nuremberg. Here the MCU acts as a safety and system-management companion for a chip handling camera and sensor pre-processing, image signal processing, radar-related data paths and AI-assisted detection. The broader implication is a chiplet-friendly automotive compute stack where OEMs can mix performance SoC, AI accelerator and I/O domains with more flexibility, while reducing CPU overhead, DDR traffic and sensor-interface bottlenecks.

The digital-twin demos push that idea further by showing software, AI inference benchmarking and even robotic-arm control before final silicon is available. That early virtual-platform workflow is increasingly relevant for automotive, robotics and drone development, where validation time, toolchain maturity and faster concept-to-production cycles can be just as important as the silicon itself. Overall, the video shows SiliconAuto less as a single-chip launch and more as an attempt to define a modular automotive compute model around safety MCU, sensor pre-processing, ADB lighting, UCIe-era integration and real-time motion.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=sxBNICzdrFo

MediaTek Booth Tour at Embedded World 2026: Genio Pro 5100, Genio 360, 420, 520, 720, Edge AI, OSM

Posted by – March 19, 2026
Category: Exclusive videos

MediaTek’s Embedded World 2026 booth tour centers on a broader edge AI compute stack for industrial and embedded systems, with the Genio family now spanning entry, mid-range, and higher-performance tiers. The key message is platform continuity: shared software direction, scalable AI acceleration, and pin-compatible options that let OEMs move between performance classes without redesigning everything from scratch. In practice, that matters for robotics, HMI, smart retail, machine vision, industrial IoT, and connected equipment that need on-device inference rather than cloud dependence. https://www.mediatek.com/products/iot/genio-iot

The newly discussed Genio 360 is positioned as a major refresh of the lower end of the lineup, replacing a much older class of part with a hexa-core 6nm design and up to 6 TOPS for edge inference. That is a meaningful jump for cost-sensitive devices that still need practical AI workloads such as object detection, pose estimation, gesture recognition, vision-based monitoring, and lightweight generative AI at the edge. Above that, the Genio 420 extends the range, while the previously introduced Genio 720 and 520 bring 10 TOPS on 6nm silicon with octa-core CPU configurations and support for LPDDR4 or LPDDR5 memory.

At the top of this discussion is the new Genio Pro tier, presented here as a 50 TOPS class platform aimed at heavier edge AI and robotics workloads. That shifts MediaTek’s embedded portfolio closer to use cases involving multimodal perception, larger vision models, more demanding transformer inference, autonomous mobile systems, and local LLM deployment in the 7B class and beyond, depending on model optimization, quantization, memory footprint, and thermal design. The emphasis is not only on raw TOPS but on a combination of CPU headroom, multimedia capability, memory bandwidth, and developer readiness through early kits and partner designs.

The demo ecosystem in the booth shows how that silicon strategy turns into deployable products. MediaTek highlights partner SOMs and OSM modules, including compact designs from companies such as Mitwell, plus embedded boards built around parts like the Genio 1200 for display-heavy systems. One of the more concrete examples here is a 6DoF tracking setup for forklifts and mobile equipment, illustrating how edge AI, sensors, and embedded compute can be packaged into aftermarket or OEM industrial systems. Filmed at Embedded World 2026 in Nuremberg, the video gives a useful snapshot of where MediaTek is heading: from entry-level embedded AI up to high-throughput edge compute for robotics, vision, and industrial automation.

source https://www.youtube.com/watch?v=gtCAVdaedqI

Premio modular rugged edge AI computers, Jetson Orin, EDGEBoost, railway and vision systems

Posted by – March 17, 2026
Category: Exclusive videos

Premio’s latest platform story is really about rugged edge compute becoming more modular, more serviceable, and more AI-specific at the same time. The interview focuses on fanless industrial computers, panel PCs, and display systems designed for harsh deployments where vibration tolerance, thermal design, and lifecycle flexibility matter as much as raw performance. A central theme is Premio’s EDGEBoost architecture, which lets users configure I/O, storage, networking, and acceleration around a standardized core rather than forcing a fixed box into every deployment. https://premioinc.com/

That modular approach shows up in several places: M12 connectivity, dual 10GbE, PoE, out-of-band remote management, lockable storage, safe-eject logging features, and expansion paths for NVMe and GPU resources. The pitch is not just customization for its own sake, but faster deployment in industrial environments where requirements vary between vehicle systems, rail, machine vision, data logging, and field-installed automation. Premio also ties this to IEC 62443-4-1 processes, which matters for customers now treating cybersecurity and maintainability as part of the hardware spec rather than an afterthought.

The strongest technical segment is around rugged AI computers based on NVIDIA Jetson, especially Jetson AGX Orin and Orin-class systems for robotics, surveillance, ADAS, and anomaly detection. The transcript highlights GMSL camera support for low-latency long-cable video links in trucks and rail, plus IP66 designs for condensation-prone deployments. That combination of sealed enclosure design, fanless thermal engineering, and transport-focused compliance such as EN50155 and E-Mark is what makes these systems relevant beyond the demo table and into real railway and in-vehicle edge AI rollouts.

Another useful angle is Premio’s view of the “physical AI” compute ladder. At the low end, x86 platforms with integrated NPUs handle compact fanless inference. Moving up, M.2 AI accelerator cards add higher channel density for vision workloads without the power and size penalty of multiple discrete GPUs. Then Jetson Orin and larger GPU-based x86 systems take over for vision-language models, multimodal inference, and on-prem industrial AI where bandwidth, privacy, and latency make cloud-first architectures less practical. Filmed at Embedded World 2026 in Nuremberg, the interview reflects a market that is clearly shifting from simple object detection toward local VLM, SLM, and multimodal edge deployments.

The smart terminal and OEM/ODM sections complete the picture. Premio is not only selling rugged boxes, but also modular display systems where damaged front-end panels can be replaced without scrapping the compute backend, which is a practical design choice for glove-heavy industrial use. Combined with board-level customization, waterproof housings, and tailored I/O, the company is positioning itself as a hardware partner for system integrators building Industry 5.0, smart city, inspection, surveillance, and robotics platforms where reliability, thermal validation, and configurability all have to coexist.

source https://www.youtube.com/watch?v=rEmbvGBsz4I

Innatera Pulsar neuromorphic MCU, SNN edge AI, radar presence sensing and audio classification

Posted by – March 17, 2026
Category: Exclusive videos

Innatera is positioning neuromorphic computing as a practical way to run always-on sensor AI without the usual power penalty. In this interview, the company explains how its Pulsar chip combines spiking neural networks, a RISC-V microcontroller, and a CNN accelerator in a single sensor-edge device, so pattern recognition can happen continuously where data is created rather than being pushed to a larger processor or the cloud. That makes the discussion less about raw TOPS marketing and more about system-level efficiency, latency, and battery life. https://innatera.com/pulsar

The key idea is that Pulsar uses silicon neurons and synapses across digital and analog spiking fabric to process sensory events in a brain-inspired way. Instead of treating AI as a separate block bolted onto a conventional embedded design, Innatera presents neuromorphic inference as part of the whole SoC architecture. The result is a platform aimed at sub-millisecond reaction time, low data movement, and ultra-low-power operation for audio, radar, vibration, and other continuous sensor streams at the edge.
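The "silicon neurons" idea maps onto the classic leaky integrate-and-fire model: a membrane potential that leaks toward rest, integrates incoming events, and emits a spike only when a threshold is crossed. The toy implementation below shows why this is attractive for always-on sensing; the parameters are illustrative textbook values, not Innatera's actual fabric.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the textbook model behind
# spiking neural fabrics. Leak and threshold values are illustrative only.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate a stream of input currents; emit 1 on a spike, else 0."""
    v = 0.0                     # membrane potential
    spikes = []
    for x in inputs:
        v = v * leak + x        # leak toward rest, then integrate the input
        if v >= threshold:      # threshold crossing -> spike
            spikes.append(1)
            v = 0.0             # reset after firing
        else:
            spikes.append(0)
    return spikes

# A burst of input events drives the neuron over threshold, then it resets:
out = lif_run([0.5, 0.5, 0.5, 0.0, 0.0, 0.9, 0.9])  # -> [0, 0, 1, 0, 0, 0, 1]
```

The efficiency argument falls out of the model: between events the neuron does essentially nothing, so a quiet sensor stream costs almost no computation, which is exactly the always-on, battery-powered case the interview describes.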

What makes the video interesting is that the story quickly moves from architecture to concrete product categories. The live demos include real-time audio classification, audio scene recognition for adaptive headphones, radar-based human presence detection, and predictive maintenance based on vibration sensing. These are all workloads where conventional embedded AI often struggles with the tradeoff between accuracy and always-on operation. Innatera’s claim is that spiking neural networks can keep sensing active full time while staying inside the power budget of compact battery-powered devices.

There is also a strong ambient intelligence theme running through the interview. A notable example is the radar-based human presence detector developed with Socionext, targeting extremely low-power detection for devices such as smart doorbells. Another is the intelligent smoke detector described here, which adds classification and occupancy awareness rather than acting as a simple threshold alarm. Filmed at Embedded World 2026 in Nuremberg, the demo set gives a useful snapshot of where neuromorphic edge AI is heading: not as a research novelty, but as embedded silicon for smart home, industrial IoT, wearables, and safety systems alike.

The company background matters too. Innatera spun out of Delft University of Technology in 2018 after years of research into brain-inspired and energy-efficient computing, and the interview frames Pulsar as the point where that research becomes production silicon. That matters because the value proposition is not generic AI acceleration, but embedded pattern recognition that can stay on continuously in the field. For engineers building sensor-rich products, this is really a discussion about edge inference architecture, mixed-signal design, SNN deployment, and how to reduce power, latency, and bandwidth all at the same time.

source https://www.youtube.com/watch?v=jAM-sgLlmrg

Bosch Rexroth ctrlX OS on AMD: Secure Industrial Control, Soft PLC, Node-RED, Edge AI

Posted by – March 17, 2026
Category: Exclusive videos

Bosch Rexroth is positioning ctrlX OS as a hardware-independent industrial Linux platform for software-defined automation, where the same application stack can move across controllers, IPCs, edge systems and virtual environments. In this interview, the focus is on secure industrial control, app-based deployment, and a common runtime that lets developers build once and roll out across multiple device classes with far less integration work. https://www.ctrlx-os.com/

The demo shows how ctrlX OS can host different control approaches on the same data layer, from a soft PLC to Node-RED, while exposing machine states and digital I/O through a unified interface. That matters because industrial edge systems increasingly mix classic control logic, visualization, protocol handling, and data services on one platform rather than splitting them across isolated boxes.

A key theme here is the broader hardware reach created by Bosch Rexroth’s work with AMD. The transcript points to support for CPU, GPU and MPU resources, which fits the current push toward x86 embedded processors and adaptive SoC platforms for edge compute. For developers building compute-hungry workloads, that opens the door to more demanding HMI, analytics and edge AI pipelines without changing the operating-system layer or rewriting the deployment model.

Security and lifecycle management are just as central as performance. ctrlX OS is presented here as CRA-ready and aligned with IEC 62443-4-2 Security Level 2 expectations, while also giving access to the practical features engineers actually need in the field: backup and restore, reset, license management, app installation, and centralized access to every exposed data point. The result is less about a single controller and more about a secure, manageable OT software platform.

What makes the story interesting is the developer angle. Bosch Rexroth is clearly pushing an API-driven model where the same functions available in the web UI can also be automated through REST APIs, virtual controllers, SDK tooling, and reusable apps. Filmed at Embedded World 2026 in Nuremberg, this interview captures a broader transition in industrial automation: PLC logic, low-code tools, edge AI acceleration, and secure app deployment are starting to converge into one programmable software stack.

source https://www.youtube.com/watch?v=zIA8jK-tkFE

Tianma display roadmap: glass-free 3D, Mini-LED, transparent Micro-LED and HUD

Posted by – March 17, 2026
Category: Exclusive videos

Tianma’s display portfolio here is less about a single panel and more about how the company is packaging complete HMI platforms for industrial, medical, transport and automotive use. The interview moves from a 23.8-inch 4K2K industrial display to integrated systems where Tianma supplies not just the LCD or OLED, but also electronics, compute boards and enclosure design. That matters for OEMs building camera monitors, control terminals or specialized vision devices, because the value shifts from raw panel supply to full module integration, long-life support and design-in flexibility. https://www.tianma.eu/

A big theme in the booth tour is optical engineering for difficult environments. Tianma shows glass-free 3D with eye tracking, allowing a split between 2D UI and 3D visualization, which fits medical imaging and other workflows where depth cues matter but operators still need conventional data overlays. Mini-LED backlighting with local dimming is another clear focus, improving black levels and contrast for medical and inflight display use, while reflective display technology targets outdoor readability with far lower power draw than a conventional transmissive panel.

The industrial side is paired with application-specific hardware concepts, including a rugged professional tablet style monitor for camera and vision systems. What stands out is the combination of Tianma’s core display technologies with embedded electronics, suggesting a path from display component to near-finished device. The transcript also points to Rockchip-based electronics in the demo hardware, which reinforces the idea that Tianma is not just talking about panel specs, but about complete embedded display subsystems tuned for field use, sunlight readability and power efficiency.

On the automotive side, the most interesting pieces are transparent Micro-LED, long-shape Micro-LED formats and a Micro-LED source for head-up display architecture. That lines up with Tianma’s broader recent push into automotive Micro-LED and HUD concepts, including very high brightness projection-oriented displays and transparent surfaces that can turn glass areas into information layers. In that context, the booth demo feels like an extension of a wider strategy around smart cockpit display architecture, where LTPS LCD, AMOLED and Micro-LED each serve different HMI roles rather than competing as one universal technology.

Later in the video, filmed at Embedded World 2026 in Nuremberg, the broader message becomes clear: Tianma is positioning itself as a global display engineering partner with in-house coverage across TFT-LCD, LTPS, AMOLED, Mini-LED and Micro-LED, backed by manufacturing scale in Asia and regional support for European customers. The result is a story about display roadmaps, integration capability and application fit, from smartphones to digital signage to transportation and automotive cockpits, rather than a simple product launch.

source https://www.youtube.com/watch?v=r51NNAA56PY

Axelera Metis 214 TOPS and Europa Edge AI 629 TOPS: 8K Vision, RISC-V, Robotics, SLM, PCIe/M.2

Posted by – March 17, 2026
Category: Exclusive videos

Axelera positions itself as a European edge AI alternative focused on inference rather than training, and this interview makes that distinction clear. The main story is performance per watt: the company’s Metis platform is presented as delivering 214 TOPS at around 6W typical power, in compact M.2 and PCIe form factors that let developers add AI acceleration to existing x86 or Arm systems without redesigning the whole box. https://axelera.ai/
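Taken at face value, those figures put Metis in the mid-thirties of TOPS per watt. A quick back-of-envelope check, using the vendor-quoted numbers rather than independent measurements:

```python
# Efficiency implied by the quoted figures (vendor numbers, not measured):
tops, watts = 214, 6.0
tops_per_watt = tops / watts    # roughly 35.7 TOPS/W
```

Whatever the benchmark caveats, that ratio is the whole pitch: enough inference throughput to handle multi-stream vision inside the thermal envelope of a fanless M.2 or PCIe host.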

What stands out in the demo lineup is how practical the workloads are. Instead of benchmark theatre, the booth focuses on edge deployments such as native 8K video analytics, retail loss prevention, container inspection for rust and damage, and autonomous robotics. The point is not just raw throughput, but being able to process high resolution video streams and multiple models at the edge where thermal limits, latency, bandwidth, and total system cost matter more than in cloud-first AI.

The technical angle is also stronger than a typical trade-show pitch. Axelera describes Metis as combining digital in-memory computing for matrix-vector multiplication with a RISC-V based orchestration layer across four AI cores, which allows parallel or cascaded model execution. That architecture fits the current edge AI mix well: computer vision pipelines, multimodel workloads, and lighter generative AI tasks such as speech interfaces and small language models, rather than full-scale training or oversized server-class LLM deployments.
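The cascaded-execution idea is easy to illustrate: a cheap first-stage model gates which inputs ever reach a heavier second stage, so most frames pay only the small model's cost. The sketch below uses stand-in functions to show the control flow; it is a generic pattern, not the Voyager SDK API.

```python
# Generic cascaded-inference sketch: a cheap first-stage model filters the
# input, and only surviving candidates pay for the heavier second stage.
# Both "models" here are trivial stand-ins, not Axelera's actual toolchain.

def stage1_detect(frame):
    """Cheap detector stand-in: return candidate regions (values above 0.5)."""
    return [v for v in frame if v > 0.5]

def stage2_classify(region):
    """Heavier classifier stand-in: label each surviving candidate."""
    return "anomaly" if region > 0.8 else "ok"

def run_cascade(frames):
    """Only regions that pass stage 1 reach stage 2 - the cascade saves compute."""
    results = []
    for frame in frames:
        for region in stage1_detect(frame):
            results.append(stage2_classify(region))
    return results

labels = run_cascade([[0.1, 0.6], [0.9], [0.2, 0.3]])  # -> ["ok", "anomaly"]
```

On hardware with multiple AI cores, the same structure can also run the stages in parallel across streams, which is the point of having a RISC-V orchestration layer schedule the work.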

The roadmap matters just as much as the current chip. In the interview, Axelera points to Europa as the next step for premium edge systems, robotics, VLM-style contextual understanding, and larger language models beyond the current memory envelope. That lines up with the company’s broader push this year around Metis and Europa, its Voyager SDK toolchain, and ecosystem work that makes model conversion and deployment easier for developers moving from FP32 training environments to efficient edge inference.

Filmed at Embedded World 2026 in Nuremberg, this conversation shows why Axelera is getting attention in European semiconductor and edge AI circles: not because it claims to replace GPU training infrastructure, but because it targets the part of the stack where many industrial systems actually live. Low-power inference, compact accelerators, RISC-V control, DDR5-backed memory bandwidth, and deployable computer vision pipelines are the core themes here, with Europe’s supply-chain and sovereignty angle sitting in the background rather than dominating the pitch.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=iJrwV9zM53A

ProvenRun ProvenCore EAL7, Automotive Ethernet Protocol Break, Formal OS, ProvenHSM, STM32H5, PQC

Posted by – March 16, 2026
Category: Exclusive videos

ProvenRun is making a case for embedded security that starts below the application layer, with a mathematically verified trusted base rather than another add-on middleware stack. In this interview, the company explains how ProvenCore, its formally proven secure OS and TEE, is used to build high-assurance systems for automotive, avionics, defense, microcontrollers and cloud security, with the goal of reducing attack surface, simplifying certification and keeping long lifecycle products maintainable. https://provenrun.com/

A big part of the discussion is the shift to software-defined vehicles and zonal automotive Ethernet. ProvenRun’s protocol-break approach fully deconstructs and reconstructs traffic between exposed domains and safety-critical zones, rather than relying only on segmentation. That matters for in-vehicle infotainment, connectivity modems and ADAS paths, where 1GbE and faster links now carry far more critical traffic than older in-car networks ever did.
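What separates a protocol break from ordinary filtering is that no raw bytes cross the boundary: traffic is fully parsed into typed fields, validated against an allow-list, and a fresh message is serialized on the other side, so malformed or smuggled data simply fails the parse. The sketch below illustrates the idea with an invented 4-byte frame format; it is not ProvenRun's gateway implementation.

```python
# Minimal protocol-break sketch: incoming bytes are fully deconstructed into
# typed fields and a *new* frame is serialized for the protected side.
# The 4-byte frame format here is invented purely for illustration.

import struct

def parse_frame(raw: bytes) -> dict:
    """Deconstruct: reject anything that is not an exactly well-formed frame."""
    if len(raw) != 4:
        raise ValueError("bad length")
    msg_id, value = struct.unpack(">BH", raw[:3])   # 1-byte id, 2-byte value
    checksum = raw[3]
    if checksum != (msg_id + value) & 0xFF:
        raise ValueError("bad checksum")
    if msg_id not in (0x01, 0x02):                  # allow-list of message types
        raise ValueError("unknown message id")
    return {"id": msg_id, "value": value}

def rebuild_frame(fields: dict) -> bytes:
    """Reconstruct: serialize a fresh frame from the parsed fields only."""
    body = struct.pack(">BH", fields["id"], fields["value"])
    return body + bytes([(fields["id"] + fields["value"]) & 0xFF])

def protocol_break(raw: bytes) -> bytes:
    """Only data that survives a full parse and re-serialization crosses over."""
    return rebuild_frame(parse_frame(raw))
```

Segmentation alone would forward the original bytes; here the original buffer is discarded, which is why the approach matters on automotive Ethernet links between exposed infotainment domains and safety-critical zones.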

The technical differentiator is formal methods. ProvenRun says ProvenCore remains the only operating system certified at Common Criteria EAL7, and that foundation is then reused for trusted applications such as secure storage, cryptography, PKCS#11, VPN, network stacks, secure firmware update and protocol filtering. The company also highlights compatibility with standard embedded security ecosystems including GlobalPlatform, PSA-style APIs, Android trusted applications and post-quantum cryptography work with CryptoNext.

The interview also touches the microcontroller side, where ProvenCore-M is positioned as a secure RTOS and TEE for Armv8-M class devices, including ST deployments around STM32 security architectures. That gives developers a pre-certified route to TrustZone-based isolation, secure services and easier product evaluation without having to design every security primitive from scratch. Filmed at Embedded World 2026 in Nuremberg, the demo shows how that same security-by-design philosophy is now being stretched from MCU roots into automotive gateways and trusted edge compute.

On the cloud side, ProvenRun is pushing ProvenHSM and ProvenBox as remotely manageable hardware-backed trust anchors for key management, crypto services and customizable secure applications. The interesting angle is not just HSM throughput, but compositional certification, cloud-native administration, FPGA-assisted crypto acceleration and a roadmap that includes PQC readiness. Overall, this is a useful look at how embedded cybersecurity is moving toward verifiable isolation, certifiable trusted execution and longer-term lifecycle assurance across both edge and data center scale.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=Cmz3ENmAYPs

eSOL eMCOS POSIX RTOS, ROS Middleware, Multicore ARM Cortex and RISC-V Embedded Full Stack

Posted by – March 16, 2026
Category: Exclusive videos

eSOL positions itself as a full-stack embedded software partner rather than a vendor selling only one RTOS layer. The core message in this interview is integration: a production-ready platform that combines the eMCOS real-time operating system, a POSIX-compliant profile, middleware for networking and robotics-oriented workflows, plus engineering services that extend from bring-up to certification. That matters for teams trying to reduce supplier fragmentation and keep one accountable path from hardware integration to deployed code. https://www.esol.com/

A key theme is the gap between prototype software and certifiable production systems. The demo points to ROS and model-based toolchains as part of the ecosystem, but the argument from eSOL is that open robotics frameworks alone are not always enough once determinism, safety, and real-time behavior become mandatory. In that context, eMCOS POSIX is presented as a way to preserve familiar POSIX development models while moving toward tighter scheduling control, certification targets, and system-level integration for embedded products.

What makes the platform interesting technically is scalability across compute classes. In the demo, the same runtime approach spans ARM Cortex-M, ARM Cortex-R, ARM Cortex-A and also RISC-V, reflecting eSOL’s long-standing focus on multi-core and many-core embedded architectures. That gives the interview a broader angle than a simple RTOS pitch: it is really about one software foundation that can move from small microcontrollers to larger heterogeneous SoCs without forcing a complete tooling reset or a redesign of the application stack at every step.

Recent eSOL direction adds useful context to what is shown here. The company has been expanding its Full Stack Engineering model in Europe, and its eMCOS POSIX profile gained ISO 26262 ASIL D compliance in 2025, which reinforces the interview’s emphasis on automotive-grade real-time software. eSOL has also been showing eMCOS in software-defined vehicle workflows, including virtual-platform work around Renesas R-Car, so the message here fits a wider industry push toward software-first development, safety partitioning, and faster validation at scale.

Overall, this is less about Linux replacement rhetoric and more about where a deterministic POSIX RTOS fits when embedded teams need predictable latency, certification support, multicore scaling, and one engineering interface across the stack. The interview was filmed at Embedded World 2026 in Nuremberg, and it frames eSOL as a company targeting automotive, robotics, industrial and medical designs where middleware compatibility, long-term support, and integration ownership are often worth as much as raw kernel features in practice.

All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=iEaaI6PVweQ