RECOM Low-Voltage High-Current Power Modules, from 25A PoL for AI, FPGA and DDR up to 150A Multiphase Rails

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V and 0.5V depending on the part. That fills a gap between intermediate bus conversion and the final high-current core rail, where size, efficiency and layout matter most. https://recom-power.com/

The key theme here is what happens when SoCs, FPGAs and AI accelerators keep adding compute density while core voltages keep dropping. Lower voltage helps switching speed, but it pushes current sharply upward, so the power stage has to deliver tens or even hundreds of amps in a very small footprint. RECOM positions these modules as scalable building blocks: 25A per unit, 50A with two devices, and up to 150A through multiphase paralleling, aimed at robotics, machine vision, automotive compute and other embedded platforms with fast load steps.

A major technical point in the interview is transient response. Modern processors can jump from sleep to full activity extremely fast, so the regulator has to react before the rail drifts out of tolerance. RECOM’s adaptive constant-on-time control is presented as a way to respond faster than a conventional clock-cycle-limited loop, while also allowing lower output capacitance. That matters because less capacitance can reduce board area, BOM cost and stored energy on the rail, all while keeping the supply stable during aggressive current swings.
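The link between loop speed and output capacitance can be made concrete with simple charge balance: during a load step, the capacitor bank must supply the current until the loop responds, so the required capacitance scales with response time. The numbers below are illustrative assumptions, not RECOM specifications.

```python
# Back-of-the-envelope link between control-loop response time and the
# output capacitance needed to ride through a load step: C >= dI * t / dV.
# All figures are assumed for illustration, not RECOM datasheet values.

def min_output_capacitance(load_step_a, response_time_s, allowed_droop_v):
    """Capacitance needed to hold the rail within tolerance while the
    regulator catches up with a load step (simple charge balance)."""
    return load_step_a * response_time_s / allowed_droop_v

step = 20.0     # A, assumed load step
droop = 0.030   # V, assumed allowed droop on a sub-1V rail

slow_loop = min_output_capacitance(step, 5e-6, droop)  # clock-limited loop
fast_loop = min_output_capacitance(step, 1e-6, droop)  # faster COT-style loop
print(f"slow loop: {slow_loop*1e6:.0f} uF, fast loop: {fast_loop*1e6:.0f} uF")
```

The fivefold reduction in response time translates directly into a fivefold reduction in required bulk capacitance, which is the board-area and BOM argument made in the interview.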

Another important layer is programmability. With PMBus telemetry and control, the module is not just a fixed converter but part of the system architecture. Output voltage can be trimmed very accurately, operating behavior can be tuned for different modes, and voltage margining can match the needs of individual processors characterized at the factory. In practice, that means the rail can be optimized for performance, efficiency and reliability instead of treating power as a static afterthought. The video was filmed at Embedded World 2026 in Nuremberg, where this kind of low-voltage, high-current power delivery is becoming central to embedded AI and high-density compute.
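Most PMBus telemetry values (output current, temperature, and so on) arrive as LINEAR11-encoded 16-bit words, so host software has to decode them before the numbers mean anything. A minimal decoder, following the format defined in the PMBus specification, looks like this; the raw word is a made-up example, not a value read from a RECOM module:

```python
# Decoding a PMBus LINEAR11 telemetry word (used by READ_IOUT,
# READ_TEMPERATURE_1, etc.): value = Y * 2^N, where N is a 5-bit and Y an
# 11-bit two's-complement field packed into one 16-bit word.

def decode_linear11(word):
    exponent = (word >> 11) & 0x1F
    mantissa = word & 0x7FF
    if exponent > 15:       # sign-extend the 5-bit exponent
        exponent -= 32
    if mantissa > 1023:     # sign-extend the 11-bit mantissa
        mantissa -= 2048
    return mantissa * 2.0 ** exponent

raw = (0b11110 << 11) | 100   # N = -2, Y = 100 (example word, not hardware data)
print(decode_linear11(raw))   # -> 25.0 (e.g. amps from READ_IOUT)
```

Output voltage commands and margining use the related LINEAR16 format, where the exponent comes from VOUT_MODE instead of being packed into each reading.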

The broader context also matters. RECOM highlights a portfolio that runs from tiny isolated converters to high-power systems, and its latest public messaging around Embedded World 2026 also points to discrete power IC and transformer options alongside PoL modules. That makes this launch interesting not just as one new regulator, but as part of a wider push toward configurable, modular power design. For engineers working on next-generation FPGA, SoC and edge AI hardware, the real takeaway is simple: power delivery is now an active design domain, with telemetry, programmability, interleaving, EMI behavior and transient control all shaping what the processor can actually do.

RECOM High-Current PoL Modules, PMBus Control, for FPGA and SoC

RECOM PMBus Power Delivery for SoC and FPGA, 0.35V Rails and 25A PoL Modules

source https://www.youtube.com/watch?v=L91dBTq3rK8

RECOM 65W GaN AC/DC, 1200W Fanless PMBus PSU, 2U DIN Rail Power

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is showing how far compact AC/DC design has moved when mechanical compatibility stays fixed but output power climbs sharply. The headline part here is the new 65W PCB-mount AC/DC family, presented in the same footprint and pinout as an earlier 30W generation, so designers can scale power without rerouting the board or redesigning the front end. The move to GaN switching is central: faster switching, higher efficiency, smaller magnetics and better power density all show up directly in the module size, transformer reduction and lower material use. https://recom-power.com/

What makes that interesting is not only density, but migration path. A pin-compatible upgrade from lower power to 65W is useful for products that start with one load profile and later need more headroom, whether that is for industrial control, embedded compute, test equipment or medical electronics. The open-frame variant shown in the interview pushes the same platform into chassis-mount use, with integrated surge handling and common-mode filtering aimed at installations where grounding, EMI and earth-loop behavior matter more than in a floating-output board design.

The bigger power story is the fanless 1200W class. RECOM’s RACM1200-V platform is built around baseplate cooling, up to 1000W continuous fanless output with 1200W boost, PMBus visibility, and digital control for monitoring, fault handling and application-specific behavior. That makes it relevant for medical, industrial and automation systems where acoustics, reliability and service life often matter more than adding a fan. The interview also touches on firmware tuning, power limiting and protection strategy, which is increasingly where power supplies become part of the system architecture rather than just a power brick.

Another practical angle is cabinet density. RECOM’s newer ultra-slim DIN-rail family uses a 2U step-shape format for 30W, 60W and 90W versions, keeping the same width while pushing higher output into flat distribution panels and home or building automation cabinets. The 90W version is especially notable because RECOM positions it against wider conventional alternatives, with high efficiency, push-in terminals, audible-noise suppression and tighter panel utilization. Filmed at Embedded World 2026 in Nuremberg, the discussion ties together GaN, thermal design, EMC filtering, PMBus telemetry and mechanical standardization in a way that feels very relevant to current embedded power design.

Overall, this is less about one isolated launch and more about RECOM’s broader direction: higher power density where GaN makes sense, digital control at higher wattage, and space-efficient AC/DC form factors for embedded and automation installs. The useful takeaway is that smaller magnetics, slimmer DIN-rail geometry, conduction-cooled kilowatt supplies and drop-in board upgrades are all converging toward the same goal: more power in less volume, with fewer compromises in certification, thermal behavior and integration effort.

source https://www.youtube.com/watch?v=-hISqLa3kmg

Thistle Technologies Edge AI Security, Secure Boot, OTA Updates, Model Signing

Posted by – March 13, 2026
Category: Exclusive videos

Thistle Technologies is tackling a familiar embedded problem: the industry knows what strong security should look like, but secure boot, signed firmware, encrypted updates, hardware root of trust integration, and key handling still take too much board-specific work for most teams. This interview explains how Thistle is trying to compress that effort from months into hours by giving device makers one platform for secure boot enablement, OTA orchestration, firmware signing, release control, and now protected Edge AI model deployment. https://thistle.tech/product

A key point here is that AI models on embedded devices now need the same trust chain as firmware. Thistle’s approach is to sign, encrypt, version, and verify models back to hardware so the device can confirm it is running the intended model rather than an injected or tampered payload. That matters for Edge AI pipelines where models change frequently, but provenance, integrity, and anti-extraction controls have to stay intact across deployment and update cycles. Embedded Computing Design’s 2026 Best in Show coverage frames this as hardware-anchored trust, model signing, provenance tracking, and protected delivery for Edge AI systems.
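The shape of that verify-before-load step can be sketched in a few lines. This is a deliberately simplified stand-in, not Thistle's implementation: a real deployment would use asymmetric signatures chained to a hardware root of trust, while this stdlib-only sketch substitutes an HMAC with a device-held key, and all names, keys and payloads are illustrative.

```python
# Sketch of verifying a signed, versioned Edge AI model before loading it.
# HMAC stands in for the asymmetric, hardware-anchored signature a real
# system would use; the key and model bytes are illustrative only.
import hashlib
import hmac
import json

DEVICE_KEY = b"provisioned-at-manufacture"  # stand-in for a hardware-held key

def sign_model(model_bytes, version):
    manifest = {"sha256": hashlib.sha256(model_bytes).hexdigest(),
                "version": version}
    payload = json.dumps(manifest, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, tag

def verify_model(model_bytes, manifest, tag, min_version):
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                      # manifest tampered or wrong key
    if manifest["version"] < min_version:
        return False                      # rollback / downgrade attempt
    return manifest["sha256"] == hashlib.sha256(model_bytes).hexdigest()

model = b"\x00fake-model-weights"
manifest, tag = sign_model(model, version=3)
print(verify_model(model, manifest, tag, min_version=2))         # True
print(verify_model(model + b"x", manifest, tag, min_version=2))  # False: tampered
```

The version check is what blocks the "injected or tampered payload" case discussed above, including replays of an older, vulnerable model.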

The demos make that concrete across very different hardware classes: small MCU-scale targets, Linux systems, Qualcomm platforms, MediaTek designs, and boards using Infineon OPTIGA Trust M. What stands out is the unified control plane: one backend for secure OTA, encrypted firmware bundles, model rollout, and version management across heterogeneous fleets. Thistle’s own product material also highlights CI/CD-oriented release tooling and Cloud KMS-backed signing flows, which fits well with what is shown in the interview about practical key management instead of passing secrets around on laptops or USB sticks.

Another layer in the discussion is regulation. The video was filmed at Embedded World 2026 in Nuremberg, where security and lifecycle maintenance were major themes, and Thistle explicitly connects its stack to Europe’s Cyber Resilience Act. That alignment makes sense: CRA preparation is pushing manufacturers toward secure-by-design architectures, authenticated updates, vulnerability handling, and long-term maintenance for connected products. In that context, the value here is not a vague “security platform” pitch but a workflow that ties silicon security features, software release discipline, and field update reliability into one operational path.

The most interesting part of the conversation is also the most realistic one: nobody claims 100% security. Instead, the argument is that embedded systems controlling physical processes, infrastructure, robotics, and safety-relevant equipment can no longer accept weak boot chains, ad hoc signing, or unsecured model refresh. For teams shipping connected products with Edge AI, this is really about reducing attack surface while keeping deployment practical: secure boot, encrypted OTA, hardware-backed key custody, model verification, and fleet-wide update management brought into a single repeatable flow.

source https://www.youtube.com/watch?v=dbkKcFbHaOw

RECOM discrete DC/DC solutions, isolated power ICs and SMD transformers explained

Posted by – March 13, 2026
Category: Exclusive videos

RECOM is broadening its power portfolio beyond classic modules and into discrete isolated DC/DC building blocks, giving design teams a more flexible path from concept to production. The key idea in this interview is not just component availability, but a structured design flow built around matched power ICs, SMD transformers, and ready-made discrete reference solutions. Instead of forcing engineers to choose between a fully integrated module and a fully custom analog design from scratch, RECOM is positioning itself in the middle with pre-matched combinations that remove much of the uncertainty from isolated power design. https://recom-power.com/

What makes the concept interesting is the “your design, your choice” approach. An engineer can start with only the IC, select an IC plus a validated matching transformer, or order a complete discrete low-power isolated DC/DC implementation prepared by RECOM. That matters because transformer-driver matching is often where discrete converter design becomes slow and risky, especially when magnetics, topology, isolation constraints, and board-level integration all have to line up at once.

The technical focus is clearly on low-power isolated DC/DC conversion, where the interplay between the controller IC and the transformer largely defines whether the design behaves properly. RECOM highlights very small ICs, compact SMD transformers, and board-level discrete solutions that can be tested directly in an application. This gives developers a way to evaluate isolated converter behavior, tune system requirements, and decide whether a modular converter, a semi-custom discrete stage, or individual discrete parts is the better fit for cost, layout, and product differentiation.

The main value proposition here is speed. RECOM says it can deliver a ready discrete solution within 20 days, which shifts the conversation from pure component sourcing to design acceleration and faster time to market. For embedded developers working on industrial, communications, automation, or edge electronics, that can be more important than squeezing out a marginal efficiency gain, because the real bottleneck is often engineering time, validation effort, and getting hardware into the field quickly. The video was filmed at Embedded World 2026 in Nuremberg, where this launch was presented as a bridge between RECOM’s established module business and a new discrete power strategy.

Overall, the story is about giving engineers more control without pushing all the risk back onto them. RECOM is using the know-how it built through years of DC/DC module design and exposing part of that expertise through matched IC-transformer pairs and pre-built discrete solutions. That turns isolated power from a slow, magnetics-heavy design exercise into something closer to a configurable platform, which is a notable shift for teams that need isolation, compact SMD implementation, and faster prototyping without abandoning the option of deeper customization later on.

source https://www.youtube.com/watch?v=f6SsrygbdEk

Renesas RH850/U2B at Embedded World 2026, Motor Control, FFT, Zonal Controller

Posted by – March 13, 2026
Category: Exclusive videos

Renesas is showing a very practical side of the RH850/U2B here: how an automotive MCU can tackle a noisy BLDC motor with visible torque ripple, vibration, and cogging, then smooth it out with a dedicated compensation algorithm. Instead of framing motor control as an abstract benchmark, this demo makes the effect easy to hear, feel, and measure through the FFT view and the before/after response of the system. https://www.renesas.com/en/products/rh850-u2b

The key technical point is hardware offload. In this setup, the compensation workload runs on the RH850/U2B embedded hardware accelerator rather than relying only on the main CPU cores, which cuts the control cycle time from roughly 15.4 microseconds to about 5 microseconds. That kind of latency reduction matters in inverter and motor-control loops because it improves response, reduces ripple, and helps push precision further at low speed where cogging effects are easy to notice.
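Converting the quoted cycle times into loop rates makes the improvement easier to place: the accelerator roughly triples how often the controller can apply a torque-ripple correction.

```python
# Translating the quoted control-cycle times into update rates.
t_cpu, t_accel = 15.4e-6, 5.0e-6   # seconds, figures quoted in the demo

f_cpu = 1 / t_cpu       # ~65 kHz correction rate on the CPU cores
f_accel = 1 / t_accel   # 200 kHz with the hardware accelerator
print(f"{f_cpu/1e3:.0f} kHz -> {f_accel/1e3:.0f} kHz, "
      f"{t_cpu/t_accel:.1f}x faster")   # -> 65 kHz -> 200 kHz, 3.1x faster
```

At low motor speeds, more corrections per electrical revolution is exactly what suppresses audible cogging.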

What makes the demo more relevant than a simple motor-control board is where Renesas positions the device. RH850/U2B is part of its cross-domain automotive MCU family, aimed at zonal controllers and unified ECU designs where motor control, safety, security, and real-time processing increasingly need to coexist on one device. The discussion around ASIL certification, EVITA Full capability, multi-core processing, and lockstep support places this clearly in the context of modern vehicle E/E architecture rather than a standalone industrial drive demo.

Filmed at Embedded World 2026 in Nuremberg, the demo is a good example of how Renesas is linking motor-control quality to broader automotive compute trends: hardware acceleration, deterministic timing, functional safety, cybersecurity, and domain integration. The result shown here is simple but meaningful: lower acoustic noise, lower vibration, faster execution, and a more efficient control path for EV, HEV, actuator, and zonal automotive applications.

source https://www.youtube.com/watch?v=7-LnA57KlGo

Yocto Project at Embedded World 2026: LTS, SBOM, BitBake, RISC-V, Embedded Linux

Posted by – March 13, 2026
Category: Exclusive videos

This conversation frames Yocto less as a single distro and more as the infrastructure layer many embedded Linux teams eventually need once products move beyond quick demos. The interview highlights why developers keep coming back to it: reproducible builds, minimal images, board bring-up, source mirroring, A/B update workflows, and a build system that only pulls in what the target actually needs. That matters for performance, maintenance, and attack surface, especially when long-lived devices are deployed in volume. https://www.yoctoproject.org/

A big theme here is maintainability over time. The speakers point to the next Yocto LTS cycle, with four years of support, as a practical answer for product teams facing long qualification windows and regulatory pressure. Security is presented in a very concrete way: SBOM generation, vulnerability scanning, CVE tracking, and the ability to rebuild images quickly when fixes land. That makes Yocto relevant not just for BSP work and image creation, but for Cyber Resilience Act readiness and ongoing fleet maintenance in the field.
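As one concrete anchor, recent Yocto releases expose both capabilities as inheritable classes, so enabling them is a two-line change to `local.conf`. Class names have shifted between releases, so treat this as a sketch and check the manual for your release:

```conf
# local.conf -- emit SPDX SBOMs per image and scan recipes against the
# known-CVE database during the build (class names from recent releases).
INHERIT += "create-spdx cve-check"
```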

What also comes through is how much of Yocto’s value sits in BitBake and the surrounding workflow rather than in any single package set. The discussion around bitbake-setup, shared sstate cache, layer configuration, and reusable board support shows why experienced engineers see it as a build framework rather than just another embedded Linux option. First builds may take time, but incremental rebuilds, cache reuse across projects, and structured metadata make the system much more scalable once teams juggle multiple products, branches, and hardware targets.
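The cache reuse mentioned above is configured through a handful of standard variables; a typical `local.conf` fragment (paths and mirror URL are examples, not real infrastructure) looks like:

```conf
# local.conf -- share downloads and the sstate cache across projects so
# sibling builds and rebuilds reuse prior results. Paths are examples.
DL_DIR     = "/srv/yocto/downloads"
SSTATE_DIR = "/srv/yocto/sstate-cache"
SSTATE_MIRRORS = "file://.* https://sstate.example.com/PATH;downloadfilename=PATH"
```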

The interview also gives a useful view of Yocto’s hardware reach. ARM is treated as routine, cross-compilation is normal, and RISC-V now feels more strategic than experimental, with community layers, board support, and stronger testing infrastructure getting more attention. There is also an interesting hint that Yocto thinking may spread beyond classic embedded targets, especially through meta-virtualization, container image construction, multi-architecture builds, and ultra-small deployable runtimes where provenance and SBOM detail matter a lot.

Just as important, this is a story about community process. The speakers are candid about what works well and what still needs refinement, from mailing-list driven contribution flow to newer GitHub-style expectations, and from volunteer patch flow to paid maintainers, release management, and LTS coordination funded by members. Filmed at Embedded World 2026 in Nuremberg, the video ends up showing Yocto as a mature, open, vendor-neutral build ecosystem for embedded Linux, where security, reproducibility, board enablement, and long-term support are all tied together in one stack.

source https://www.youtube.com/watch?v=YPjoayYbosQ

Renesas RZ/V2H and RZ/V2N Robotics Demo, Gesture AI, Voice Control, ROS 2

Posted by – March 12, 2026
Category: Exclusive videos

Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup combines an off-the-shelf dexterous hand with an RZ/V2H board, where a camera tracks human hand gestures, runs local inference, and maps the result to motors and axes so the robot hand mirrors the operator in real time. It is a practical example of embedded vision, gesture recognition, motor control, and low-latency human-machine interaction coming together on one platform. https://www.renesas.com/en

What makes the RZ/V2H part interesting here is not just raw AI throughput, but the system balance behind it. Renesas positions it for robotics and vision AI with multicore processing, DRP-AI acceleration, image-processing capability, and support for multiple camera streams, which fits workloads such as hand tracking, perception fusion, and coordinated motion. In this context the demo is less about a robotic hand alone and more about how sensor input, inference, and actuator control can be collapsed into a compact edge robotics design.

The second demo shifts toward collaborative robotics and tool assistance. Here, a robotic arm based on the RZ/V2N platform accepts both voice commands and hand gestures, running in a ROS 2 architecture to identify a requested tool, move to the right position, and present it to the operator. That makes the story broader than vision AI: it becomes a multimodal interface problem involving speech, gesture, robot middleware, task flow, and safe human-robot collaboration on the edge.

The role of MXT, the board partner behind this demo, adds another useful layer, because this is not only a silicon story but also an ecosystem story. As a Renesas preferred partner, MXT has worked with Renesas across modules, evaluation kits, and custom boards, and the board shown here is described as a Raspberry Pi form factor design that can work with existing expansion hardware. That matters for faster prototyping, easier integration, and lower friction when developers want to move from proof of concept to a more product-like robotics platform.

Seen from Embedded World 2026 in Nuremberg, these demos reflect where industrial and service robotics are heading: more cameras, more AI models, more joints, more natural interfaces, and tighter integration between Linux, ROS 2, vision pipelines, and motor control. The most useful takeaway is not hype around humanoids, but the way Renesas is stacking practical building blocks for gesture-controlled manipulators, voice-driven cobots, and embedded robot perception where latency, power, and system cost still matter.

source https://www.youtube.com/watch?v=-9ba3hnz_ek

Renesas Robotics Sensor Tech at Embedded World 2026, Edge AI, Force Sensing, Predictive Maintenance

Posted by – March 12, 2026
Category: Exclusive videos

Renesas frames this demo around sensing as a core building block for edge AI, robotics, mobility, and industrial automation. The focus is not on one isolated component but on how force sensing, position sensing, impedance sensing, and low-footprint embedded intelligence can be combined into compact actuator and HMI designs that are precise, robust, and realistic to scale in production. https://www.renesas.com/IPS

The robotic hand is a good example of that direction. Instead of simple fingertip touch, the demo shows full-finger force measurement, so grip strength and the force curve over time can be tracked as the grasp develops. That matters for dexterous manipulation, safe human-robot interaction, and more natural motion control, where the system must regulate pressure finely enough to hold fragile objects without instability or slip.

A second theme is robotic joint feedback. Renesas positions inductive, magnet-free sensing as a practical fit for humanoid and industrial robot joints because it can deliver absolute position information, high resolution, immunity to stray magnetic fields, and better robustness against moisture, vibration, dust, and electromagnetic disturbance. That lines up with the company’s newer inductive position sensor push, including parts such as the RAA2P3226 for robotic joints, where compact integration, low latency, and tight angular accuracy are critical for servo control and coordinated motion.

The mobility demo extends that sensing approach into the human-machine interface. The scooter handle detects whether both hands are present using impedance sensing rather than conventional capacitive touch, which improves operation with gloves and in humid or wet conditions. Renesas is also emphasizing more complete reference algorithms around these sensors, so OEMs can tune sensitivity and recognition behavior in software without starting from scratch, which is often what product teams need when time-to-design is tight.

The final part of the video is about edge intelligence in a more literal sense: sensor data processed locally on a modest 32-bit microcontroller to infer things that are not directly measured, such as leakage, friction, or load change for predictive maintenance. That is a useful distinction in industrial sensing because it keeps latency, memory demand, power budget, and system cost under control while still enabling condition monitoring. Filmed at Embedded World 2026 in Nuremberg, the demo shows Renesas pushing sensors beyond raw measurement toward embedded perception for robotics, micromobility, and Industry 4.0.
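The "infer what you don't measure" idea fits in very little code, which is the point of running it on a modest MCU. A minimal sketch: track a running baseline of a measured signal (say, actuator current) with an exponential moving average and flag sustained drift as possible friction or leakage. Thresholds and data are illustrative, not from the Renesas demo.

```python
# Fixed-memory drift detector: an EMA baseline plus a deviation threshold.
# Cheap enough for a 32-bit MCU; constants below are illustrative.

def make_drift_detector(alpha=0.05, threshold=0.5):
    state = {"baseline": None}
    def step(sample):
        if state["baseline"] is None:
            state["baseline"] = sample
        # exponential moving average: one multiply-add per sample
        state["baseline"] += alpha * (sample - state["baseline"])
        return abs(sample - state["baseline"]) > threshold
    return step

detect = make_drift_detector()
healthy = [1.0, 1.02, 0.98, 1.01] * 5
worn = healthy + [2.2, 2.3, 2.25]       # sudden rise in load current
print(any(detect(s) for s in worn))     # -> True
```

Because the slow EMA tracks normal wander but lags abrupt shifts, the same few bytes of state separate noise from a genuine load change.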

source https://www.youtube.com/watch?v=qjhmr43MScA

Lantronix Open-M 720G/520G drone AI compute, thermal imaging and Pixhawk integration

Posted by – March 12, 2026
Category: Exclusive videos

Lantronix is showing how a compact edge-AI compute module can turn a drone platform into something closer to an OEM-ready reference design than a simple demo. The focus here is the new Open-M 720G and 520G system-on-modules based on MediaTek Genio 720 and 520, aimed at getting UAV developers from evaluation to flight tests quickly with onboard vision, control and sensor integration in one low-power stack. https://www.lantronix.com/products/open-m-720g-520g-som-system-on-module/

What makes this interesting is not just the module itself, but the system architecture around it. In the demo, Lantronix ties the SOM into a FLIR thermal camera path and a Pixhawk flight controller, creating a practical platform for inspection, surveillance and infrastructure monitoring. That matters because drone makers often need a starting point that already solves camera I/O, flight-control interfacing and edge inference, so they can spend more time on mission logic, autonomy and payload design.

Technically, the Genio 720 and 520 class stands out for delivering up to 10 TOPS of AI performance in a very constrained power envelope. Lantronix positions the platform at roughly 4 to 10 watts for typical usage, which is a meaningful number in UAV design where propulsion already dominates the energy budget. The point is not raw benchmark leadership, but usable on-device AI without the thermal and battery penalties that come with moving to 20, 30 or 40 watt compute tiers. For drones, that tradeoff can decide whether a mission lasts close to an hour or drops toward the 20 to 30 minute range.
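The flight-time argument is simple division, but it is worth seeing with numbers. Battery capacity and hover power below are assumed round figures for a small industrial UAV, not Lantronix data.

```python
# Why the compute power tier matters: with propulsion dominating the
# budget, compute watts come straight out of flight minutes.
# Battery and hover figures are assumptions, not Lantronix numbers.

def flight_minutes(battery_wh, propulsion_w, compute_w):
    return 60.0 * battery_wh / (propulsion_w + compute_w)

battery = 30.0   # Wh, assumed pack for a compact UAV
hover = 50.0     # W, assumed average propulsion draw

for compute in (5.0, 30.0):
    print(f"{compute:>4.0f} W compute -> "
          f"{flight_minutes(battery, hover, compute):.1f} min")
```

With these assumptions, a 5 W compute tier leaves roughly 33 minutes aloft while a 30 W tier drops the same airframe toward 22, which is the order of tradeoff the interview describes.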

The 720G and 520G mainly separate on imaging capability rather than core AI class, with the 720G supporting more camera processing through a dual-ISP style configuration while the 520G fits simpler single-ISP designs. That makes the pair relevant for manufacturers building regional alternatives to DJI-style platforms, especially where thermal imaging, multi-camera sensing, operator-assisted autonomy and fleet workflows matter more than consumer drone features. Filmed at Embedded World 2026 in Nuremberg, this interview is really about edge compute efficiency, modular drone design and how low-power AI silicon is becoming a practical foundation for industrial UAVs.

source https://www.youtube.com/watch?v=BBdLp7FBkd4

Innocomm MediaTek Genio 360P Multi-Camera Edge AI, DMS and Gesture Recognition

Posted by – March 12, 2026
Category: Exclusive videos

Innocomm presents a practical edge AI vision platform built around the MediaTek Genio 360P and Genio 360, showing how a system integrator and module maker can turn a reference SoC into a deployable multi-camera product. The demo is less about a single benchmark and more about system balance: camera input, AI pipeline scheduling, thermal behavior, and a usable module strategy for OEM and embedded designs. https://www.innocomm.com/

What stands out in this setup is concurrent inference across four camera streams with six computer-vision workloads running on one device. The applications mentioned in the demo cover driver monitoring, face detection and face matching, pose estimation, fall detection for elderly-care scenarios, gesture recognition, object detection, and missing-item or left-behind-belonging detection. That makes the platform relevant for smart mobility, public-space analytics, safety systems, and AIoT endpoints where several perception tasks need to run in parallel rather than one at a time.

The technical story is also about resource management. On screen, the demo exposes frame rate, compute loading, and temperature while models are enabled or disabled, showing how performance can be redistributed dynamically across workloads. That matters in real deployments, because edge AI products live or die by sustained throughput, memory bandwidth, and thermal envelope, not just peak TOPS figures. Around the Genio 360 family, MediaTek is positioning a 6nm edge AI platform with a hexa-core CPU architecture and integrated NPU capability, while Innocomm extends that into modules and standard products that also span MediaTek Genio 720 and 520 options for broader design scaling.

Rather than presenting AI as a vague feature, this video shows a fairly concrete embedded vision stack: multi-camera input, real-time inference, modular hardware, and deployable use cases with clear commercial logic. Filmed at Embedded World 2026 in Nuremberg, it gives a good look at how MediaTek ecosystem partners such as Innocomm are packaging edge perception into evaluation kits and modules that can move from demo to product with relatively little architectural change.

source https://www.youtube.com/watch?v=Zt8BUChd38E

Linaro CoreCollective at Embedded World 2026, ONEBoot, AMI Meridian, Yocto, Arm firmware lifecycle

Posted by – March 12, 2026
Category: Exclusive videos

Linaro’s demo focuses on something that usually stays invisible until it breaks: firmware lifecycle management on Arm devices. The discussion here is about making BIOS and boot firmware less of a one-time “flash and forget” step and more of a maintained software layer, with repeatable build, test, verification, SBOM tracking, vulnerability management, and long-term updates for devices running either Linux or Windows on Arm. https://www.linaro.org/

A key point is the split between ACPI-based firmware for Windows on Arm and Device Tree based firmware for Linux, and how Linaro and AMI are trying to manage both from one workflow. The demo combines AMI Meridian, Aptio V UEFI, and Linaro ONEBoot on the same ADLINK OSM-IMX93 platform, showing how a single board can boot Windows 11 IoT or Yocto Linux while keeping the firmware path standardized, security-aware, and easier to maintain over time.

That matters because firmware sits below the operating system and carries higher privilege than user space or even the kernel. If the firmware layer is weak, OS hardening only goes so far. The interview makes that practical: CVE monitoring, SBOM generation, software supply chain visibility, and CRA-oriented compliance are no longer just enterprise server topics, but increasingly part of embedded and IoT product maintenance. This video was filmed at Embedded World 2026 in Nuremberg, where that regulatory angle is clearly shaping how vendors present embedded platforms.

The other thread in the video is Linaro’s broader services model around Arm software enablement. Beyond firmware, the booth also covers Yocto build analysis, license and IP compliance, upstream kernel support, virtualization with virtio, and practical pathways for keeping deployed products supportable in the field. The newly launched CoreCollective also comes up as a free-to-join industry forum backed by Arm, intended to gather OEMs, ODMs, silicon vendors, and software stakeholders around shared engineering problems rather than isolated one-off fixes.

The final section on training is also worth noting because it connects theory to real hardware. Linaro is rebuilding its training offering around firmware, TF-A, U-Boot, Linux kernel, and Yocto, with remote lab access through its automation appliance, serial console, remote power control, OTG boot, and camera-monitored boards. That makes the pitch broader than a firmware demo alone: standardized boot flows, upstream-first engineering, CRA readiness, and hands-on enablement for teams building Arm products that need to stay secure and maintainable after shipment.

Linaro Unified Firmware Lifecycle, ONEBoot, AMI Meridian, Windows and Linux on Arm
Linaro ONEBoot, SBOM, CVE and CRA Compliance

source https://www.youtube.com/watch?v=aRIs9YZfkH0

Forlinx Edge AI on i.MX 95 and Ara240, RK3588 Multi-Camera Vision, Modular SoMs

Posted by – March 12, 2026
Category: Exclusive videos

Forlinx presents itself here as more than a module vendor. The interview is really about how an embedded hardware company is moving up the stack into edge AI integration, combining SoM design, carrier boards, manufacturing, software enablement, model conversion, and deployment support. The main message is that Forlinx wants to shorten the path from silicon vendor roadmap to a production-ready embedded AI platform, whether the target is industrial vision, smart gateways, robotics, or local multimodal inference. https://www.forlinx.net/

The headline demo pairs an NXP i.MX 95 platform with the Ara240 M.2 AI accelerator, creating a hybrid edge AI system that mixes the i.MX 95’s local vision, graphics, security and low-power processing with an external 40 eTOPS accelerator for larger models. In the discussion, that translates into local image understanding and natural-language analysis without relying on cloud inference, including a 7B-class LLM workflow and token generation around 20 tokens per second. That combination is interesting because it shows a practical split between on-chip NPU inference and a higher-throughput PCIe add-in path for generative AI at the edge.
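
The quoted ~20 tokens per second is plausible under the common rule of thumb that autoregressive decode is memory-bandwidth bound: each generated token streams the full weight set, so throughput ≈ effective bandwidth ÷ model bytes. A minimal sketch with illustrative numbers, not Forlinx or Ara240 specifications:

```python
# Rough sanity check of the quoted ~20 tokens/s for a 7B-class model.
# Assumption: decode is memory-bandwidth bound, so
#   tokens/s ≈ effective_bandwidth / model_bytes
# All figures below are illustrative, not measured hardware specs.

def decode_tokens_per_sec(params_billions: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# A 7B model quantized to ~1 byte/param streams ~7 GB of weights per token.
for bw in (70, 140, 280):  # hypothetical effective bandwidths in GB/s
    print(f"{bw} GB/s -> {decode_tokens_per_sec(7, 1.0, bw):.0f} tok/s")
```

Under this assumption, hitting 20 tok/s on a 7B 8-bit model implies roughly 140 GB/s of effective bandwidth through the accelerator.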

A second thread in the video is platform scaling. Forlinx talks about using the i.MX 95’s own NPU for front-end recognition and then handing richer tasks to the accelerator, while also pointing to multi-card configurations for larger parameter counts. That makes the story less about one benchmark and more about architecture: modular edge AI, where compute can be right-sized from compact fanless designs up to multi-accelerator systems, depending on camera count, model size, latency target, and power budget.

The Rockchip side of the booth broadens that picture. RK3588 appears as a mature edge vision platform handling multi-camera workloads, PoE-connected inference pipelines, stitching, and video-centric AI optimization across encode, decode, and NPU execution. There is also a smaller RV1126B face-tracking demo showing how low-power Cortex-A53 class systems with an integrated NPU can still deliver responsive, fanless vision tasks. What stands out is not just chip support, but the engineering work behind BSP tuning, driver maturity, model adaptation, and layer-level optimization for real deployments.

Later in the video, the discussion shifts to pin-compatible module design, ODM work, early access to new SoCs, Linux support, and close collaboration with NXP, Rockchip, TI and Allwinner. That makes this less of a product showcase and more of a view into how embedded AI is being industrialized: standardised compute building blocks, faster bring-up, tighter software-hardware co-design, and a clearer route from demo to mass production. The video was filmed at Embedded World 2026 in Nuremberg, where Forlinx framed edge AI as a system integration problem as much as a silicon one.

Forlinx i.MX 95, Ara240 and RK3588 Edge AI for Vision and Local LLMs

Forlinx Embedded AI Platforms with i.MX 95, Ara240 and RK3588

source https://www.youtube.com/watch?v=W6M4m0LBciw

Renesas 365 Launched at Embedded World 2026: MCU selection, BSP scaffolding, fleet management

Posted by – March 12, 2026
Category: Exclusive videos

Renesas 365 is presented here as a cloud-native engineering platform that tries to connect system architecture, embedded software, PCB design, and operational lifecycle management inside one continuous workflow. The core idea is not just collaboration in a browser, but persistent digital context: design intent, interface requirements, device choices, and implementation details stay linked instead of being scattered across diagrams, spreadsheets, datasheets, and isolated toolchains. That makes the discussion less about a single MCU and more about how a smart connected product is specified, built, updated, and maintained across its full life cycle. https://www.renesas.com/renesas365

The balancing-robot demo makes that concept concrete. Renesas shows how a product can begin as a system-level model, where interfaces between controller, sensors, connectivity, and peripherals become machine-readable constraints rather than static drawing objects. In the demo, Electronic System Design captures those constraints and feeds them into RA Explorer, which evaluates the RA MCU family at scale, including peripheral allocation, channel mapping, and pin multiplexing. Instead of manually checking hundreds of parts and reconciling conflicts one by one, the platform narrows the candidate list in seconds and regenerates a valid configuration when requirements change, such as adding CAN.
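
As a toy illustration of what constraint-driven part selection does (not Renesas' actual RA Explorer engine, and with hypothetical part data), narrowing a candidate list and re-filtering when a requirement such as CAN is added can be sketched as:

```python
# Toy constraint filter in the spirit of the selection flow described above.
# Part data and requirement keys are hypothetical, not real RA family specs.

CANDIDATES = [
    {"part": "MCU-A", "can": 0, "adc_ch": 8,  "pins": 48},
    {"part": "MCU-B", "can": 1, "adc_ch": 12, "pins": 64},
    {"part": "MCU-C", "can": 2, "adc_ch": 16, "pins": 100},
]

def shortlist(parts, requirements):
    """Keep only parts that meet every minimum requirement."""
    return [p for p in parts
            if all(p.get(k, 0) >= v for k, v in requirements.items())]

# Initial spec, then the "adding CAN" change mentioned in the interview:
print([p["part"] for p in shortlist(CANDIDATES, {"adc_ch": 8})])
print([p["part"] for p in shortlist(CANDIDATES, {"adc_ch": 8, "can": 1})])
```

The point of the real tool is that these requirements come from the machine-readable system model, so the shortlist regenerates automatically when the model changes.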

What stands out technically is the handoff from system model to software scaffolding. Once the device configuration is resolved, Renesas 365 can generate the basis of a board support package and assemble the low-level driver stack around the selected peripherals, including connectivity layers such as Wi-Fi. That is the real productivity claim here: not only component discovery, but carrying configuration intent downstream into embedded implementation. For MCU teams dealing with pinmux limits, package variants, and software-stack assembly, that removes a large amount of repetitive engineering work and shifts attention toward architecture, trade-off analysis, and application behavior at the edge.

The wider roadmap matters just as much as the live demo. Renesas has been positioning Renesas 365, powered by Altium, as a full electronics-system platform spanning silicon, discovery, development, lifecycle, and software, with broader lifecycle services around digital traceability, secure OTA/OTAA infrastructure, and fleet-oriented management. In the interview, that future direction also extends toward behavioral modeling, power and memory budgeting, AI-assisted code generation, debugging, and API-level access for external tools and autonomous agents. Filmed at Embedded World 2026 in Nuremberg, the conversation frames the launch as part of a larger shift from isolated EDA and firmware workflows toward a more platform-based electronics-development stack.

Another important point is openness. Renesas is clearly strongest when modeling its own silicon, but the demo also shows third-party components in the design flow, and the company describes a roadmap where partners can publish hardware, software, and subsystem models into the environment. That makes Renesas 365 less about locking engineers into a single vendor bill of materials and more about giving mixed-vendor embedded teams a shared design surface with traceable context. For anyone building software-defined industrial or IoT products, the interesting question is not whether this replaces every existing tool on day one, but how far it can reduce manual integration friction between architecture, firmware, board design, update infrastructure, and fleet operation at scale.

source https://www.youtube.com/watch?v=62XKBA4x7ts

Looking Glass musubi holographic photo frame converts photos & videos to HLD holograms Kickstarter

Posted by – March 11, 2026
Category: Exclusive videos

musubi is a new holographic photo and video frame developed by Looking Glass that converts ordinary photos and short video clips into holograms with visible depth. The device is designed as a simple consumer product that works with media people already have, including photos from phones or older scanned pictures. Conversion happens locally through a desktop application that reconstructs depth using machine learning and loads the hologram directly onto the frame. It requires no cloud connection or subscription, and all media is stored locally on the device. https://look.glass/musubi

The idea behind musubi is to make holographic displays practical for everyday use at home. Many people store thousands of photos and videos that are rarely revisited once they disappear into phone galleries or cloud folders. By transforming those flat images into holographic scenes with depth, the frame attempts to recreate moments with more spatial presence than traditional digital photo frames. Weddings, family memories, pets, and travel clips can be converted into short holographic scenes that play directly on the display.

The workflow is intentionally simple. Users connect the frame to a Mac or PC using USB-C, select photos or short video clips up to thirty seconds long, and run the conversion tool in the Looking Glass desktop software. The application generates a 3D scene from the original media and loads it into the device storage. Each frame can hold around one thousand holograms and includes a built-in speaker for video playback, allowing clips to run with sound.

The hardware includes a 7-inch Hololuminescent Display with roughly two inches of perceived depth. The frame has an internal rechargeable battery rated for about three hours of operation or can run continuously when powered through USB-C. All playback works offline once the media has been converted and loaded. The device includes simple controls for power, volume, and switching between stored holograms.

For creators and developers there are additional tools available beyond the standard workflow, including support for Gaussian splat imports as well as plugins for Unity, Unreal Engine, and Blender. Motion graphics templates for Adobe Premiere Pro and After Effects can also generate compatible holographic content. This demonstration was filmed at Embedded World 2026 in Nuremberg where Looking Glass presented musubi as a smaller consumer counterpart to its larger holographic displays used in developer and enterprise environments.

Looking Glass musubi holographic photo frame demo HLD display converts photos and videos
Looking Glass musubi holographic frame turns photos and videos into 3D holograms
Looking Glass musubi holographic display photo and video frame Kickstarter demo

source https://www.youtube.com/watch?v=3_ZKcVEi5Yk

Siemens industrial AI hub Booth Tour at SPS 2025 digital twin, copilots and agentic robots

Posted by – March 2, 2026
Category: Exclusive videos

Siemens uses this booth tour to show how its industrial AI strategy connects automation hardware, engineering software and domain-specific copilots into one digital enterprise stack. From the central Industrial AI Hub, Tsvetelina Nikolova explains how manufacturers can merge real-world production assets with a comprehensive digital twin, then run “what-if” scenarios across the entire lifecycle to optimize design, throughput and energy use. The focus is on leveraging Siemens Xcelerator, Industrial Operations X and Industrial Edge to turn heterogeneous shop-floor data into a consistent, AI-ready data fabric that spans OT and IT. https://www.siemens.com/global/en/products/automation/topic-areas/industrial-ai.html


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

On the design and engineering side, the tour highlights generative AI embedded directly into NX and the new family of Industrial Copilots. Here, engineers can ask natural-language questions about CAD models, get design variants for components like TV wall mounts, or have NX CAM Copilot propose optimized toolpaths for complex parts. The Engineering Copilot TIA, tightly integrated with TIA Portal, lets automation engineers describe intents instead of writing or searching through PLC code, automating configuration tasks and documentation across projects. This reduces repetitive work, accelerates commissioning and makes it easier for new engineers to contribute quickly to established control architectures, improving productivity across the engineering workflow.

In operations, the video zooms in on Insights Hub, Siemens’ industrial IoT platform that aggregates sensor, PLC and MES data and exposes it through dashboards and a built-in copilot. Operators can use conversational queries to check stock levels in the MES, configure machines for short product runs with multiple variants, and orchestrate workflows textually rather than through custom scripts. The same data backbone feeds asset intelligence and predictive maintenance, illustrated by a BlueScope steel case where a digital twin “fingerprint” of critical assets is compared continuously with live data to detect deviations and trigger proactive interventions, avoiding roughly 2,000 hours of unplanned downtime. Together, these examples show how industrial AI copilots move from nice-to-have dashboards to closed-loop decision support that protects throughput and uptime.
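
The "fingerprint" comparison described here can be sketched as a simple tolerance check of live readings against a digital-twin baseline; the signal names and thresholds below are invented for illustration, not Siemens or BlueScope values:

```python
# Minimal sketch of fingerprint-based deviation detection for predictive
# maintenance: flag any live signal drifting outside tolerance versus a
# digital-twin baseline. Signals and numbers are hypothetical.

BASELINE = {"bearing_temp_c": 62.0, "vibration_mm_s": 2.1, "motor_current_a": 41.0}
TOLERANCE = {"bearing_temp_c": 5.0, "vibration_mm_s": 0.8, "motor_current_a": 4.0}

def deviations(live: dict) -> list:
    """Return the signals outside tolerance relative to the fingerprint."""
    return [k for k, v in live.items()
            if abs(v - BASELINE[k]) > TOLERANCE[k]]

live = {"bearing_temp_c": 69.5, "vibration_mm_s": 2.3, "motor_current_a": 40.2}
print(deviations(live))  # bearing temperature has drifted -> proactive intervention
```

A production system would of course compare full time-series signatures rather than point values, but the closed-loop idea is the same: baseline, compare, intervene before failure.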

The second half of the tour steps into Siemens’ “future” zone, where agentic AI and autonomous production concepts are on display. A robot cell is configured as an example of how autonomous agents could handle configuration, scheduling and execution of tasks, while an orchestrator agent coordinates specialized agents for planning, quality, logistics and energy optimization. Rather than replacing humans, Siemens positions these agentic systems as collaborators that take over low-level reconfiguration work so engineers and operators can focus on high-value problem solving, governance and safety. This aligns with Siemens’ broader push toward industrial foundation models and AI agents that can reason over engineering data, shop-floor events and business constraints across the wider industry.

Filmed in the Siemens hall at SPS 2025 in Nuremberg, the video also touches on how the company extends this experience beyond the physical stand through live talks and a persistent virtual booth. Nikolova stresses that AI-driven factories are still built around human decision-makers, with copilots and agents acting as transparent, explainable tools rather than opaque black boxes. For younger engineers, that means fewer hours on translation, documentation and repetitive configuration, and more time on creative tasks like new machine concepts or process improvements. The result is a glimpse of how industrial AI, digital twins and autonomous agents may reshape factory work over the coming years, while keeping human expertise firmly at the center.

source https://www.youtube.com/watch?v=FpkAXAdHaEI

nVent Google Project Deschutes 5.0 CDU, CX121 liquid cooling for AI data center racks Nvidia GB300

Posted by – February 25, 2026
Category: Exclusive videos

nVent is positioning its liquid cooling portfolio as core infrastructure for AI and high-performance data centers, starting with the new Project Deschutes 5.0 coolant distribution unit based on Google’s open OCP specification. The unit is a 2 MW, 500 gpm, high-pressure liquid-to-liquid CDU with N+1 sealless pumps, low-harmonic VFD drives and 3°C approach temperature, engineered to support Google’s seventh-generation TPU “Ironwood” and other high-density chips at scale while staying within tight thermal envelopes and electrical constraints. https://www.nvent.com/en-us/data-solutions/liquid-cooling



In the video, Matt Archbald walks through how this Deschutes CDU is tuned to Google’s operating point: single-phase water-based secondary loops, up to 65–80 psi design pressure, and about 60 kW of electrical input to move roughly 2 MW of heat away from the racks. Ultra-low harmonic drives allow the CDU to share the same power rails as the IT load, avoiding extra electrical infrastructure. N+1 filtration with individually isolatable filters means maintenance can be done live, without shutting down the CDU or impacting TPU clusters and other liquid-cooled nodes. Variable-speed pumps make it possible to support lower-pressure environments and non-Google chips simply by shifting along the pump PQ curve.
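
The quoted figures hold together under basic heat-transfer arithmetic (Q = ṁ·cp·ΔT for a water loop). The back-of-envelope check below is illustrative, not an nVent datasheet calculation:

```python
# Sanity-check arithmetic on the quoted CDU figures (2 MW at 500 gpm,
# ~60 kW electrical input), using Q = m_dot * cp * dT for water.

GPM_TO_KG_S = 3.785 / 60           # US gallons/min -> kg/s (water, ~1 kg/L)
CP_WATER = 4186                    # specific heat, J/(kg*K)

flow_kg_s = 500 * GPM_TO_KG_S      # ~31.5 kg/s through the loop
heat_w = 2e6                       # 2 MW of heat to move
dT = heat_w / (flow_kg_s * CP_WATER)

print(f"loop temperature rise: {dT:.1f} K")             # ~15 K across the loop
print(f"heat moved per watt of input: {heat_w / 60e3:.0f}x")  # ~33x the 60 kW
```

In other words, 500 gpm at 2 MW implies roughly a 15 K rise across the secondary loop, and moving 2 MW of heat for ~60 kW of pumping and drive power is about a 33:1 ratio, which is what makes liquid-to-liquid CDUs attractive at this scale.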

nVent also shows the CX121, a “for the masses” row-based CDU platform in the 1.5–1.75 MW range with three pumps and true N+1 redundancy at a 4°C approach. The CX121’s power architecture is configurable as single, dual, three-feed or four-feed, enabling architectures such as “4 feeds makes 3” for Nvidia and AMD racks and reducing the need for extra CDUs as pure failover. Liquid quality monitoring, leak detection and telemetry are built in, while pump modules bundle pump, filter and drive into a hot-swappable 750 lb cartridge that can be changed by a single technician in under half an hour, keeping service windows short in large AI data halls.
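
The "4 feeds makes 3" idea is standard N+1 sizing: provision one more feed than the load requires so any single feed can fail without dropping the system. A minimal sketch with hypothetical ratings, not CX121 specifications:

```python
# Illustration of N+1 feed sizing ("4 feeds makes 3"): any one feed can
# fail while the remaining feeds still carry the full load.
# Feed ratings and load are hypothetical.

def survives_single_failure(n_feeds: int, feed_kw: float, load_kw: float) -> bool:
    """True if (n_feeds - 1) feeds still cover the full load."""
    return (n_feeds - 1) * feed_kw >= load_kw

# Four feeds, each rated for a third of a 1500 kW load:
print(survives_single_failure(4, 500, 1500))  # full redundancy, no spare CDU
# Three feeds of the same rating leave no headroom for a failure:
print(survives_single_failure(3, 500, 1500))
```

The practical payoff is the one the interview names: redundancy comes from feed sizing inside one unit instead of parking a whole extra CDU as pure failover.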

Around the booth, the discussion zooms out to full-stack infrastructure. nVent demonstrates smart power distribution units with polling and streaming telemetry, enabling power analytics, threshold-based smart alerting and automated load shedding when temperatures or currents approach critical limits. On the thermal side, the portfolio spans in-rack CDUs for Nvidia MGX, GB200 and GB300 configurations, OCP ORv3 and enterprise racks, rear-door heat exchangers for capturing residual air-side heat, and a liquid-to-air sidecar that uses the data hall’s airstream when facility water is not available. Overhead, the modular Technology Cooling System (TCS) network uses pre-defined manifold lengths, flexible interconnects and seismic bracing so coolant distribution to each rack remains resilient and easy to commission.

Filmed at Supercomputing 2025 (SC25) in St. Louis, the conversation also touches on lifecycle management and environmental impact. nVent emphasizes closed-loop secondary circuits that avoid evaporative losses, glycol-based formulations tuned to approach the thermal performance of water, and a partnership with Valvoline for global fluid supply, monitoring and end-of-life recycling. By standardizing around open specifications like Google’s Project Deschutes 5.0 and combining CDUs, manifolds, sidecars and rear doors with services and telemetry, nVent presents a coherent path to megawatt-class racks, 500 W+ chips and hybrid air/liquid deployments without requiring a complete data center rebuild.

Publishing 50+ videos from Supercomputing 2025 (SC25, St. Louis), and from other recent events, about 4 per day at 5AM, 11AM, 5PM and 11PM CET/EST.
Join https://www.youtube.com/charbax/join for early access to all my queued videos early.

Watch my full SC25 playlist:
https://www.youtube.com/playlist?list=PL7xXqJFxvYvihnaq98TO55Cbe2VMD9mk8

source https://www.youtube.com/watch?v=l9l5m4y8zYg

Layer Canvas square QLED: 10,000 zones + Dragonwing QCS8550 GPU, live generative art

Posted by – February 21, 2026
Category: Exclusive videos

Layer presents Canvas as a combined hardware display and curated content platform aimed at museum-grade digital art: a square-format QD/QLED panel with a dense miniLED backlight and 10,000 individually controlled local-dimming zones, tuned for high contrast, low blooming, and a matte, low-reflection front surface so the image holds up from wider viewing angles. The core idea is to treat the screen like a true “canvas” for generative and code-based work rather than a TV that sometimes shows art. https://layer.com/canvas



A big part of the demo is that many pieces aren’t pre-rendered video loops: they run live, with every pixel computed on the GPU in real time, which lets artists expose parameters (amplitude, exponent, speed, palette logic, randomness seeds) for near-infinite variations without repeating the same frame sequence. Viewers can trigger variants wirelessly from a phone or tablet, and some works can ingest ambient data so the art responds to context rather than staying fixed.

On the display side, the 10,000-zone approach behaves like “selective darkness”: backlight zones behind black areas shut down while only the lit regions stay energized, pushing perceived black levels closer to emissive displays while avoiding some long-term burn-in concerns that matter for always-on installation. The unit also integrates light sensors to auto-match room brightness for day/night usage, and that sensor data can be made available to the artwork logic for adaptive behavior in situ.
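
The "selective darkness" behavior can be sketched as a toy local-dimming pass that drives each backlight zone from the brightest pixel behind it; the zone count and frame below are illustrative, not the actual 10,000-zone controller logic:

```python
# Toy local-dimming sketch: each backlight zone takes the luminance of the
# brightest pixel it covers, so zones behind black content switch off.
# Zone grid and image data are illustrative only.

def zone_levels(image, zones_x, zones_y):
    """image: 2D list of luminance values 0..1; returns per-zone levels."""
    h, w = len(image), len(image[0])
    zh, zw = h // zones_y, w // zones_x
    levels = []
    for zy in range(zones_y):
        row = []
        for zx in range(zones_x):
            block = [image[y][x]
                     for y in range(zy * zh, (zy + 1) * zh)
                     for x in range(zx * zw, (zx + 1) * zw)]
            row.append(max(block))  # brightest pixel sets the zone drive level
        levels.append(row)
    return levels

# A 4x4 frame that is black except for one bright quadrant:
frame = [[0.0, 0.0, 0.9, 1.0],
         [0.0, 0.0, 0.8, 0.9],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
print(zone_levels(frame, 2, 2))  # only the top-right zone stays energized
```

Real controllers add halo suppression and temporal filtering on top, but the core energy and contrast win comes from exactly this: zones behind black simply stop emitting.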

The system uses a Qualcomm Dragonwing compute platform (highlighted at the Qualcomm booth during CES), chosen for a strong GPU pipeline while staying quiet enough for living spaces and gallery installs. Layer positions the form factor itself as part of the art argument: a square aspect ratio that breaks away from 16:9 “TV framing,” plus minimalist industrial design (CNC aluminum back, clean installation options including wire-hanging so it can float in a space for 360-degree viewing).

The catalog focus is moving, generative digital art with an AI-driven curation mode that rotates pieces through the day and learns viewing preferences without leaning on invasive camera tracking; early experiments included mmWave-style presence sensing, but the approach discussed here favors privacy-safe signals (like Bluetooth presence and crowd-level heuristics) instead. The conversation also ties back to a long history in online digital-art communities and the push to make digital art displays feel native in galleries and homes, with this interview later echoed in the broader display conversation at ISE 2026 Barcelona as generative content meets high-end panel engineering in a single wall unit.

I’m publishing about 50+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

“Super Thanks” are welcome 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=J3uXdtjk54s

Bang & Olufsen Beo Grace: 12mm titanium driver, IP67, ANC, Dolby Atmos, NearTap

Posted by – February 19, 2026
Category: Exclusive videos

Bang & Olufsen’s Beo Grace is positioned as a true-wireless earbud where industrial design is treated as part of the acoustic platform: pearl-blasted aluminium housing, jewellery-like fit, and a compact case that’s meant to travel without looking like generic plastic. In this video, the focus is on how the physical build (materials, tolerances, seal) supports both comfort and consistent sound delivery. https://www.bang-olufsen.com/en/int/earphones/beograce



On the audio side, Beo Grace uses a 12 mm titanium dynamic driver and tuning that targets a clean, high-resolution presentation while keeping low-end controlled rather than boosted. The demo leans heavily on active noise cancellation performance, describing a “full isolation” effect, plus a transparency mode that’s meant to stay natural instead of sounding like a boosted microphone feed. Dolby Atmos spatial audio is also part of the story, aiming to widen the image and create a more externalised stage.

Interaction design matters here: instead of only relying on tiny buttons, Beo Grace uses touch controls and proximity-based gestures (often referenced as NearTap-style control) for volume and playback, so you can make adjustments without breaking the seal. The earbuds are rated IP67, which is unusually high for premium in-ears, and the aluminium charging case keeps the same material language as the earbuds rather than switching to coated plastic. The segment was filmed at ISE 2026 in Barcelona, so it’s presented in a show-floor context rather than a quiet studio.

A useful technical footnote is that ultra-premium earbuds still live within power and size limits: reports around Beo Grace point to shorter playback time with ANC enabled than mainstream rivals, while focusing on longevity via battery-management design (including partnerships around battery health and cycle life). If you’re comparing it to other B&O in-ears, think of Beo Grace less as a feature checklist and more as a materials + acoustics + ANC package aimed at a very specific listening workflow.

I’m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

“Super Thanks” are welcome 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=NzbDddd3BcU

Bang & Olufsen ISE 2026 Booth Tour: Landscape speaker, BeoLiving Intelligence, Atelier, Beolab 90

Posted by – February 14, 2026
Category: Exclusive videos

Bang & Olufsen’s booth walkthrough is a compact tour of how the brand thinks about architectural audio: treat speakers, TVs, and control as one connected system, then let integrators extend it into more rooms and more zones without changing the user experience. The focus is on multiroom orchestration, consistent latency/level behavior between zones, and making the “control layer” feel as deliberate as the industrial design. https://www.bang-olufsen.com/



A headline prototype on display is the forthcoming landscape speaker concept, positioned as Bang & Olufsen’s first fully in-house outdoor architectural speaker. The idea is to carry the same platform approach outdoors (gardens, terraces, hospitality courtyards) so an exterior zone behaves like any other room in the ecosystem, rather than a separate, bolt-on audio island. The discussion naturally lands on weather-facing materials, mounting options, and how integrators would specify outdoor coverage patterns alongside indoor listening areas.

A second thread is Bang & Olufsen Atelier, shown as a specification tool for bespoke finishes and one-off styling choices—useful when you’re matching wood, anodised aluminium tones, textiles, or architectural palettes. In practice, it’s about giving architects and installers a way to keep acoustics and aesthetics aligned, especially when a system includes statement pieces like the Beolab 90 in special editions such as Phantom and Mirage, where surface treatment and visual depth are part of the product story.

On the integration side, the booth also highlights BeoLiving Intelligence as the automation bridge between B&O products and wider smart-home ecosystems. The demos lean into real-world programming concepts—scenes, triggers, multiroom grouping logic, and feedback loops—plus newer AI-assisted programming ideas aimed at speeding up configuration and scaling to larger multiroom or commercial deployments. It’s essentially the “glue” layer that makes B&O AV behave predictably inside a broader control stack.

This video was filmed at ISE 2026 in Barcelona, and it’s a useful snapshot of what B&O is prioritising for integrators right now: outdoor architectural expansion via the landscape speaker concept, deep customisation via Atelier, high-end reference hardware like Beolab 90 editions, and system-level control via BeoLiving Intelligence and remotes such as BeoRemote Halo and BeoRemote One. The result is less about a single product and more about how a whole installation is specified, tuned, and controlled as one coherent audio experience.

I’m publishing about 50+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

“Super Thanks” are welcome 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=w8jWp_N3Mxw

Google ChromeOS enterprise update: Cameyo PWA for Windows apps, Gemini on Chromebook Plus, DLP

Posted by – February 13, 2026
Category: Exclusive videos

ChromeOS is being positioned as an enterprise-ready endpoint where AI features and security policy move together, rather than being bolted on later. In this chat, Craig Francis explains how Google is trying to tell a more complete “Gemini + security” story: Chromebook Plus devices can use on-device acceleration (Intel-, MediaTek- and Qualcomm-class platforms) for responsive AI tasks, while heavier requests can still run in the cloud when needed. https://chromeenterprise.google/products/chrome-enterprise-premium/

A big theme is removing adoption friction for organizations that still depend on Windows-era software. The discussion highlights Cameyo by Google as a way to package legacy Windows apps on a Windows cloud server and publish them as a Progressive Web App that launches from the app icon like a native program. The point is that users don’t deal with extra logins or visible virtualization layers; they just open the app, and the session is streamed from the server behind the scenes.
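For context, a Progressive Web App installs from a small manifest file that gives it a name, an icon and a standalone window, which is how a streamed legacy app can appear on the shelf like a native program. The fragment below is a generic web app manifest shape, not Cameyo’s actual output; the app name and paths are placeholders.

```json
{
  "name": "Legacy ERP Client",
  "short_name": "ERP",
  "description": "Illustrative example; names and URLs are placeholders.",
  "start_url": "/session/launch",
  "display": "standalone",
  "icons": [
    { "src": "/icons/erp-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

With `"display": "standalone"` the browser chrome disappears, so the streamed session looks like a local window; the PWA itself is just a thin shell that connects to the published app on the server.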

The other blocker is “Microsoft-first” workflows, and the messaging here is that ChromeOS can be a practical front end even when teams stay on Microsoft 365. The idea is single sign-on with Microsoft credentials, web-first Office access, and admin-managed policies that keep identity and data consistent while avoiding the unmanaged-browser problem that shows up when employees mix corporate work with random sites and third-party AI tools. This interview was filmed at ISE 2026 in Barcelona, where enterprise AV and workplace IT themes overlap more than ever.

Chrome Enterprise Premium is framed as the control plane for that browser reality: security visibility, phishing and malware defenses, and data loss prevention rules that can reduce risky copy/paste or uploads of sensitive content into unsanctioned services, including AI tools. Put together, the pitch is less about replacing everything with web apps overnight, and more about making ChromeOS a manageable, policy-driven client that can run web workloads, virtualized legacy apps, and selective on-device AI without breaking enterprise governance.
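On ChromeOS, data-loss-prevention rules of this kind are typically pushed as admin policy. The fragment below sketches the shape of a `DataLeakPreventionRulesList`-style rule that warns on clipboard paste from a corporate site into other destinations; treat the exact schema, URL patterns and level names as illustrative assumptions and check the current Chrome Enterprise policy reference before relying on them.

```json
[
  {
    "name": "Protect corporate data",
    "description": "Illustrative rule; URL patterns are placeholders.",
    "sources": { "urls": ["intranet.example.com"] },
    "destinations": { "urls": ["*"] },
    "restrictions": [
      { "class": "CLIPBOARD", "level": "WARN" }
    ]
  }
]
```

A warn-style level surfaces a prompt instead of silently blocking, which matches the “reduce risky copy/paste” framing in the interview rather than hard-stopping users.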

I’m publishing more than 75 videos from ISE 2026. Check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

“Super Thanks” are welcome 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=XRsgjD92Wto