Locofy AI Figma to React, Next.js, Flutter design-to-code automation for frontend teams

Posted by – December 8, 2025
Category: Exclusive videos

Locofy is an AI-powered design-to-code platform that turns production Figma and Penpot UI designs into clean, component-based frontend code for modern frameworks such as React 19, Next.js, Vue, Angular, Gatsby, HTML/CSS, React Native and Flutter. Instead of generating throwaway prototypes, its LocoAI Large Design Models focus on developer-friendly structure, responsive layouts and semantic markup that can drop straight into existing repositories. Teams plug it in as a Figma or Penpot plugin, then refine behaviour in the Locofy Builder and sync directly to GitHub or a VS Code workspace. https://www.locofy.ai/


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

Recorded at Web Summit Lisbon 2025, this interview with director of sales Shelby explains how Locofy fits into existing design workflows as a plugin rather than a black-box generator. Designers keep working inside Figma or Penpot while LocoAI tags layers, sets up interactions and responsiveness, and exposes every decision for review in an edit mode. Because Locofy generates only frontend UI code, it can be deployed in highly regulated environments, including on-premise or private cloud setups where source code control and security audits really matter to product teams.

On screen, Shelby walks through a design converted into React 19, but the same project can be exported to frameworks like Next.js, Gatsby, Vue, Angular or plain HTML/CSS, and to mobile code for React Native and Flutter, with Swift and Kotlin support on the roadmap for iOS and Android. Locofy supports team-specific design systems and UI libraries, mapping design tokens into Tailwind CSS, CSS Modules, styled-components or Sass while preserving component hierarchies and props. Generated code can be synced to GitHub, pulled into a VS Code extension, and then extended by other AI coding agents such as Gemini, Cursor, Copilot or Windsurf without breaking the underlying structure of the UI stack.
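
To make the idea of component-based output with preserved props concrete, here is a hypothetical sketch of the kind of typed React component with Tailwind utility classes that a design-to-code export might produce from a button layer; the component name, props and classes are invented for illustration and are not actual Locofy output.

```tsx
// Hypothetical illustration of design-to-code output: a "Primary Button" layer
// exported as a typed, reusable React component with Tailwind classes.
// Names and props are invented for this sketch, not actual Locofy output.
import React from "react";

type PrimaryButtonProps = {
  label: string;          // text layer content becomes a prop
  onClick?: () => void;   // interaction wired up in the builder/edit mode
  disabled?: boolean;     // variant toggle mapped from the design system
};

export function PrimaryButton({ label, onClick, disabled = false }: PrimaryButtonProps) {
  return (
    <button
      type="button"
      onClick={onClick}
      disabled={disabled}
      // Auto-layout and design tokens map to utility classes instead of absolute positions
      className="inline-flex items-center justify-center rounded-lg bg-blue-600 px-4 py-2 text-sm font-medium text-white hover:bg-blue-700 disabled:opacity-50"
    >
      {label}
    </button>
  );
}
```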

Locofy uses a token-based pricing model in which each design layer consumes one token when turned into code, so a simple signup page might be around 60 layers while an Airbnb-style multi-panel screen can reach 400 layers. Shelby explains that customers typically see 60–90 percent faster frontend implementation and around five times lower UI build costs, because engineers spend less time rebuilding pixel-perfect layouts and more time on business logic and API integration. For enterprise buyers, the platform emphasises ISO 27001 and SOC 2 compliance, strict separation of customer data from model training, and full ownership of all exported code, so teams can maintain it long term within their own budget.
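
A minimal sketch of how that pricing model adds up, using the layer counts quoted in the interview (one token per layer; the screen list is illustrative):

```ts
// One design layer = one token when converted to code, so a project's token
// need is simply the sum of its layer counts (screen names are illustrative).
const screens = [
  { name: "signup page", layers: 60 },
  { name: "multi-panel booking screen", layers: 400 },
];

const tokensNeeded = screens.reduce((sum, s) => sum + s.layers, 0);
console.log(`Estimated tokens for ${screens.length} screens: ${tokensNeeded}`); // 460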

Looking ahead, the team is beta-testing a design optimizer that cleans up messy files that don’t follow Figma best practices, so that generated code remains predictable and maintainable. A new product called UI Pro is designed to sit alongside so-called vibe coding tools, letting developers round-trip code between Locofy and their favourite AI copilots while keeping components and props in sync. By focusing narrowly on high-quality frontend UI generation and leaving data models, backend logic and deployment to existing stacks, Locofy positions itself as a pragmatic bridge between designers, developers and the broader AI-assisted development roadmap.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).

source https://www.youtube.com/watch?v=QQin64gfcCE

Ploggit social plogging platform with GPS, teams and points for urban litter cleanup

Posted by – December 7, 2025
Category: Exclusive videos

Ploggit is a mobile social network for “plogging” – combining walking, jogging or cycling with litter collection and geotagged reporting. The app lets you mark litter spots on a map, record how many grams of trash you remove, and estimate the associated CO2 impact so that every bag you fill becomes quantified environmental data instead of just a good deed. Users can capture photos, log sessions with GPS-based activity tracking and then share their impact through an in-app feed or external social platforms, turning everyday exercise into measurable, repeatable climate action. https://ploggit.com/
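
As a rough illustration of the kind of data each session generates, here is a hypothetical sketch of a cleanup-session record; the field names and the CO2 conversion factor are invented placeholders, not Ploggit’s actual data model:

```ts
// Hypothetical sketch of a plogging session record; names and the CO2 factor
// are illustrative placeholders, not Ploggit's actual schema.
type LatLng = { lat: number; lng: number };

interface CleanupSession {
  userId: string;
  startedAt: string;      // ISO 8601 timestamp
  route: LatLng[];        // GPS trace of the walk, jog or ride
  litterSpots: LatLng[];  // geotagged spots marked on the map
  gramsCollected: number; // weight of trash removed
  photoUrls: string[];    // optional photos of the haul
}

// Assumed illustrative factor: grams of CO2 counted as avoided per gram of
// litter kept out of mismanaged waste streams (placeholder value).
const CO2_GRAMS_PER_GRAM_LITTER = 2.5;

function estimateCo2Grams(session: CleanupSession): number {
  return session.gramsCollected * CO2_GRAMS_PER_GRAM_LITTER;
}
```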


Filmed at Web Summit Lisbon 2025, this interview with founder Lorenzo shows how Ploggit is designed as a lightweight environmental monitoring tool as much as a fitness companion. Each cleanup records distance, time and location while associating that trace with waste collected and CO2 saved, effectively building a distributed dataset for urban cleanliness and circular economy efforts. Over time this can reveal hotspots of mismanaged waste, enable more targeted municipal interventions and inform ESG reporting for partners that want evidence of real-world impact on the environment.

Beyond individual tracking, Ploggit emphasizes collaboration and gamification. Users can join teams, set up cleanup events and aggregate all the grams collected, CO2 avoided and points earned by participants into shared leaderboards, bringing a community dimension to local waste management. The same point system can be connected to sponsoring brands or municipalities so that reward schemes, online discounts or even civic incentives can be tied to verified cleanup activity, creating a simple mechanism for CSR campaigns and public–private engagement around environmental stewardship in the city.

Ploggit is based in Braga, Portugal, but the app is available worldwide and can be used by anyone who wants to integrate trash collection into their regular route, whether walking, running or cycling in parks, forests or dense urban streets. Built on the broader plogging trend that started in Sweden, it aligns physical activity, individual wellbeing and data-driven sustainability by turning every small pickup into traceable metrics. The result is a global, smartphone-based network where residents, companies and local governments can collaborate on cleaner public spaces and a healthier planet.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=8OKjJ6_R1RQ

RedDogFish travel eSIM one profile 190+ countries, custom data plans and hold feature

Posted by – December 7, 2025
Category: Exclusive videos

RedDogFish is a people-centric travel eSIM service that lets you install a single digital SIM profile on your phone and get mobile data in more than 190 countries. Instead of roaming contracts or local plastic SIM cards, you buy data and validity days up front, assemble your own package and keep control of usage across trips. The platform runs on modern eSIM infrastructure, so activation is QR based, dual SIM friendly and designed for current iOS and Android devices. More details at https://reddog.fish/


In this interview they explain how one RedDogFish profile stays on your device while you switch between country-level data packs, so there is no need to reinstall a new eSIM every time you cross a border. Travellers can mix gigabytes and days to match their itinerary, from light messaging to heavier streaming, all on a prepaid model with no recurring subscription. A distinctive feature is Hold: when you go offline, for example on a long hike or flight, you can freeze the plan and resume it later instead of letting unused data and validity days burn away.
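
A minimal sketch of how a prepaid plan with a Hold state could be modelled, based only on the behaviour described above; the types and transitions are invented for illustration, not RedDogFish’s implementation:

```ts
// Hypothetical model of a prepaid travel-data plan with a "Hold" state;
// names and transitions are invented, not RedDogFish's actual implementation.
type PlanStatus = "active" | "on_hold" | "expired";

interface TravelDataPlan {
  dataRemainingMb: number;
  validityDaysRemaining: number;
  status: PlanStatus;
}

// Putting the plan on hold freezes the validity countdown while offline.
function holdPlan(plan: TravelDataPlan): TravelDataPlan {
  return plan.status === "active" ? { ...plan, status: "on_hold" } : plan;
}

function resumePlan(plan: TravelDataPlan): TravelDataPlan {
  return plan.status === "on_hold" ? { ...plan, status: "active" } : plan;
}

// Validity days only tick down while the plan is active, not while held.
function tickDay(plan: TravelDataPlan): TravelDataPlan {
  if (plan.status !== "active") return plan;
  const daysLeft = plan.validityDaysRemaining - 1;
  return { ...plan, validityDaysRemaining: daysLeft, status: daysLeft <= 0 ? "expired" : "active" };
}
```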

Recorded at Web Summit Lisbon 2025, the team also highlights the human layer behind the service, with live support agents available to troubleshoot activation or coverage questions in real time. RedDogFish is built by a Kyiv-based group of more than thirty specialists, and at launch they already had a few thousand early users testing real-world routes and network combinations. Rather than chasing the very lowest price per gigabyte, they emphasise predictable connectivity, reasonable tariffs and clear policies that frequent travellers can understand quickly.

Beyond consumer travel, RedDogFish is preparing an IoT and B2B offering built on an API-first architecture and a browser-based cabinet for partners. The idea is that car rental platforms, travel agencies or device makers can integrate eSIM purchase flows at checkout and then monitor SIM status, remaining data and country usage from a central dashboard. Low-bandwidth use cases such as car trackers, logistics sensors or remote cameras can be provisioned with tiny data bundles and controlled alongside higher-volume travel profiles in the same connectivity ecosystem.
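
To picture what such an API-first partner integration might look like in practice, here is a hedged sketch with invented endpoints and response fields; this is not RedDogFish’s documented API:

```ts
// Hypothetical sketch of an eSIM partner integration; the base URL, endpoints
// and response shape are invented for illustration only.
interface SimStatus {
  iccid: string;
  countryInUse: string;
  dataRemainingMb: number;
  validityDaysRemaining: number;
}

const BASE_URL = "https://api.example-esim-partner.test/v1"; // placeholder

// A rental or travel platform orders an eSIM data pack at checkout…
async function orderEsimPack(apiKey: string, country: string, dataMb: number): Promise<{ iccid: string }> {
  const res = await fetch(`${BASE_URL}/esims`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ country, dataMb }),
  });
  if (!res.ok) throw new Error(`Order failed: ${res.status}`);
  return res.json();
}

// …and later polls usage for its central dashboard.
async function getSimStatus(apiKey: string, iccid: string): Promise<SimStatus> {
  const res = await fetch(`${BASE_URL}/esims/${iccid}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Status lookup failed: ${res.status}`);
  return res.json();
}
```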

For viewers, this conversation is a concise walkthrough of how a new travel connectivity provider thinks about global coverage, tariff design and user experience at the eSIM layer. If you work with digital nomads, remote teams or travel platforms, it is a useful snapshot of what a modern, programmable data service can offer beyond classic roaming packs and physical SIM cards, and how Ukrainian engineers are reshaping this space while staying close to everyday travel needs abroad.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=4itz8MlZYy0

DDN at SC25: HyperPOD, Infinia and NVIDIA SuperPOD Storage for Enterprise AI

Posted by – December 7, 2025
Category: Exclusive videos

DDN positions itself as a data infrastructure backbone for both traditional HPC and large-scale AI, drawing on more than two decades of building supercomputers with research labs, national centers and partners like NVIDIA. In this interview, Jason Brown explains how the company has evolved into a “data intelligence platform” vendor, powering GPU-dense environments from on-prem clusters to AI factories and NeoCloud providers, with a focus on high throughput, low latency and predictable scaling rather than just raw storage capacity. https://www.ddn.com/products/ddn-enterprise-ai-hyperpod/


A big part of the discussion is about AI cloud providers that operate as GPU gigafactories: CoreWeave, G42, Lambda, Scaleway and others renting GPU instances instead of generic IaaS. These environments are hitting limits not just on budget but on power, cooling and data-center footprint, so DDN optimizes for performance per watt and per rack by keeping GPUs fed from storage instead of sitting idle waiting for data. Some customers are already generating on the order of a petabyte of data per day from AI pipelines, which forces a rethink of IO patterns, metadata handling and data locality across the entire stack rather than only tuning compute.
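
As a back-of-the-envelope illustration of what a petabyte per day implies for the storage layer (simple unit conversion, using a decimal petabyte):

\[ 1~\text{PB/day} = \frac{10^{15}~\text{bytes}}{86{,}400~\text{s}} \approx 11.6~\text{GB/s} \]

sustained around the clock, before accounting for replication or reads back into the pipeline.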

The new DDN Enterprise AI HyperPOD is presented as a turnkey RAG and inference appliance built jointly with NVIDIA and Supermicro, essentially a pre-integrated AI data platform you roll into the rack and power on. Under the hood it combines NVIDIA RTX-class GPUs (moving toward RTX PRO 6000 Blackwell Server Edition and BlueField-3 DPUs), NVIDIA AI Enterprise services like NIM and NeMo, Supermicro AI-optimized servers and DDN’s Infinia object-scale software. Configurations span from extra-small four-GPU systems with ~0.5 PB up to 256 GPUs with over 12 PB in a single rack, giving enterprises and sovereign AI clouds a modular way to scale RAG, agentic workloads and high-throughput inference without building the pipeline themselves.

Brown then ties HyperPOD back into the broader DDN Data Intelligence Platform, which unifies EXAScaler-based file systems and Infinia object storage, and is now delivered through appliances like AI400X3 and Infinia 2.x. These systems are tuned to keep GPUs 95–99% utilized by accelerating ingestion, metadata operations and KV-cache stages, rather than letting data stalls waste expensive accelerators. Features such as multi-tenant isolation, observability hooks, and integration with Spark, Hadoop and cloud services (like Google Cloud Managed Lustre) are framed as necessary plumbing so the same infrastructure can support both HPC simulation and large-scale AI training, analytics and inference on a shared platform.

Filmed at the SC25 Supercomputing conference in St. Louis, the video also walks past a Supermicro HGX SuperPOD-style AI factory rack, illustrating how DDN storage slots into NVIDIA-aligned reference architectures for clusters with thousands of GPUs. At the booth, DDN demos a full RAG pipeline showing how documents flow through Infinia into an inference service, as well as a financial-services analytics demo that ingests live market and news data to generate insights in real time. The takeaway is that organizations already running DDN for HPC research or simulation can repurpose the same data platform to stand up RAG, LLM inference and other AI workloads, turning existing supercomputing environments into AI factories with consistent data management and IO behavior across deployments.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=GurPXGb--dE

StarNet FastX connecting Windows and Mac to Linux supercomputers, visualization and HPC GUI

Posted by – December 7, 2025
Category: Exclusive videos

FastX by StarNet Communications is a remote Linux desktop and application delivery platform designed for HPC environments where engineers sit on Windows or macOS workstations but compute on large Linux clusters and supercomputers. It provides graphical access to Linux desktops and individual applications over the network, translating traditional X11 traffic into an efficient protocol that can be consumed through native clients or a browser. https://www.starnet.com/fastx


Instead of requiring custom client installs everywhere, FastX exposes Linux desktops directly in a standard web browser using HTTPS, so any device with Chrome, Edge, Firefox or Safari can connect securely to a FastX server. The same backend can also serve native clients for Windows, macOS and Linux, while handling session persistence, reconnect, and access control in multi-user, multi-cluster deployments.

In the interview, StarNet explains how this approach is used by universities, national labs, aerospace and defense organizations that run heavy visualization and CAE workloads on shared HPC clusters. Instead of pushing users toward pure command-line workflows, FastX lets them run interactive tools for seismic interpretation, scientific visualization or rich IDEs with point-and-click interfaces, even when the compute nodes sit in remote, highly secured data centers.

Recorded at the SC25 Supercomputing conference in St. Louis, the demo shows a live FastX session from a server in San Jose running across a congested show network. The focus is on maintaining usable latency and frame rates over long distances, while still respecting strict IT policies around authentication, cluster access, and security domains. Many sites deploy FastX under campus-wide or site-wide licensing so researchers can attach to the same supercomputing infrastructure from labs, offices or home.

StarNet also highlights its work on supporting Wayland-based environments through the browser, aligning FastX with the ongoing transition away from X11 in many Linux distributions while still serving legacy X11-based applications. The result is a remote display layer that tries to bridge old and new Linux graphics stacks for HPC, giving system administrators a managed way to expose interactive GUI access across the planet without forcing users to abandon their existing tools.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=UHdRkaUtFlk

Microway GPU clusters from DGX-1 to GB300 NVL72 for turnkey HPC and AI infrastructure

Posted by – December 6, 2025
Category: Exclusive videos

Microway is a long-time high-performance computing integrator focused on build-to-suit GPU clusters and workstations for AI and simulation workloads. In this interview, they explain how they work from the customer’s problem statement and constraints – performance targets, budget, power and cooling envelope, software stack – to design, integrate and deliver a turnkey system, pre-configured and burned-in for 72 hours before shipment. Their scope spans from single-node developer workstations with datacenter GPUs up to multi-thousand-GPU clusters for large government and academic sites, arriving on-site ready to plug into existing infrastructure and schedulers, with a consistent software environment from day one for researchers and engineers. More details at https://www.microway.com/


On the booth they walk through a 4U Supermicro-based GPU server configured as a dual-socket AMD platform with PCIe datacenter GPUs such as NVIDIA H200 NVL and similar NVL2 form factors. The chassis is designed to host up to eight of these GPUs in a single node, with appropriate power delivery, airflow and front-access I/O for dense AI and HPC workloads in standard data center racks. For even larger single-node configurations, they point to 5U systems that can host up to ten GPUs, which are increasingly used as building blocks in larger clusters when customers want fewer, more powerful nodes per rack rather than many smaller servers.

Microway emphasizes its long relationship with NVIDIA, being an elite partner that has been in the DGX program since its inception and has shipped large deployments of every DGX generation so far. That experience feeds into current work with datacenter GPUs such as H200 NVL in PCIe servers and preparations for rack-scale deployments of the upcoming NVIDIA GB300 NVL72, where 72 tightly coupled GPUs and Grace CPUs are used as a single reasoning and training domain. The integration work is done across OEM ecosystems from Supermicro and Gigabyte to network fabrics and storage, so that customers receive not just hardware, but a wired-up cluster tuned for their scheduler, container stack and AI frameworks.

Filmed at SC25 Supercomputing 2025 in St. Louis, this conversation positions Microway as a specialist for institutions that want to accelerate AI and simulation without building their own integration teams. Universities rolling out new academic clusters, government labs scaling to multi-rack GPU deployments and enterprises standardizing on a DGX-class architecture all get the same approach: a solution mapped to their workload mix, constraints on power and cooling, and operational model. The result is infrastructure that arrives on site pre-tested, with a consistent OS and software stack, so teams can move quickly from procurement to running real HPC and AI jobs rather than spending months in integration and debugging.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=Eqv6IVLMg2A

Koolmicro liquid cooling SC25 for AI servers, GB200 cold plates and package-level IMMC

Posted by – December 6, 2025
Category: Exclusive videos

Koolmicro presents its Integrated Manifold MicroChannel (IMMC) liquid cooling for high-power AI and HPC chips, showing how a single cold plate can cover dual NVIDIA GB200 GPUs plus the host CPU in one compact module. By reshaping coolant paths inside the plate instead of simply pushing more flow, the design targets low thermal resistance and reduced pump power for dense data center racks and workstations. https://koolmicro.com/


The interview explains Koolmicro’s manifold and microchannel geometry, where vertical manifolds feed short microchannels positioned directly over GPU and CPU hotspots. This produces a more uniform temperature field across the silicon and allows lower flow rates and higher inlet temperatures, which matters for power usage effectiveness and chiller design. Direct-to-chip cold plates of this type are becoming a core building block for large AI clusters, exascale nodes and other liquid-cooled HPC hardware.

To make the gains concrete, Koolmicro runs a live A/B comparison on an NVIDIA RTX 5090, showing its own cold plate next to a conventional design under identical power, inlet temperature and flow. The demo reports roughly fifteen to twenty percent lower thermal resistance and about four to five degrees Celsius lower junction temperatures with the Koolmicro plate. A dedicated thermal test vehicle with 27 embedded sensors and programmable heat loads up to roughly 4.5 kW is used to characterize spatial temperature profiles and validate the cooling performance using detailed thermal mapping.
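
Those two figures are consistent with the usual definition of junction-to-inlet thermal resistance; as a rough sanity check with assumed numbers (the ~575 W board power and ~27 °C baseline delta are illustrative, not Koolmicro's published test conditions):

\[ \theta_{j\text{-}in} = \frac{T_j - T_{inlet}}{P} \approx \frac{27~\text{K}}{575~\text{W}} \approx 0.047~\text{K/W} \quad\text{(baseline)}, \qquad \frac{(27 - 4.5)~\text{K}}{575~\text{W}} \approx 0.039~\text{K/W} \quad\text{(IMMC)}, \]

which works out to roughly 17 percent lower, in the middle of the quoted range.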

Koolmicro also outlines its IMMC roadmap: IMMC-1 as the current copper direct-to-chip plate, IMMC-2 moving liquid cooling into the package for shorter thermal paths, and IMMC-3 targeting future wafer-level cooling co-designed with semiconductor manufacturing. Their manifold microchannel structures have been demonstrated in published data at heat flux levels near 2,000 W/cm², pointing toward support for next-generation AI accelerators, data center CPUs, non-memory semiconductors and high-power optical or LiDAR infrastructure.

Recorded at the SC25 Supercomputing conference, the discussion situates Koolmicro within a broader shift toward liquid-cooled AI infrastructure and the Korean ecosystem of chip and system vendors. With headquarters in South Korea and R&D centers in Atlanta and San Jose, the team emphasizes complete liquid loops for servers and workstations—pumps, manifolds, cold plates and controls—rather than treating the plate as an isolated part. The result is a technically grounded look at how manifold microchannel liquid cooling can scale with future AI workloads while improving data center energy efficiency and cooling.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=bDLnVpJLFOw

Fujitsu Quantum Computer 64 qubit chip, 256 qubits, 10000 qubits, error corrected roadmap

Posted by – December 6, 2025
Category: Exclusive videos

Fujitsu’s quantum lab researcher Joey Sha-Leo walks through a half-scale mockup of the company’s 64-qubit superconducting quantum computer, explaining how the qubit chip sits at the base of the system while layers of cryogenic electronics and filters fan out above it to route and condition microwave signals. He connects this demonstrator to Fujitsu’s broader roadmap, from its currently deployed 256-qubit system to a 1,000-qubit machine planned around 2026 and a 10,000-plus-qubit platform targeting roughly 250 logical qubits by 2030. https://global.fujitsu/en-global/technology/research/quantum


The video demystifies why superconducting quantum hardware has to look like a golden chandelier hanging inside a large cylinder. The 64-qubit chip is operated at about 20 millikelvin inside a dilution refrigerator, colder than deep space, with cascaded cryogenic amplifiers, low-pass and band-pass filters, and high-electron-mobility transistor (HEMT) amplifiers mounted at different temperature stages. On the chip itself, qubits are laid out in an 8×8 nearest-neighbor lattice, a topology that Fujitsu already tiles into a 256-qubit device and expects to reuse as it scales wiring density and thermal management for larger systems within the same cryogenic stack.

Joey also highlights the algorithm-to-device gap: practical quantum advantage will only come when application developers and hardware designers co-evolve the full stack. Fujitsu’s research group explores chemistry and catalyst discovery workloads, quantum algorithms for optimization and data analysis, and hybrid quantum-classical workflows that can run on today’s noisy intermediate-scale devices while preparing for error-corrected machines. In this short booth conversation recorded at the SC25 Supercomputing Conference in St. Louis, the focus stays on how each design decision in the hardware constrains and enables real algorithms rather than on abstract performance claims for the industry.

Finally, the discussion returns to what it means to move from 64 physical qubits today to thousands of physical qubits and hundreds of logical qubits later in the decade. By layering quantum error-correcting codes on top of a scalable lattice and carefully engineering cryogenic stability, Fujitsu aims to reach the threshold where around 250 logical qubits become available for meaningful simulations in materials science, catalysis, and optimization. The roadmap shared here is not presented as science fiction but as a staged engineering path from current prototypes toward practical, fault-tolerant quantum computing.
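
Dividing the roadmap targets gives a feel for the error-correction overhead implied here (a rough ratio of the stated numbers, not an official Fujitsu figure):

\[ \frac{10{,}000~\text{physical qubits}}{250~\text{logical qubits}} \approx 40~\text{physical qubits per logical qubit.} \]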

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=aHNxcV3nXWg

Supergate 128 core Arm N2 supercomputer CPU and accelerators for AI and HPC Tess T1

Posted by – December 6, 2025
Category: Exclusive videos

Supergate is a Korean fabless semiconductor design company that focuses on custom SoCs for high-performance computing, AI and autonomous driving, acting as an Arm Approved design partner for Neoverse and other Arm IP. In this interview they explain how their in-house design team builds supercomputer accelerators, multi-core CPUs and domain-specific SoCs that can be tailored to a customer’s workload, from scientific computing to automotive perception and control. Their design flow spans custom Arm Neoverse CPU complexes, bespoke AI engines and system integration into rack-scale servers, positioning Supergate as an outsourced silicon team for HPC and mobility platforms, with more detail on their SoC design offerings at https://supergate.cc/technology/soc-design/


A central part of the discussion is a large supercomputer accelerator developed as one of the first Korean-designed chips for national-scale scientific supercomputing. The accelerator combines Arm Neoverse CPU clusters with custom vector and AI engines, targeted at K-class systems where domestic IP is a strategic priority for Korea’s compute roadmap. Alongside that, Supergate shows an autonomous-driving SoC integrating a 160 TOPS MPU, designed to fuse many heterogeneous sensors and run perception and planning workloads under tight power and thermal envelopes in automotive environments, where the same IP blocks can later be re-used in edge AI infrastructure.

They also present the “Tess T1” CPU chip, a 128-core Arm Neoverse N2 device fabricated on TSMC’s 5 nm process and aimed at high-performance servers and supercomputer nodes. This device targets hyper-dense Arm server blades, where many cores with coherent memory and high-speed I/O feed external accelerator cards over standard fabrics in a modular rack. The prototype boards and servers shown in the video illustrate a typical deployment pattern: Arm Neoverse host CPUs orchestrating multiple accelerator cards in each node, with Supergate providing both the silicon and the reference platforms for OEMs building HPC clusters and data-center hardware.

Beyond the individual chips, Supergate emphasizes that they deliver fully turnkey custom SoC projects, from specification and microarchitecture through logic design, verification, physical design, manufacturing and bring-up, all the way to software and board-level validation. Depending on process node complexity, they quote project timescales from under a year on mature nodes to multiple years for cutting-edge geometries such as 5 nm, framing this as a way for system companies to turn their own ideas for accelerators into production silicon without building a large internal chip team. Filmed at the SC25 Supercomputing conference in St. Louis, the conversation underlines a broader trend in HPC and automotive: custom Arm-based SoCs with tightly integrated accelerators, developed by specialist design houses like Supergate, are becoming the preferred path to differentiate both supercomputers and future autonomous vehicles while optimizing power, latency and total system cost.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

source https://www.youtube.com/watch?v=iPSJX9rc-yI

Arcitecta Mediaflux at SC25 AI-ready data fabric with vector search and tape tiering

Posted by – December 5, 2025
Category: Exclusive videos

Arcitecta is a Melbourne-based data management company whose core platform, Mediaflux, sits above heterogeneous storage to give organisations a single, metadata-driven view of all their structured and unstructured data. Mediaflux acts as a rich data fabric for research and HPC environments, orchestrating tiered storage, lifecycle policies, workflow automation and long-term preservation across disk, object, cloud and tape, while its XODB metadata engine is designed to scale from billions of assets to petabyte and even exabyte-class deployments. https://www.arcitecta.com/mediaflux/about/


In the video, Emily explains how Arcitecta’s internal creative team worked with international digital artists on the “worlds within worlds” concept, capturing the local national library with detailed 3D scans and turning it into a 20-minute generative artwork that wraps around the booth. The LED walls and immersive content are not just eye candy; they are a way to make abstract ideas like metadata, namespaces and data provenance more tangible, so visitors can connect emotionally with the story of data over decades rather than only hearing about capacity metrics or IOPS in isolation.

Chief Operating Officer Graeme Beasley then connects this artistic narrative to concrete use cases such as Princeton University’s TigerData platform, which is built on Mediaflux and already manages on the order of 200 petabytes and hundreds of millions of research assets across working, persistent and archival tiers. Filmed at SC25 in St. Louis, the conversation highlights how a unified metadata layer lets institutions span HPC scratch, campus NAS, object stores and tape libraries while still being able to find and restore old datasets years later, even as underlying hardware and vendors change around the platform.

Beyond Princeton, Arcitecta positions Mediaflux as an AI-ready data infrastructure with native vector search, policy-driven data movement and point-in-time rollback that can protect against ransomware while feeding GPU clusters with the right training data at the right moment. The interview hints at deployments where Mediaflux federates a trillion-file namespace, uses dense XODB metadata to drive global workflows and underpins domain-specific services like long-term cultural archives or hybrid HPC “burst” computing, giving research organisations and enterprises a way to treat data management not as an afterthought but as a core part of their scientific and analytical pipeline.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=-6_UTvQwPjw

Mobilint Aries Regulus edge AI NPU roadmap at Supercomputing SC25 for on-device LLM

Posted by – December 5, 2025
Category: Exclusive videos

Mobilint is a fabless AI chip company designing edge AI accelerators that target both large language models and vision workloads with tight power envelopes. In this video they walk through their Aries and Regulus product families, showing how an 80 TOPS NPU at around 25 W can be packaged as PCIe and MXM modules and even as a compact standalone AI box capable of running multi-billion-parameter LLMs entirely offline on a desk or at the edge. https://www.mobilint.com/aries/mlx-a1


The Aries NPU appears here as a PCIe accelerator and as an MXM module integrated into the MLX-A1 system, turning a small embedded PC into a powerful on-premises inference node for LLMs and computer vision. The demo includes a four-card configuration in a single chassis, effectively scaling to roughly 320 TOPS at about 100 W, which is a useful performance-per-watt point for edge servers, industrial PCs and compact AI gateways that need dense compute in a constrained platform.

Regulus targets even more constrained form factors as a full system-on-chip and system-on-module with integrated CPU, NPU and memory so that a complete mini PC class design can fit on a tiny board. With this module, Mobilint focuses on robots, drones, smart factory equipment and smart cars that must execute perception and control locally, without relying on cloud connectivity, enabling deterministic latency and privacy-preserving compute at the edge.

A key part of Mobilint’s pitch is their quantization and software stack: models are converted from 32-bit floating point to 8-bit integer with reported accuracy loss below one percent, which makes it realistic to replace very large foundation models with smaller, quantized variants while keeping most of the quality. In the demo they run an 8-billion-parameter LG LLM fully on the local accelerator and discuss how a well-quantized 7B-class model can deliver roughly 99% of the perceived performance of a model that might originally have hundreds of billions of parameters, all within the power budget of a compact edge device.
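
For readers unfamiliar with the technique, here is a generic sketch of symmetric post-training INT8 quantization, the broad idea behind converting FP32 weights to 8-bit integers; this illustrates the general method only and is not Mobilint’s actual toolchain or kernels:

```ts
// Generic illustration of symmetric per-tensor INT8 quantization,
// not Mobilint's SDK: map the largest magnitude onto the int8 range [-127, 127].
function quantizeInt8(weights: number[]): { q: Int8Array; scale: number } {
  const maxAbs = Math.max(...weights.map(Math.abs), 1e-12);
  const scale = maxAbs / 127;
  const q = Int8Array.from(weights, (w) => Math.max(-127, Math.min(127, Math.round(w / scale))));
  return { q, scale };
}

function dequantize(q: Int8Array, scale: number): number[] {
  return Array.from(q, (v) => v * scale);
}

// Example: a few FP32-style weights survive the round trip with small error.
const { q, scale } = quantizeInt8([0.12, -0.5, 0.031, 0.87]);
console.log(dequantize(q, scale)); // values close to the originals
```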

Recorded at Supercomputing SC25, the conversation also touches on architecture trends, with Mobilint arguing that future AI supercomputers and distributed systems will rely heavily on many Arm-based SoCs paired with NPUs rather than a small number of large GPUs. Their roadmap emphasizes edge servers, industrial desktops and embedded systems where dozens or hundreds of Aries- and Regulus-based boards can be deployed close to the data source, forming a scalable, power-efficient ecosystem.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=9Ax9o-WM7BU

Microchip PIC32CM JH Dual CAN Demo with Touch PTC Motor Control at SPS Nuremberg

Posted by – December 5, 2025
Category: Exclusive videos

Microchip presents a 32-bit Arm Cortex-M0+ microcontroller platform with dual CAN interfaces and integrated touch control, aimed at compact motor-control nodes in automotive and industrial systems. In the demo, a single MCU drives two brushless fans over separate CAN channels while simultaneously handling capacitive touch input and a small display, illustrating how communication, control and HMI can be consolidated on one device. The design is representative of devices in the SAM C21 / PIC32CM JH families with 5V support, CAN-FD and a Peripheral Touch Controller (PTC). https://www.microchip.com/en-us/products/microcontrollers-and-microprocessors/32-bit-mcus/pic32-sam/pic32cm-jh


Using dual CAN from a 48 MHz Cortex-M0+ core, the board shows both synchronized and independent control of the two fans. This maps directly to use cases like automotive HVAC, where separate in-cabin blowers or zonal climate-control actuators share the same CAN backbone but need different speed profiles, as well as industrial cabinets where multiple motors or pumps are coordinated on a common fieldbus. The same architecture scales to CAN-FD networks, gateway nodes and mixed LIN/CAN topologies while maintaining deterministic motor-control loops.

The on-chip Peripheral Touch Controller replaces external touch controllers by implementing mutual and self-capacitance sensing, supporting buttons, sliders and simple gesture surfaces on the same MCU that closes the control loop. This enables sealed front panels and moisture-tolerant HMIs, which are important in vehicles, white goods and factory equipment. Running from a 5V supply with robust analog front ends and noise-tolerant PTC hardware, these parts are optimized for electrically noisy environments typical of engine bays or industrial drives, and can be integrated with RTOS or bare-metal firmware through MPLAB libraries.

Recorded at the SPS 2025 exhibition in Nuremberg, the interview also highlights the practical development flow around Microchip’s evaluation boards, including Curiosity-class kits targeting PIC32CM JH and related SAM families. Engineers can prototype dual-CAN motor-control nodes with touch and display on a single board, then migrate directly into automotive-qualified or industrial-grade variants with extended flash, RAM and functional-safety features such as ECC memory and diagnostic peripherals, reusing the same software stack from lab demo to series production.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=foAdyYE5yyc

Weidmüller Cabinet Wiring Automation with SNAP IN, Robots and Single Pair Ethernet

Posted by – December 5, 2025
Category: Exclusive videos

Weidmüller uses this SPS Nuremberg 2025 booth tour to show how cabinet building can be treated as a data-driven, semi-automated production line rather than a manual craft. Robots like “Snappy” work with the Wire Processing Center to cut, strip and crimp conductors, then feed them into SNAP IN terminal blocks that lock with a single click and require no ferrules, making wiring faster and less error-prone. This interview walks through how those mechanics, terminal blocks and software all fit together in real cabinet production environments. https://www.weidmueller.com/en/solutions/technologies/snap_in_connection_technology/index.jsp


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

From there the tour dives into “next level cabinet building”: automated wiring cells, a rail assembler that populates DIN rails with relays and terminal blocks directly from engineering data, and guidance systems that lead operators step by step through wiring and marking to avoid mistakes. The long-term partnership with Schneider Electric and the TeSys range shows how SNAP IN can be adopted in mainstream low-voltage gear as well. A dedicated area highlights Weidmüller’s ergonomic hand tools and wire processing equipment that combine cutting, stripping and crimping in one motion, connecting today’s demo to a 175-year history that started with mechanical snap fasteners for clothing and evolved into electrical connectivity for industry.

On the automation side, the booth shows how the u-mation portfolio ties hardware and software together: u-remote I/O, web-based controllers such as u-control, and Linux-based platforms like u-OS for edge computing and Industrial IoT. Dashboards visualize energy consumption, machine status and production KPIs, while automated reporting helps factory managers see where bottlenecks or inefficiencies are. Extension modules and open interfaces let OEMs treat control hardware as a modular platform, combining real-time control with data aggregation for analytics, predictive maintenance or cloud integration. Filmed at the SPS trade fair in Nuremberg 2025, the discussion emphasizes openness, from standard fieldbuses to containerized runtime environments.

A large part of the booth is devoted to connectivity across the whole signal chain: PCB and device connectors, feed-throughs for easy maintenance without opening the cabinet, and an extensive range of field connection products. Viewers see M8, M12 and M23 circular connectors, rugged RockStar heavy-duty connectors and push-pull M12 designs that support quick, tool-less mating in harsh environments. High-current interfaces for rail and other demanding sectors illustrate how power, signal and data can be combined in compact form factors, enabling denser cabinets and more integrated machines without sacrificing serviceability.

The highlight on communication is Single Pair Ethernet, presented as the next step in Ethernet-based factory networking. Instead of eight conductors in a traditional RJ45 cable, SPE uses just one twisted pair while still supporting power over data lines (PoDL), up to 10 Mbit/s over 1 km for sensor networks and higher data rates such as 25 or 40 Gbit/s over shorter distances. The smaller connectors, simplified wiring and clear polarity reduce installation effort and error risk, while enabling end-to-end IP communication down to the field device. Combined with Weidmüller’s global engineering presence and strong German R&D base, the SPE and SNAP IN portfolios position the company as a key player in making control cabinets more modular, networked and easy to assemble.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

source https://www.youtube.com/watch?v=og25iQAS488

Microchip 10BASE-T1S heat pump demo for cloud home energy over single-pair Ethernet

Posted by – December 4, 2025
Category: Exclusive videos

Microchip uses this interview to walk through its 10BASE-T1S MPoE sustainability demo: a complete Ethernet path from an outdoor heat pump, through room controllers and presence sensors, up to a cloud-based home automation platform controlling residential heating. The setup shows how single-pair Ethernet and multidrop Power over Ethernet can collapse data and power onto one cable while keeping a standard Ethernet stack from edge devices to the cloud. https://www.microchip.com/en-us/products/high-speed-networking-and-video/ethernet/single-pair-ethernet/10base-t1s


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

At SPS 2025 in Nuremberg, the team models a small house with two floors and a basement plus an external heat pump, all interconnected over 10BASE-T1S. Presence detectors in each room decide whether the heating system should run, and the logic is orchestrated in the cloud so that heat is only delivered when occupants are detected. 10BASE-T1S itself is a 10 Mb/s Single Pair Ethernet physical layer defined in IEEE 802.3cg, using a single balanced pair in a multidrop bus topology to connect several nodes over short distances while staying within a unified Ethernet architecture.

A key aspect of the demo is MPoE (Multidrop Power over Ethernet) based on the emerging IEEE 802.3da specification, which delivers both data and up to around 100 W of power over the same single-pair cable to all the nodes on the segment. This is particularly relevant for retrofits and constrained installation spaces, where pulling separate power lines for room controllers and sensors is expensive or impractical. By using Single Pair Ethernet with power over data lines, the wiring harness becomes simpler, lighter and easier to scale as more sensors and actuators are added across the building.

All intelligence in the demo is concentrated in a central application node; the remote 10BASE-T1S devices are purely controlled over Ethernet and do not embed their own microcontrollers or application firmware. This architecture, supported by Microchip’s LAN865x MAC-PHY and LAN867x PHY families, keeps software development, regression testing and maintenance in one place while the field devices remain simple Ethernet endpoints. The result is a cleaner path from prototyping to deployment for OEMs who want to build cloud-connected, occupancy-aware heating systems and other building or industrial automation use cases on top of a standard, edge-to-cloud Ethernet stack.
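
A minimal sketch of this centralized pattern, assuming the presence sensors and heat-pump node answer simple UDP requests (the addresses and message strings are invented for illustration and are not the demo's actual protocol):

# Hypothetical sketch of the "all intelligence in one node" pattern: a central
# controller polls presence sensors and commands the heat-pump endpoint over
# plain UDP. Addresses and payloads are illustrative placeholders only.
import socket

ROOM_SENSORS = {"living_room": ("192.0.2.10", 5000),
                "bedroom":     ("192.0.2.11", 5000)}
HEAT_PUMP = ("192.0.2.20", 5000)

def poll_presence(sock, addr) -> bool:
    sock.sendto(b"PRESENCE?", addr)
    reply, _ = sock.recvfrom(16)
    return reply == b"OCCUPIED"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(1.0)
    occupied = [room for room, addr in ROOM_SENSORS.items()
                if poll_presence(sock, addr)]
    command = b"HEAT_ON" if occupied else b"HEAT_OFF"
    sock.sendto(command, HEAT_PUMP)  # heat only when someone is detected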

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=lTiWShR-SGM

Lunes CNC machine tending with humanoid robots, VLM perception and VLA learning

Posted by – December 4, 2025
Category: Exclusive videos

Lunes GmbH presents how humanoid robotics can move beyond show-floor stunts into real production. CEO and founder Jeff explains how the company is spinning up a new Lunes Robotics initiative on top of its long-standing automation engineering work, using humanoid platforms alongside classic industrial robots to handle repetitive machine-tending tasks in real factories and workshops, turning spectacle into a reliable toolchain for industry. https://lunes.de/


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

Instead of trying to solve every scenario at once, Lunes focuses on a narrow but high-impact use case: operating CNC milling machines and similar assets. The humanoid or industrial robot opens the machine door, removes finished metal parts, inserts new blanks and closes the door again, transforming a human-intensive, boring station into semi-autonomous machine tending. The goal is not to replace skilled operators, but to free them from low-skill loading cycles so they can supervise multiple cells, manage quality and optimize process uptime.

To train their team and validate their software stack, Lunes also develops internal demo cells such as a chess-playing robot. Here, an industrial robot executes the physical moves on a real board while a chess engine computes the strategy, giving junior developers a safe playground for motion planning, collision avoidance, calibration and human-robot interaction. The demo travels from fair to fair and is continuously improved, serving as a living regression test for control software and integration practice.
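
A minimal sketch of that split between planning and execution, assuming the python-chess library with a UCI engine such as Stockfish on the PATH; execute_move() is a hypothetical stand-in for the robot's pick-and-place routine:

# Minimal sketch of the "engine plans, robot executes" split, assuming python-chess
# and a UCI engine binary on the PATH. execute_move() is a hypothetical placeholder
# for the robot's motion-planning and pick-and-place routine.
import chess
import chess.engine

def execute_move(move: chess.Move) -> None:
    # Placeholder for motion planning: pick up at from-square, place at to-square.
    print(f"robot: move piece {chess.square_name(move.from_square)} "
          f"-> {chess.square_name(move.to_square)}")

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for _ in range(4):  # play a few plies for illustration
        result = engine.play(board, chess.engine.Limit(time=0.1))
        execute_move(result.move)   # physical move on the real board
        board.push(result.move)     # keep the logical board in sync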

On the AI side, Lunes uses vision-language models (VLMs) to detect and classify workpieces, aligning camera perception with the robot’s coordinate frames. For complex grasping tasks, they apply vision-language-action (VLA) concepts: engineers teleoperate the robot and demonstrate a grip trajectory many times, effectively performing imitation learning so the model can reproduce the motion autonomously. By combining deterministic PLC-style logic with data-driven perception and learned manipulation, they aim for systems that are simple enough to certify yet flexible enough to cope with part variety and real-world tolerances, improving robustness and cycle-time performance.
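
The imitation-learning step can be sketched as plain behavior cloning: logged observation/action pairs from teleoperated demonstrations are fit by a small policy network. The PyTorch example below uses random tensors and invented dimensions purely to show the training loop, not Lunes' actual VLA pipeline:

# Hedged sketch of the imitation-learning idea behind VLA-style grasp training:
# teleoperated demonstrations (observation -> action pairs) fit a small policy
# network by behavior cloning. Dimensions and data are illustrative only.
import torch
from torch import nn

obs_dim, act_dim = 32, 7          # e.g. pose/image features -> 7-DoF arm command
policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for logged teleoperation data: many repeated grip demonstrations.
demos_obs = torch.randn(2048, obs_dim)
demos_act = torch.randn(2048, act_dim)

for epoch in range(10):
    pred = policy(demos_obs)
    loss = loss_fn(pred, demos_act)   # match the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At runtime the trained policy maps a new observation to a grasp command.
action = policy(torch.randn(1, obs_dim))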

Filmed at SPS 2025 in Nuremberg, this interview captures an early snapshot of Lunes’ journey from traditional automation engineering into humanoid machine tending. The focus is squarely on pragmatic deployment: integrating humanoids and industrial arms into existing CNC fleets, working with machine OEM partners, and building software that fits industrial expectations around safety, maintainability and lifecycle. Rather than chasing generic “general purpose” robots, Lunes is mapping a stepwise roadmap where each narrow use case—starting with CNC loading—earns its place on the factory floor as part of a realistic humanoid automation roadmap.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=3wUqB3feEI8

Canonical Ubuntu at SPS Nuremberg

Posted by – December 4, 2025
Category: Exclusive videos


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

Canonical’s Oliver Graw explains how Ubuntu Core and Ubuntu Pro for Devices form a hardened embedded Linux stack for industrial and edge AI use cases, combining a real-time kernel, strict confinement via snaps, over-the-air transactional updates and long-term security maintenance on top of Ubuntu LTS releases. This gives OEMs a consistent platform from development laptop to factory floor and cloud, with the same packaging model, toolchain and security posture across their fleet. https://ubuntu.com/core

In the demo, Canonical showcases the new Qualcomm Dragonwing IQ9 platform (IQ-9075) as a compact edge AI controller certified for Ubuntu 24.04 LTS, capable of running real-time workloads and on-device inference on the same SoC. Co-engineered Ubuntu images for Qualcomm IoT platforms expose the NPU, GPU and heterogeneous compute engines to AI frameworks while still benefiting from Pro’s 10-year security maintenance, device management and access to the real-time kernel via subscription. This gives system integrators a single Linux distribution for edge inference, deterministic control loops and fleet management at scale.

Another highlight is a joint demo with Intel and congatec, where a congatec board and real-time hypervisor host three Ubuntu virtual machines: one Ubuntu Core instance running OpenVINO-based computer vision for ball tracking, one real-time Ubuntu Core instance closing the control loop for balancing, and a third VM handling HMI and supervisory control. It illustrates how mixed-criticality workloads can be consolidated onto a single x86 platform while preserving deterministic latency for motion control and keeping user interfaces and AI pipelines isolated from the hard real-time domain. This pattern is increasingly common in factory automation, robotics and machine vision today.
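
As a rough idea of what the OpenVINO-based vision VM runs, here is a minimal inference sketch with the OpenVINO Python API; the model file and input shape are hypothetical placeholders rather than the demo's actual ball-tracking network:

# Minimal OpenVINO inference sketch of a vision pipeline of this kind;
# the model path and input shape are hypothetical placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("ball_detector.xml")          # hypothetical IR model
compiled = core.compile_model(model, "CPU")           # or "GPU"/"NPU" if available

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in camera frame
results = compiled([frame])                           # run inference
detections = results[compiled.output(0)]
print(detections.shape)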

Oliver also dives into Canonical’s hardware-certification pipeline, where industrial PCs and boards are tested in a dedicated lab before each kernel release to ensure driver stability, performance and long-term supportability. That process underpins Ubuntu Certified Hardware and ties directly into security frameworks like Ubuntu Pro, IEC 62443-4-1 and Canonical’s broader strategy for EU Cyber Resilience Act (CRA) compliance, so OEMs can ship connected devices with a maintained bill of materials and documented vulnerability-management processes. Combined with OTA snaps and image-based updates, this reduces both certification risk and lifetime maintenance cost for industrial vendors.

Beyond x86 and Arm, Ubuntu is expanding across RISC-V boards and even space-borne systems, with Ubuntu 24.04 LTS images and upcoming RVA23-class releases targeting new RISC-V SoCs while still offering the familiar developer experience and security maintenance path. In this SPS Nuremberg 2025 interview, Oliver reflects on two decades at Canonical, seeing Ubuntu move from a desktop distro to an embedded platform powering industrial robots, digital signage, transportation infrastructure, satellites and more, showing how a single Linux codebase can span from lab to factory to orbit.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

source https://www.youtube.com/watch?v=nfV_XHSWP3c

AMC iFactoryX edge IoT for predictive maintenance with Advantech hardware and cloud dashboard

Posted by – December 4, 2025
Category: Exclusive videos

AMC iFactoryX is a modular edge-to-cloud platform for collecting and processing machine, sensor and robot data across industrial sites, built on Advantech hardware and open industrial protocols. It connects legacy and new equipment via Edge IoT DAQ gateways and remote I/O, normalizes telemetry and exposes it to on-premise or cloud analytics for condition monitoring and predictive maintenance across the factory floor and building infrastructure, keeping a single, consistent data model for all connected equipment. https://www.amc-systeme.de/amc-ifactoryx.html

In the demo, AMC simulates a compact factory with motors, breathing machines and other loads instrumented by vibration, temperature and humidity sensors, all streamed into a unified dashboard. Data can be mirrored both locally and in the cloud, so values such as vibration levels are identical in on-prem views and remote web UIs, enabling engineers to correlate events, track asset health over hours, weeks or months and move toward data-driven maintenance planning that protects uptime.

Filmed at the Advantech booth during SPS in Nuremberg, this interview shows how AMC, a long-standing Analytik & Messtechnik partner of Advantech, layers its iFactoryX software stack on top of WISE-Edge IoT and Linux-based gateways. The architecture supports industrial protocols like Modbus, OPC UA, MQTT, EtherCAT and LoRaWAN, making it possible to integrate third-party machines and brownfield equipment while scaling to thousands of edge devices without changing the overall data pipeline and visualization ecosystem.
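
A condition-monitoring data point flowing into such a pipeline over MQTT might look like the following paho-mqtt sketch; the broker address and topic naming are assumptions, not AMC's actual schema:

# Sketch of publishing vibration telemetry from an edge gateway over MQTT with
# paho-mqtt (1.x constructor style; 2.x also takes a CallbackAPIVersion argument).
# Broker address, topic layout and field names are illustrative assumptions.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.local", 1883)
client.loop_start()

sample = {"asset": "motor-07",
          "vibration_mm_s": 2.4,     # RMS vibration velocity
          "temperature_c": 41.5,
          "ts": time.time()}
client.publish("ifactory/plant1/motor-07/condition", json.dumps(sample), qos=1)

client.loop_stop()
client.disconnect()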

For manufacturers cautious about the impact of digitalization projects, AMC packages iFactoryX as a starter kit co-branded with Advantech, giving smaller plants an affordable entry into Industry 4.0 while still being able to extend to large multi-line sites later. The same stack can be deployed purely on-premise for latency-sensitive use cases, or combined with public cloud for fleet-wide monitoring, alarm handling and long-term trend analysis, providing a pragmatic path from simple data logging to full predictive maintenance strategy.

source https://www.youtube.com/watch?v=kY7nXjVbRks

Advantech EdgeHub remote IO management with EdgeLink gateways, OPC UA, MQTT, LoRaWAN, WISE-2410

Posted by – December 3, 2025
Category: Exclusive videos

Advantech’s EdgeHub platform is presented as a central control plane for remote operation of industrial I/O, gateways and embedded PCs, giving OT teams a single interface for onboarding, monitoring and configuring distributed assets. It ties together ADAM remote I/O, WISE wireless modules, protocol gateways and IPCs so you can manage both edge connectivity and data flows without custom tooling, from device registration through to tag mapping and alarm handling. https://wise-iot.advantech.com/en-int/marketplace/product/advantech.edgehub

In the demo they focus on how a tenant encapsulates a licensed quota of devices and tags, with clear states such as construction versus operation, plus online and offline status for each node. A WISE-4012E/412E class Wi-Fi Modbus TCP I/O module is used as an example, exposing analog inputs, digital inputs and relay outputs that are all surfaced into EdgeHub’s dashboard so you can see live process values and discrete states in real time, instead of logging into each module individually. The result is a lightweight SCADA-style view built directly on top of wireless field I/O and edge aggregation, ready to be integrated into higher-level cloud analytics or MES platforms
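
For comparison with the dashboard view, this is roughly what direct Modbus TCP polling of an I/O module in this class looks like with pymodbus, i.e. the per-module access that EdgeHub abstracts away; the IP address, register addresses and coil index are placeholders, not the WISE module's documented register map:

# Hedged pymodbus 3.x sketch: read a couple of analog input registers and some
# digital inputs from a Modbus TCP I/O module, then energise a relay coil.
# Address and register map are placeholders for illustration only.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.1.50")            # hypothetical module address
client.connect()

analog = client.read_input_registers(0, count=2)    # two analog input channels
digital = client.read_discrete_inputs(0, count=4)   # four digital inputs
print("AI:", analog.registers, "DI:", digital.bits[:4])

client.write_coil(0, True)   # switch relay output 0 on
client.close()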

In this conversation from SPS 2025 in Nuremberg, the team shows how EdgeLink-based gateways extend this model by bridging industrial controllers and fieldbuses into the same management and telemetry fabric. An ARM-based gateway and an Intel Atom x6413E-powered UNO-127 DIN-rail IPC can both run the EdgeLink runtime, acting as protocol converters that collect data from Siemens, Mitsubishi or Rockwell PLCs via Modbus and expose it upstream as OPC UA servers, Modbus servers or MQTT publishers to cloud backends or SCADA systems. LoRaWAN gateways such as the WISE-6610 can simultaneously ingest data from smart sensors like the WISE-2410 vibration node, enabling condition monitoring and predictive maintenance across widely distributed equipment. This turns EdgeHub into a focal point for mixed-protocol industrial monitoring
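
Reading one of those gateway-exposed tags as an OPC UA client can be sketched with the python-opcua library (or its asyncua successor); the endpoint URL and node ID are illustrative, not an actual EdgeLink mapping:

# Sketch of reading a PLC tag that a gateway re-exposes as an OPC UA server.
# Endpoint URL and node ID are hypothetical placeholders.
from opcua import Client

client = Client("opc.tcp://192.168.1.60:4840")   # hypothetical gateway endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Line1.Motor.Speed")  # tag mapped from the PLC
    print("motor speed:", node.get_value())
finally:
    client.disconnect()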

Beyond device onboarding and data routing, the EdgeHub UI exposes an App Hub repository and OTA pipeline so system integrators can push containerised applications, custom runtimes or configuration bundles to gateways and IPCs at scale. Apps are uploaded once into the repository and then dispatched to selected nodes, together with firmware upgrades, Windows updates or configuration files, all controlled via over-the-air workflows instead of manual site visits. This aligns with WISE-Edge365 and EdgeSync concepts, where device management, software lifecycle and telemetry are treated as one continuous edge operations process rather than separate projects

Security and lifecycle governance also feature prominently, with support for X.509 certificate-based authentication, TLS-encrypted channels and allowlisting to protect remote I/O and gateways from unauthorised access while still enabling cross-site management from a central console. For enterprises rolling out hundreds or thousands of ADAM and WISE modules, LoRaWAN sensors and EdgeLink gateways, the combination of multi-tenant licensing, protocol abstraction and secure OTA distribution makes EdgeHub a pragmatic tool for unifying OT device fleets into a coherent edge-to-cloud architecture
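
The certificate-based pattern described here corresponds roughly to the following paho-mqtt sketch with mutual TLS; the certificate file names and broker address are placeholders:

# Illustration of X.509 mutual authentication over TLS with paho-mqtt's tls_set();
# file names and broker address are placeholders, not EdgeHub's actual endpoints.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(ca_certs="ca.pem",        # trusted CA for the management endpoint
               certfile="gateway.crt",   # per-device client certificate
               keyfile="gateway.key")    # matching private key
client.connect("edgehub.example.local", 8883)   # TLS port instead of plain 1883
client.publish("site1/gateway-01/status", "online", qos=1)
client.disconnect()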

source https://www.youtube.com/watch?v=8VVVtDVtq5g

Shuttle edge AI PCs with Jetson Orin Nano for fanless industrial transit and smart city compute

Posted by – December 3, 2025
Category: Exclusive videos

Shuttle Computers uses this interview to present its latest fanless edge PCs built around NVIDIA Jetson Orin and Intel embedded platforms, combining compact form factors with industrial reliability for AI at the edge. Partnering with Silicon Power for wide-temperature DRAM and SSDs, these systems pair stable storage with high TOPS performance in a very small footprint, ready for deployment in demanding field environments. https://www.shuttlecomputers.com/products/spcnv03-industrial-edge-computer


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

In the video, Shuttle highlights its Jetson Orin Nano and Orin NX based edge computers that deliver up to around 40 TOPS and 100 TOPS of AI inference respectively, supported by LPDDR5 memory and industrial-grade NAND. The units are passively cooled with a large top-mounted heatsink, feature dual LAN, HDMI, multiple USB ports and flexible DC power input, making them suitable for real-time computer vision, sensor fusion and other latency-sensitive edge workloads on site.

Use cases discussed range from rolling-stock and transit applications to smart city deployments, where fanless and vibration-resistant systems are essential for reliability over long lifecycles. Industrial-temperature RAM and SSD modules from Silicon Power help keep the platform stable in harsh environments, while Shuttle’s embedded design focuses on EMC compliance, wide operating temperature envelopes and low maintenance in the field.

Recorded at Embedded World North America in Anaheim, this conversation also touches on Shuttle’s North and South American presence from its City of Industry office and the fact that these edge systems are already in mass production. The result is a compact edge AI platform that fits neatly into traffic monitoring, digital signage, video analytics and automation projects where quiet operation and local processing are more important than sheer rack-scale computing.

I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=-EOQ8FX8cL8

IBM Fusion deep archive and tape robotics for HPC and AI at SC25 VMs petabyte archives data platform

Posted by – December 3, 2025
Category: Exclusive videos

IBM’s CTO for Data and AI storage solutions, Chris Meestas, walks through how IBM Fusion brings together containerized applications, virtual machines and AI workloads on a single, software-defined data platform. Fusion abstracts the underlying hardware so enterprises can run storage services on IBM-qualified systems or partner infrastructure, on-premises or in the cloud, while quickly getting to “Day 2” operations like scaling, monitoring and lifecycle management for modern data stacks. https://www.ibm.com/products/storage-fusion/


HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.

A key theme in the video is modernization without a full rip-and-replace of existing environments. Fusion is presented as a framework that overlays current compute and storage, automating deployment of container-based services and virtualized workloads while keeping policy, security and observability consistent. The result is a common control plane for data serving AI training, inferencing, analytics and more, instead of separate silos for each type of workload and each generation of infrastructure

On top of this foundation, IBM is now adding content-aware storage capabilities. Rather than only cataloging file paths and basic metadata, the system can understand “who, what, when and where” inside the stored content itself, enabling AI-style queries directly against the storage layer. Chris uses an example like asking which meetings he had at a past supercomputing conference and who he presented to; the storage stack can surface that context by inspecting the data, allowing more powerful inferencing and reducing the friction between unstructured archives and AI-driven insight

The demo also highlights IBM’s deep archive solution, which couples Fusion’s software-defined stack with high-density tape libraries. Data generated by AI and other workloads in Fusion can be tiered automatically into an S3-compatible, Glacier-like deep archive that remains searchable but optimized for low-cost, long-term retention. Chris mentions a single rack reaching on the order of 61 petabytes with robotic tape handling and aggregate throughput around 13.1 terabytes per hour, making tape a relevant option again for cold data, cyber-resilient backups and long-horizon compliance use cases where energy efficiency and durability matter more than millisecond access
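
Tiering an object into an S3-compatible deep archive can be sketched with boto3 as below; the endpoint, credentials, bucket and StorageClass value are assumptions for illustration, not IBM Fusion's actual configuration:

# Hedged sketch of pushing cold data into an S3-compatible deep archive with boto3.
# Endpoint, credentials, bucket and storage-class value are illustrative placeholders.
import boto3

s3 = boto3.client("s3",
                  endpoint_url="https://archive.example.internal",  # S3-compatible target
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")

with open("training_run_2025-11.tar", "rb") as data:
    s3.put_object(Bucket="cold-archive",
                  Key="ai/training_run_2025-11.tar",
                  Body=data,
                  StorageClass="DEEP_ARCHIVE")   # request the tape-backed cold tier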

Recorded at the Supercomputing 2025 conference in St Louis, this conversation positions IBM Storage as an integral part of large-scale AI and HPC infrastructure rather than a passive backend. Fusion, content-aware storage and deep archive together form a continuum from hot GPU-adjacent datasets through warm object storage down to ultra-cold tape, all managed under a common policy and orchestration layer. For architects building hybrid cloud AI platforms, the video gives a concise look at how IBM is trying to collapse complexity in the data pipeline while keeping options open across hardware, locations and scale in the wider ecosystem

I’m publishing 60+ videos from Supercomputing 2025 #SC25, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all of them (once they’re all queued in the next few days). Check out all my Supercomputing 2025 SC25 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvihnaq98TO55Cbe2VMD9mk8

This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=0W-NHF5d9tQ