This interview dives into NEC ExpEther, a PCIe-over-Ethernet concept that turns GPUs, NVMe drives, and other PCIe endpoints into a remotely attachable resource pool, so CPUs and accelerators don’t have to live in the same chassis to behave like one machine. It’s essentially PCIe fabric disaggregation that uses Ethernet optics for physical reach, with an eye on composable AI/HPC infrastructure and on-demand device attachment. https://www.nec.com/en/press/202511/global_20251113_01.html
—
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices and is integrated into displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers need assurance that their HDMI® products work seamlessly together and deliver the best possible performance, which means sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification, which supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60, along with more high-quality options such as uncompressed full chroma formats like 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.
—
In the demo, PCI Express Gen3 x16 traffic is carried over 100GbE across about 2 km of optical fiber, aiming for “close to PCIe” semantics while pushing the link far beyond normal backplane distances. You see an NVIDIA benchmark running on the remote GPU and the rendered output fed back to the host, which is a simple way to visualize how bandwidth, latency, DMA behavior, and reliability play out when you tunnel PCIe over an Ethernet fabric.
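For a rough sense of the gap the bridge has to hide, here is a back-of-the-envelope sketch using public spec figures only (raw link rates and typical fiber propagation delay), not measurements from the NEC demo:

```python
# Back-of-the-envelope numbers for tunneling PCIe Gen3 x16 over 100GbE and ~2 km of fiber.
# Spec-level figures only; these are not measurements from the NEC demo.

GT_PER_LANE = 8e9        # PCIe Gen3: 8 GT/s per lane
ENCODING = 128 / 130     # 128b/130b line encoding
LANES = 16

pcie_gen3_x16_GBps = GT_PER_LANE * ENCODING * LANES / 8 / 1e9  # ~15.75 GB/s raw
eth_100g_GBps = 100e9 / 8 / 1e9                                # 12.5 GB/s before framing/tunnel overhead

FIBER_US_PER_KM = 5.0    # light in fiber travels ~2e8 m/s, i.e. ~5 us per km
one_way_us = 2 * FIBER_US_PER_KM          # ~10 us over 2 km
round_trip_us = 2 * one_way_us            # ~20 us, vs well under 1 us on a local PCIe slot

print(f"PCIe Gen3 x16 : {pcie_gen3_x16_GBps:.2f} GB/s")
print(f"100GbE        : {eth_100g_GBps:.2f} GB/s (before Ethernet/tunnel framing)")
print(f"2 km of fiber : ~{one_way_us:.0f} us one way, ~{round_trip_us:.0f} us round trip")
```

The rough takeaway: 100GbE is slightly narrower than a full Gen3 x16 slot, and the fiber run alone adds roughly 20 µs of round-trip propagation delay, which is part of why DMA behavior and reliability come up as trade-offs in the demo.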
The practical motivation is infrastructure hygiene: put dense GPU trays where power delivery, direct liquid cooling, acoustic isolation, and physical access control are easier, while keeping CPU nodes nearer to users and sensitive data. A concrete example discussed is a campus deployment at The University of Osaka, where labs keep their own servers but shift hot, noisy accelerators into a centralized GPU pool (NEC describes NVIDIA H100 NVL cards in the trial setup) and connect over 100Gbps fiber as needed. The setup was shown at Supercomputing SC25 in St. Louis, Missouri.
On the roadmap side, the conversation points to scaling the bridge from today’s Gen3 toward PCIe Gen5 x16, with later generations in view, and at that point it starts to look like a scheduling problem as much as a hardware one. If you can hot-plug accelerators on demand, you can tie this into cluster orchestration, device plugins, and queue-aware provisioning, so more of the AI pipeline becomes “attach what you need, run, detach” without rewriting the full stack from scratch, as sketched below.
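To make that “attach, run, detach” idea concrete, here is a minimal, hypothetical sketch of how a job could borrow a pooled GPU for its duration; the pool_attach and pool_detach commands are invented placeholders for whatever the fabric manager exposes, not a real NEC ExpEther or Kubernetes device-plugin API:

```python
# Hypothetical "attach, run, detach" flow against a disaggregated GPU pool.
# pool_attach / pool_detach are invented placeholder commands standing in for
# whatever API the fabric manager actually exposes.

import subprocess
from contextlib import contextmanager

@contextmanager
def remote_gpu(pool: str, gpu_type: str = "H100-NVL"):
    """Borrow a GPU from the pool for the duration of a job, then return it."""
    result = subprocess.run(
        ["pool_attach", "--pool", pool, "--type", gpu_type],  # placeholder CLI
        capture_output=True, text=True, check=True,
    )
    device = result.stdout.strip()  # e.g. the PCI address that appears after hot-plug
    try:
        yield device
    finally:
        subprocess.run(["pool_detach", "--device", device], check=True)  # placeholder CLI

# Usage sketch: the job sees what looks like a local PCIe device while it runs.
# with remote_gpu("campus-pool") as dev:
#     subprocess.run(["python", "train.py", "--device", dev], check=True)
```

In a real cluster this logic would more likely live behind a scheduler-side device plugin, so the orchestrator rather than the job decides when a device gets hot-plugged.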
I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga
This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8). Watch all my DJI Pocket 3 videos here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK
Click the “Super Thanks” button below the video to send a highlighted comment under the video! Brands I film are welcome to support my work in this way 😁
Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY



