Gigabyte Blackwell GB300 NVL72 liquid-cooled racks, RTX PRO 6000 MGX and Ampere AI/HPC clusters at SC25

Posted by – November 19, 2025
Category: Exclusive videos

Giga Computing, the enterprise arm of Gigabyte, uses this booth tour to walk through its current data center stack, from ultra-dense CPU nodes to Blackwell-based GPU racks. The demo starts with a 3U direct liquid-cooled chassis packing ten blades; each blade's motherboard hosts two independent nodes, for 20 single-socket servers in just 3U. Depending on configuration, nodes can be built around AMD Ryzen 7000/9000 or EPYC 4005 series processors, or Intel Xeon 6300-class parts, aimed at web hosting, game hosting and other high-density workloads where rack space and power efficiency matter. More info: https://www.gigacomputing.com/en/
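To get a rough sense of the density on offer, here is a quick Python sketch of the node count per chassis and per rack; the 42U rack height and the 2U reserved for switching are my own illustrative assumptions, not Gigabyte figures.

```python
# Back-of-the-envelope density sketch for the 3U / 10-blade / 2-nodes-per-blade
# chassis described above. Rack height and reserved space are assumptions.

CHASSIS_HEIGHT_U = 3
BLADES_PER_CHASSIS = 10
NODES_PER_BLADE = 2          # two independent single-socket nodes per blade motherboard

RACK_HEIGHT_U = 42           # assumed standard rack
RESERVED_U = 2               # assumed space for ToR switches / management

nodes_per_chassis = BLADES_PER_CHASSIS * NODES_PER_BLADE              # 20
chassis_per_rack = (RACK_HEIGHT_U - RESERVED_U) // CHASSIS_HEIGHT_U
nodes_per_rack = chassis_per_rack * nodes_per_chassis

print(f"{nodes_per_chassis} single-socket nodes per 3U chassis")
print(f"{chassis_per_rack} chassis -> {nodes_per_rack} nodes in one rack")
```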



The tour then highlights a self-contained liquid-cooled EPYC 9005 workstation designed for quiet, desk-side AI and media workloads. CPU, memory, PSU and up to four GPUs sit on a closed direct-liquid-cooling loop whose radiator, pump and fans are engineered to keep acoustic noise around 50 dB while sustaining full load. Front NVMe bays, optional M.2 boot devices, 10GbE networking and BMC remote management turn it into a compact studio or lab node for AI model development, 3D rendering or video post-production without needing data center plumbing. Filmed at Supercomputing 2025 in St. Louis, the demo shows how far workstation-class hardware has moved toward data center-class thermals.
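Since the workstation exposes a BMC for remote management, a standard Redfish query is one way to watch loop temperatures and fan speeds under load. The sketch below targets the generic DMTF Redfish Thermal resource; the BMC address, credentials and chassis path are placeholders, not Gigabyte's documented endpoints.

```python
# Minimal sketch of polling a BMC over Redfish for temperature and fan readings.
# Address, credentials and the "Self" chassis ID are hypothetical; the exact
# resource layout depends on the BMC firmware.

import requests

BMC = "https://192.0.2.10"            # placeholder BMC address
AUTH = ("admin", "password")          # placeholder credentials

resp = requests.get(f"{BMC}/redfish/v1/Chassis/Self/Thermal",
                    auth=AUTH, verify=False, timeout=10)
thermal = resp.json()

for sensor in thermal.get("Temperatures", []):
    print(f'{sensor.get("Name")}: {sensor.get("ReadingCelsius")} °C')

for fan in thermal.get("Fans", []):
    print(f'{fan.get("Name")}: {fan.get("Reading")} {fan.get("ReadingUnits", "")}')
```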

On the memory side, Giga Computing shows a 48-DIMM 1U/2U EPYC 9005 platform: two DIMMs per channel across two sockets, delivering several terabytes of DDR5 in a single node. This class of server targets in-memory databases, caching tiers, large analytics workloads and virtualization clusters that are memory-bound rather than GPU-bound. Nearby, an 8U HGX/OAM tray design separates the compute and GPU tiers, supporting NVIDIA B200/B300 or AMD Instinct MI350X/MI355X accelerators, with PCIe (PLX) switches, NVLink/NVSwitch topologies and dense front I/O tuned for training and inference clusters.
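To put that capacity into numbers, here is the simple math for 48 DIMM slots; the per-DIMM sizes are common DDR5 RDIMM capacities chosen for illustration rather than a Gigabyte spec.

```python
# Rough capacity math for the 48-DIMM dual-socket EPYC 9005 platform:
# 12 memory channels per socket, two DIMMs per channel, two sockets.

SOCKETS = 2
CHANNELS_PER_SOCKET = 12
DIMMS_PER_CHANNEL = 2

dimm_slots = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL   # 48

for dimm_gb in (64, 96, 128):                                    # assumed RDIMM sizes
    total_tb = dimm_slots * dimm_gb / 1024
    print(f"{dimm_slots} x {dimm_gb} GB = {total_tb:.1f} TB per node")
```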

For smaller AI clusters or visualization backends, the booth introduces the NVIDIA MGX-based XL44-SX2 system, populated with up to eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. A built-in ConnectX-8 PCIe Gen6 switchboard with multiple 400G QSFP ports ties GPUs and network together; each CX8 ASIC is wired to two GPUs and two ports, mirroring HGX-style topologies in a more compact chassis. Dual Intel Xeon 6700/6500-series CPUs, dense DDR5, Gen5 NVMe bays and BlueField-3 DPU options make the platform relevant for generative AI, 3D pipelines and scientific computing in SMB data centers that don’t yet need full rack-scale Blackwell deployments.
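The two-GPUs-plus-two-ports-per-CX8 wiring is easy to express as a small topology table. The Python sketch below simply encodes that pairing and the resulting aggregate network bandwidth; the device labels (gpu0…, cx8_0…) are illustrative, not real enumeration from the system.

```python
# Sketch of the GPU-to-NIC pairing described above: each ConnectX-8 ASIC on the
# switchboard serves two RTX PRO 6000 GPUs and two 400G ports.

GPUS = 8
GPUS_PER_CX8 = 2
PORTS_PER_CX8 = 2
PORT_GBPS = 400

cx8_count = GPUS // GPUS_PER_CX8
topology = {
    f"cx8_{i}": {
        "gpus": [f"gpu{i * GPUS_PER_CX8 + j}" for j in range(GPUS_PER_CX8)],
        "ports": [f"port{i * PORTS_PER_CX8 + j}" for j in range(PORTS_PER_CX8)],
    }
    for i in range(cx8_count)
}

for asic, wiring in topology.items():
    print(asic, wiring)

print(f"Aggregate network bandwidth: {cx8_count * PORTS_PER_CX8 * PORT_GBPS} Gbps")
```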

At rack scale, Giga Computing showcases the fully liquid-cooled NVIDIA GB300 NVL72 architecture: 18 compute nodes with Blackwell Ultra B300 GPUs and Grace CPUs, nine NVLink switch trays, CDUs at the base and an OCP Open Rack busbar spine. All 72 GPUs are interconnected via NVLink so the rack behaves like a single accelerator, while facility water loops attach to the CDU heat exchangers. The tour finishes with PCIe GPU servers that take H200-class GPUs, RTX PRO 6000 or Intel Gaudi 3 cards; Xeon 6 platforms with CXL memory expansion; and AmpereOne-based servers optimized for high-core-count Arm inference. Together with the GPM management software layer for Kubernetes, Slurm and MLOps orchestration, the booth underlines Giga Computing’s push toward dense, liquid-cooled, rack-ready AI and HPC infrastructure.
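For reference, the rack composition in the paragraph above works out as follows; the two-Grace-CPUs-per-tray figure follows the publicly stated 36-CPU/72-GPU ratio of the NVL72 design rather than anything counted at the booth.

```python
# Quick composition check for the GB300 NVL72 rack: 18 compute trays and
# 9 NVLink switch trays, with all 72 GPUs in a single NVLink domain.

COMPUTE_TRAYS = 18
SWITCH_TRAYS = 9
TOTAL_GPUS = 72
GRACE_PER_TRAY = 2        # assumed from the 36-Grace / 72-GPU NVL72 ratio

gpus_per_tray = TOTAL_GPUS // COMPUTE_TRAYS       # 4 Blackwell Ultra B300 per tray
grace_cpus = COMPUTE_TRAYS * GRACE_PER_TRAY       # 36 Grace CPUs

print(f"{COMPUTE_TRAYS} compute trays x {gpus_per_tray} GPUs = {TOTAL_GPUS} GPUs")
print(f"{grace_cpus} Grace CPUs, {SWITCH_TRAYS} NVLink switch trays, one NVLink domain")
```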

Publishing 50+ videos from Supercomputing 2025 (SC25, St. Louis) and from other recent events, about four per day at 5AM, 11AM, 5PM and 11PM CET/EST.
Join https://www.youtube.com/charbax/join for early access to all my queued videos.

Watch my full SC25 playlist:

Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY

source https://www.youtube.com/watch?v=uImTUVjGHFA