MSI uses this interview to show how its long history in graphics and client hardware is now feeding a serious push into data-center and AI infrastructure. At the booth, the company focuses on NVIDIA MGX-based GPU servers populated with RTX Pro 6000 Blackwell Server Edition accelerators: eight GPUs per node tied into NVIDIA CX8 networking with eight 400 GbE ports, giving dense, rack-scale inference throughput for enterprises that want to run their own language models rather than rely purely on the public cloud. These PCIe-based accelerators, each carrying 96 GB of GDDR7, sit in systems tuned to NVIDIA's MGX reference design, so thermals, power delivery and PCIe lane topology are pre-balanced for sustained AI workloads (see MSI's MGX overview: https://www.msi.com/Landing/NVIDIA-MGX).
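To put the networking figure in context, here is a minimal back-of-the-envelope sketch in Python. Only the eight GPUs and eight 400 GbE ports come from the article; the 1:1 GPU-to-port mapping is an assumption for illustration.

```python
# Aggregate fabric bandwidth for one MGX node, using the figures quoted
# above: 8 GPUs and 8x 400 GbE ports. The 1:1 GPU-to-port pairing is an
# illustrative assumption, not a confirmed MSI topology.
GPUS_PER_NODE = 8
PORTS_PER_NODE = 8
PORT_GBPS = 400  # 400 GbE line rate, gigabits per second

aggregate_gbps = PORTS_PER_NODE * PORT_GBPS    # 3,200 Gb/s per node
aggregate_gbs = aggregate_gbps / 8             # ~400 GB/s per node
per_gpu_gbps = aggregate_gbps / GPUS_PER_NODE  # 400 Gb/s per GPU

print(f"Aggregate node bandwidth: {aggregate_gbps} Gb/s (~{aggregate_gbs:.0f} GB/s)")
print(f"Per-GPU share at 1:1 GPU-to-port mapping: {per_gpu_gbps:.0f} Gb/s")
```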
The conversation dives into how an 8× RTX Pro 6000 MGX server becomes a 7–8 kW box, implying 40–50 kW per rack when fully populated (roughly five to six nodes), and why that profile is ideal for high-volume inference, light training and fine-tuning of LLMs. By combining passive 600 W GPUs with CX8-class networking and 400 GbE QSFP ports, MSPs and large enterprises can build clusters that saturate GPU-to-NIC bandwidth without resorting to HGX-class systems, at a lower total cost of ownership but still with data-center-grade density and reliability. This positions MSI squarely in the sweet spot for customers who want to bring agentic AI, retrieval-augmented generation and high-throughput model serving onto their own infrastructure.
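The power figures follow from simple arithmetic. Below is a rough sketch in Python; the host-overhead range (CPUs, memory, NICs, fans, PSU losses) is an illustrative assumption, since the article quotes only the 7–8 kW node total and the 40–50 kW rack figure.

```python
# Node- and rack-level power math from the figures quoted above.
GPU_WATTS = 600  # passive RTX Pro 6000 per-card power
GPUS_PER_NODE = 8
HOST_OVERHEAD_WATTS = (1500, 3000)  # assumed host power range, not an MSI figure

gpu_watts = GPU_WATTS * GPUS_PER_NODE  # 4,800 W of GPU power alone
node_watts = [gpu_watts + oh for oh in HOST_OVERHEAD_WATTS]

RACK_BUDGET_WATTS = 45_000  # midpoint of the 40-50 kW rack figure
nodes_per_rack = RACK_BUDGET_WATTS // max(node_watts)

print(f"GPU power per node:  {gpu_watts / 1000:.1f} kW")
print(f"Node power estimate: {node_watts[0] / 1000:.1f}-{node_watts[1] / 1000:.1f} kW")
print(f"Nodes per 45 kW rack: ~{nodes_per_rack}")
```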
A key highlight is MSI's Grace Blackwell GB300 workstation, marketed as the CT60-class AI Station, which essentially brings DGX-level architecture under a desk. It combines an Arm-based NVIDIA Grace CPU with a B300 Blackwell GPU on a single module, linked over NVLink-C2C into one coherent memory space, with LPDDR5X on the CPU side and HBM3e on the GPU side adding up to roughly 784 GB of unified memory. That lets developers fit very large language models into a single address space for experimentation, fine-tuning and evaluation without sharding across multiple GPUs. MSI plans to ship this as a 1.6 kW water-cooled workstation, quiet enough for an office yet equipped with four M.2 NVMe slots and dual 400 GbE ports, so code and models can be developed locally and then pushed unchanged to full GB300/HGX deployments in the cloud.
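To make the single-address-space point concrete, here is a minimal fit check in Python. Only the roughly 784 GB pool comes from the article; the model sizes, data types and the headroom reserved for KV cache and activations are illustrative assumptions.

```python
# Rough check of whether a model's weights fit in the ~784 GB coherent
# CPU+GPU pool described above, without sharding across GPUs.
UNIFIED_MEMORY_GB = 784
RESERVE_GB = 100  # assumed headroom for KV cache, activations and runtime

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "fp4": 0.5}

def fits(params_billion: float, dtype: str) -> bool:
    """True if the raw weights plus the reserve fit in the unified pool."""
    weights_gb = params_billion * BYTES_PER_PARAM[dtype]  # 1B params ~= 1 GB at fp8
    return weights_gb + RESERVE_GB <= UNIFIED_MEMORY_GB

for params, dtype in [(70, "bf16"), (405, "bf16"), (405, "fp8"), (671, "fp8")]:
    verdict = "fits" if fits(params, dtype) else "needs sharding or a smaller dtype"
    print(f"{params}B @ {dtype}: {verdict}")
```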
For teams that don’t need GB300-class memory footprints, MSI also shows the EdgeXpert system built on NVIDIA’s GB10 Grace Blackwell “Spark” platform. This compact edge box pairs a 20-core Grace CPU with a smaller Blackwell GPU and 128 GB of unified memory, targeting about a petaFLOP of FP4 AI performance in a desktop-friendly form factor. It’s aimed at local prototyping, on-prem inference and edge deployments where developers want the same Grace-Blackwell software stack they use in the data center, but in a lower-power box that can sit under a desk. Seen together with the larger AI Station, the St. Louis SC25 booth story is really about giving AI teams a continuum from compact GB10 nodes through GB300 workstations up to full MGX server racks.
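The petaFLOP figure invites a quick ceiling estimate. The sketch below uses the common rule of thumb of roughly two FLOPs per parameter per generated token for a decoder forward pass, with an assumed utilization factor; real throughput is usually memory-bandwidth-bound and lower, so treat this strictly as an upper bound.

```python
# Compute-bound ceiling for token generation on ~1 PFLOP of FP4,
# per the figure quoted above. Uses the ~2 FLOPs/parameter/token
# rule of thumb; the 50% utilization (mfu) is an assumption.
PEAK_FLOPS = 1e15  # ~1 petaFLOP (FP4)

def max_tokens_per_s(params_billion: float, mfu: float = 0.5) -> float:
    """Upper-bound decode rate at an assumed FLOPs utilization."""
    flops_per_token = 2.0 * params_billion * 1e9
    return PEAK_FLOPS * mfu / flops_per_token

for params in [8, 70, 120]:
    print(f"{params}B params: <= {max_tokens_per_s(params):,.0f} tokens/s (compute bound)")
```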
The tour closes on MSI's broader server roadmap: OCP Open Rack v3 21-inch racks with 48 V busbars and centralized power shelves, plus DC-MHS host processor modules that let the same chassis accept Intel Xeon 6900/6700-series or AMD EPYC "Turin" CPUs. Extended-volume air coolers (EVAC) allow MSI to air-cool 500 W CPUs with very low fan power, which matters once racks cross the 17 kW threshold. MSI's representative notes that, outside NVIDIA's Grace-based platforms, customer demand for Arm servers is still limited, so most of their modular boards focus on x86 today. But with MGX GPU nodes, Grace Blackwell workstations and OCP-ready compute sleds, MSI is clearly positioning itself as a scalable player in high-performance and AI computing rather than just a consumer PC brand.
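The case for 48 V distribution shows up directly in busbar current. Here is a small sketch in Python using the rack power levels mentioned in this article, with a 12 V column added purely for comparison:

```python
# Busbar current at the rack power levels discussed above. The 12 V
# comparison illustrates why ORv3 standardizes on 48 V: a quarter of
# the current for the same power, so roughly 1/16th the I^2*R loss.
RACK_POWERS_KW = [17, 40, 50]  # figures quoted in the article

for kw in RACK_POWERS_KW:
    watts = kw * 1000
    print(f"{kw} kW rack: {watts / 48:,.0f} A at 48 V vs {watts / 12:,.0f} A at 12 V")
```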



