DDN positions itself as a data infrastructure backbone for both traditional HPC and large-scale AI, drawing on more than two decades of building supercomputers with research labs, national centers and partners like NVIDIA. In this interview, Jason Brown explains how the company has evolved into a “data intelligence platform” vendor, powering GPU-dense environments from on-prem clusters to AI factories and NeoCloud providers, with a focus on high throughput, low latency and predictable scaling rather than just raw storage capacity.
https://www.ddn.com/products/ddn-enterprise-ai-hyperpod/
—
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices, integrated into displays, set-top boxes, laptops, audio/video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers need assurance that their HDMI® products work seamlessly together and deliver the best possible performance, which means sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification, which supports 96 Gbps of bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60, along with more high-quality options such as uncompressed full-chroma formats like 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.
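As a quick sanity check on those format claims, the arithmetic below (my own rough calculation, not from the HDMI Forum) estimates the uncompressed active-video payload of two of the cited formats against the 96 Gbps ceiling:

```python
# Sanity check: uncompressed active-video bandwidth for formats cited above.
# Rough arithmetic only; real HDMI link rates also carry blanking intervals,
# audio and overhead, so actual requirements are somewhat higher.

def active_video_gbps(w: int, h: int, fps: int, bits_per_component: int) -> float:
    """Uncompressed 4:4:4 payload: 3 color components per pixel."""
    return w * h * fps * 3 * bits_per_component / 1e9

print(f"8K@60 4:4:4 10-bit : {active_video_gbps(7680, 4320, 60, 10):.1f} Gbps")   # ~59.7
print(f"4K@240 4:4:4 12-bit: {active_video_gbps(3840, 2160, 240, 12):.1f} Gbps")  # ~71.7
# Both sit comfortably under the 96 Gbps of the HDMI 2.2 Specification.
```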
—
A big part of the discussion covers AI cloud providers that operate as GPU gigafactories: CoreWeave, G42, Lambda, Scaleway and others renting GPU instances rather than generic IaaS. These environments are hitting limits not just on budget but on power, cooling and data-center footprint, so DDN optimizes for performance per watt and per rack by keeping GPUs fed with data rather than letting them sit idle waiting on storage. Some customers are already generating on the order of a petabyte of data per day from AI pipelines, which forces a rethink of I/O patterns, metadata handling and data locality across the entire stack rather than only tuning compute; a rough sense of what that rate implies is sketched below.
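To make the petabyte-a-day figure concrete, here is a back-of-envelope sketch (my own arithmetic, with an assumed burst factor, not a number from the interview) of the sustained bandwidth it implies:

```python
# Back-of-envelope: sustained bandwidth implied by 1 PB/day of generated data.
# Illustrative arithmetic only; the burst factor below is an assumption.

PB = 10**15                      # 1 petabyte in bytes (decimal)
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 s

sustained_bps = PB / SECONDS_PER_DAY
print(f"Sustained ingest: {sustained_bps / 1e9:.1f} GB/s")   # ~11.6 GB/s

# Real pipelines are bursty, so provisioned peak bandwidth must be higher.
# Assuming (hypothetically) a 5x peak-to-average burst factor:
peak_bps = sustained_bps * 5
print(f"Provisioned peak: {peak_bps / 1e9:.1f} GB/s")        # ~57.9 GB/s
```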
The new DDN Enterprise AI HyperPOD is presented as a turnkey RAG and inference appliance built jointly with NVIDIA and Supermicro: essentially a pre-integrated AI data platform you roll into the rack and power on. Under the hood it combines NVIDIA RTX-class GPUs (moving toward the RTX PRO 6000 Blackwell Server Edition and BlueField-3 DPUs), NVIDIA AI Enterprise services like NIM and NeMo, Supermicro AI-optimized servers and DDN’s Infinia object-scale software. Configurations span from extra-small four-GPU systems with ~0.5 PB of storage up to 256 GPUs with over 12 PB in a single rack, giving enterprises and sovereign AI clouds a modular way to scale RAG, agentic workloads and high-throughput inference without building the pipeline themselves.
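For perspective on that span, a quick ratio calculation over the two quoted endpoints (derived from the figures above, not a published DDN sizing guide):

```python
# Storage-to-GPU ratio across the quoted HyperPOD extremes.
# Derived from the figures in the text; illustrative only.

configs = {
    "extra-small": {"gpus": 4,   "capacity_pb": 0.5},
    "largest":     {"gpus": 256, "capacity_pb": 12.0},
}

for name, c in configs.items():
    tb_per_gpu = c["capacity_pb"] * 1000 / c["gpus"]
    print(f"{name}: {c['gpus']} GPUs, {c['capacity_pb']} PB "
          f"-> {tb_per_gpu:.0f} TB per GPU")
# extra-small -> 125 TB/GPU; largest -> ~47 TB/GPU, i.e. capacity scales
# sub-linearly with GPU count across the range.
```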
Brown then ties HyperPOD back into the broader DDN Data Intelligence Platform, which unifies EXAScaler-based file systems and Infinia object storage, now delivered through appliances like the AI400X3 and the Infinia 2.x release. These systems are tuned to keep GPUs 95–99% utilized by accelerating ingestion, metadata operations and KV-cache stages, rather than letting data stalls waste expensive accelerators (the sketch below shows why those last few points matter). Features such as multi-tenant isolation, observability hooks, and integration with Spark, Hadoop and cloud services (like Google Cloud Managed Lustre) are framed as the necessary plumbing that lets the same infrastructure support HPC simulation alongside large-scale AI training, analytics and inference on a shared platform.
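Why do those last few utilization points matter? A quick sketch, assuming a hypothetical 1,024-GPU cluster (my own illustrative numbers, not figures from the interview):

```python
# Cost of data stalls: idle GPU-hours per day at different utilization levels.
# Hypothetical cluster size; illustrative arithmetic only.

GPUS = 1024
HOURS_PER_DAY = 24

def idle_gpu_hours(utilization: float) -> float:
    """GPU-hours per day lost to stalls at a given utilization fraction."""
    return GPUS * HOURS_PER_DAY * (1 - utilization)

for util in (0.80, 0.95, 0.99):
    print(f"{util:.0%} utilized -> {idle_gpu_hours(util):,.0f} idle GPU-hours/day")

# 80% -> 4,915 idle GPU-hours/day; 99% -> ~246. At cloud GPU rates of a few
# dollars per hour, the gap between those two compounds quickly.
```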
Filmed at the SC25 supercomputing conference in St. Louis, the video also walks past a Supermicro HGX SuperPOD-style AI factory rack, illustrating how DDN storage slots into NVIDIA-aligned reference architectures for clusters with thousands of GPUs. At the booth, DDN demos a full RAG pipeline showing how documents flow through Infinia into an inference service, as well as a financial-services analytics demo that ingests live market and news data to generate insights in real time. The takeaway is that organizations already running DDN for HPC research or simulation can repurpose the same data platform to stand up RAG, LLM inference and other AI workloads, turning existing supercomputing environments into AI factories with consistent data management and I/O behavior across deployments.
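For readers unfamiliar with the shape of such a demo, here is a minimal, self-contained toy of the RAG flow it illustrates (ingest, index, retrieve, prompt assembly). This is a generic sketch, not DDN's or NVIDIA's implementation; the hash-based embedding and in-memory index merely stand in for a real embedding model, Infinia-backed storage and an inference endpoint:

```python
# Toy RAG pipeline: ingest documents, index them, retrieve context for a
# query, and assemble a prompt for an inference service. Generic sketch only;
# a real deployment would use an object store, a proper embedding model, and
# an inference endpoint such as an NVIDIA NIM service.
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Stand-in embedding: hash word tokens into a fixed-size unit vector."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class DocIndex:
    """In-memory stand-in for the object store + vector index stage."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def ingest(self, doc: str) -> None:
        self.docs.append((doc, embed(doc)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

index = DocIndex()
index.ingest("Object storage holds the source documents for AI pipelines.")
index.ingest("A rack-scale system combines GPUs, DPUs and storage.")
index.ingest("Market data feeds can be analyzed in real time.")

query = "What stores the documents in the pipeline?"
context = "\n".join(index.retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # This assembled prompt would be sent to the inference service.
```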
I’m publishing 90+ videos from Embedded World North America 2025, uploading about 4 per day at 5AM/11AM/5PM/11PM CET/EST. Join https://www.youtube.com/charbax/join for Early Access to all 90 videos (once they’re all queued in the next few days). Check out all my Embedded World North America videos in my Embedded World playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga