Availability

NVIDIA's high-performance data center GPUs include the H200, H100, and GB200, designed for AI, HPC, and cloud workloads. The H200 builds on the H100 with greater memory bandwidth and capacity, making it better suited to large-scale AI models. The H100, based on the Hopper architecture, remains a key choice for AI training and inference, with powerful Tensor Cores and Transformer Engine optimizations. The GB200, part of NVIDIA's Blackwell architecture, pushes AI computing further still, pairing Blackwell GPUs with a Grace CPU for greater efficiency in generative AI and large-scale computation.

32 x H200 nodes available (256 GPUs)

6 x H100 nodes available (48 GPUs)

GB200s available in early fall
