- Powered by NVIDIA HGX H200 8-GPUs in a 6U chassis
- 2x 4th Gen Intel® Xeon® Scalable processors
- Delivers industry-leading AI performance of 32 PFlops
- Direct liquid cooling design available with over 80% cold plate coverage
KR6288-X2 with 4th Gen Intel® Xeon® Scalable processors is an advanced AI system built for hyperscale data centers, delivering high performance with NVIDIA HGX H200 8-GPUs. The server provides an industry-leading 32 PFlops of AI performance and lightning-fast CPU-to-GPU interconnect bandwidth, with the H200 Transformer Engine supercharging training speeds for GPT-style large language models. Its optimized power efficiency and modular design with flexible configuration make it ideal for the most demanding AI tasks across scenarios such as hyperscale data centers, AI model training, and metaverse workloads.
Unprecedented AI Performance
- Powered by NVIDIA HGX H200 8-GPUs
- 2x 4th Gen Intel® Xeon® Scalable processors
- 32 PFlops of industry-leading AI performance
- H200 Transformer Engine delivers supercharged training speed for GPT large language models
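The headline 32 PFlops figure can be reconstructed from per-GPU peaks; a minimal sketch, assuming the commonly published ~3,958 TFLOPS FP8 (with structured sparsity) peak of an H200 SXM GPU, which is not stated in this datasheet:

```python
# Hedged sketch: reconstructing the headline 32 PFlops figure.
# Assumption: each H200 SXM GPU peaks at ~3,958 TFLOPS of FP8 compute
# with structured sparsity (not stated in this datasheet).
PER_GPU_FP8_TFLOPS = 3958   # assumed per-GPU peak, FP8 w/ sparsity
NUM_GPUS = 8                # one HGX H200 8-GPU baseboard

total_pflops = PER_GPU_FP8_TFLOPS * NUM_GPUS / 1000  # TFLOPS -> PFLOPS
print(f"{total_pflops:.1f} PFLOPS")  # ~31.7, marketed as 32 PFlops
```

Under that assumption, 8 GPUs land at roughly 31.7 PFLOPS, which rounds to the advertised 32 PFlops.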
Leading Architecture Design
- Lightning-fast CPU-to-GPU interconnect bandwidth
- Ultra-high scalable inter-node networking with up to 4.0 Tbps non-blocking bandwidth
- Optimized cluster-level architecture with 8:8:2 ratio of GPU to compute network to storage network
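One way the 4.0 Tbps figure could decompose under the 8:8:2 GPU : compute-NIC : storage-NIC ratio above; a hedged sketch assuming 400 Gbps ports (e.g. NDR InfiniBand or 400GbE), a port speed this datasheet does not specify:

```python
# Hedged sketch: decomposing the 4.0 Tbps inter-node bandwidth figure
# under the 8:8:2 GPU : compute-NIC : storage-NIC ratio.
# Assumption: each NIC is a 400 Gbps port (not stated in the datasheet).
PORT_GBPS = 400      # assumed per-port speed
COMPUTE_NICS = 8     # the middle "8" in the 8:8:2 ratio (one per GPU)
STORAGE_NICS = 2     # the "2" in the 8:8:2 ratio

total_tbps = (COMPUTE_NICS + STORAGE_NICS) * PORT_GBPS / 1000  # Gbps -> Tbps
print(f"{total_tbps:.1f} Tbps")  # 4.0 Tbps
```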
Optimized Energy Efficiency
- Low air-cooling heat-dissipation overhead and high power efficiency
- Separate 54V and 12V power supplies with N+N redundancy, reducing power-conversion loss
- Direct liquid cooling design with over 80% cold plate coverage keeps PUE ≤1.15
Flexible Configurations for AI Scenarios
- Fully modular design and flexible configurations support both on-premises and cloud deployments
- Easily handles large-scale model training for models such as GPT-3, MT-NLG, Stable Diffusion, and AlphaFold
- Diversified SuperPod solutions accelerate cutting-edge innovation in AIGC, AI4Science, and the metaverse
Specifications
Model | KR6288-X2 |
---|---|
Form Factor | 6U rack server |
Processor | 2x 4th/5th Gen Intel® Xeon® Scalable Processors, up to 350W TDP |
Memory | Up to 32x DDR5 RDIMM, 5600 MT/s |
GPU | NVIDIA HGX H200 8-GPU, up to 700W TDP per GPU |
Storage | 8x NVMe U.2; or 16x NVMe/SATA U.2; or 8x NVMe U.2 + 16x SATA U.2 (RAID) |
Network Interface | 1x OCP 3.0, supports NCSI |
PCIe | 10x PCIe 5.0 x16; one PCIe 5.0 x16 slot can be replaced with two PCIe 5.0 x8 slots; optional support for BlueField-3, CX7, and various SmartNICs |
Cooling | Air cooling or cold-plate liquid cooling |
Management | DC-SCM BMC management module with ASPEED AST2600 |
Security | TPM 2.0 (Trusted Platform Module) |
Power Supply | 12V: 2700W/3200W CRPS, 1+1 redundant; 54V: 3200W CRPS, 3+3 redundant |
Dimensions | 482mm (W) x 263mm (H) x 855mm (D) |
* All configurations are subject to change without notice