KR6288-E2
6U Extreme AI Server with HGX H200 8-GPU
- Powered by the NVIDIA HGX H200 8-GPU module in 6U
- 2x AMD EPYC™ 9004
- Delivers an industry-leading 32 PFlops of AI performance
- Direct liquid cooling design available with over 80% cold plate coverage
The KR6288-E2, built on AMD EPYC™ 9004 processors, is an advanced AI system supporting the NVIDIA HGX H200 8-GPU baseboard. It delivers an industry-leading 32 PFlops of AI performance and fast CPU-to-GPU interconnect bandwidth, while the H200 Transformer Engine accelerates training for GPT-class large language models. Optimized power efficiency and a modular, flexibly configurable design make it well suited to the most demanding AI workloads, from hyperscale data centers to AI model training and metaverse applications.
Unprecedented AI Performance
- Powered by the NVIDIA HGX H200 8-GPU module in 6U, GPU TDP up to 700W
- 2x AMD EPYC™ 9004 processors
- 32 PFlops of industry-leading AI performance
- H200 Transformer Engine delivers supercharged training speed for GPT large language models
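As a back-of-envelope check on the headline figure, the aggregate comes from summing per-GPU peak throughput. A minimal sketch, assuming roughly 3,958 TFLOPS of sparse FP8 Tensor Core throughput per H200 SXM (NVIDIA's published peak, an assumption not stated in this datasheet):

```python
# Back-of-envelope check of the 32 PFlops figure.
# Assumption: ~3,958 TFLOPS FP8 (with sparsity) per H200 SXM GPU.
PER_GPU_FP8_TFLOPS = 3958
NUM_GPUS = 8

total_pflops = PER_GPU_FP8_TFLOPS * NUM_GPUS / 1000  # TFLOPS -> PFLOPS
print(f"Aggregate FP8 throughput: {total_pflops:.1f} PFlops")  # ~31.7, marketed as 32
```

The exact marketed number depends on which precision and sparsity mode the vendor counts; dense or higher-precision modes give proportionally lower peaks.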
Leading Architecture Design
- Lightning-fast CPU-to-GPU interconnect bandwidth
- Ultra-high scalable inter-node networking with up to 4.0 Tbps non-blocking bandwidth
- Optimized cluster-level architecture with an 8:8:2 ratio of GPUs to compute-network NICs to storage-network NICs
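The 4.0 Tbps figure follows from the per-node NIC budget implied by the 8:8:2 ratio. A minimal sketch, assuming 400 Gbps per NIC (e.g. NDR InfiniBand or 400GbE, an assumption not stated above):

```python
# Per-node NIC budget under the 8:8:2 GPU : compute-NIC : storage-NIC ratio.
# Assumption: 400 Gbps per NIC (e.g. NDR InfiniBand / 400GbE).
GPUS, COMPUTE_NICS, STORAGE_NICS = 8, 8, 2
NIC_GBPS = 400

total_tbps = (COMPUTE_NICS + STORAGE_NICS) * NIC_GBPS / 1000  # Gbps -> Tbps
print(f"Non-blocking per-node bandwidth: {total_tbps} Tbps")  # 4.0 Tbps
```

One compute NIC per GPU keeps all-reduce traffic local to each GPU's PCIe switch, which is the usual rationale for the 1:1 GPU-to-NIC pairing.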
Optimized Energy Efficiency
- Low air-cooled heat dissipation overhead and high power efficiency
- Separated 54V and 12V power supplies with N+N redundancy, reducing power conversion loss
- Direct liquid cooling design with over 80% cold plate coverage keeps PUE ≤1.15
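PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, so PUE ≤ 1.15 means cooling and distribution overhead stay within 15% of the IT load. A minimal sketch with illustrative, hypothetical numbers:

```python
# PUE = total facility power / IT equipment power.
# Hypothetical numbers chosen to illustrate what PUE = 1.15 implies.
it_power_kw = 100.0   # assumed IT load of a liquid-cooled deployment
overhead_kw = 15.0    # cooling + power-distribution losses at that load

pue = (it_power_kw + overhead_kw) / it_power_kw
print(f"PUE: {pue:.2f}")  # 1.15
```

Direct liquid cooling reaches this range because the cold plates remove most heat via warm water, cutting the chiller and fan energy that dominates air-cooled overhead.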
Flexible Configurations for AI Scenarios
- Fully modular design and flexible configurations satisfy both on-premises and cloud deployments
- Easily handles large-scale model training for models such as GPT-3, MT-NLG, Stable Diffusion, and AlphaFold
- Diversified SuperPod solutions accelerate cutting-edge innovation, including AIGC, AI4Science, and the metaverse
Specifications
Model | KR6288-E2
---|---
Form Factor | 6U rack server
Processor | 2x AMD EPYC™ 9004, TDP up to 400W
Memory | Up to 24x DDR5 RDIMM, 4800 MT/s
GPU | NVIDIA HGX H200 8-GPU, TDP up to 700W
Storage | 8x NVMe U.2; or 16x NVMe/SATA U.2; or 8x NVMe U.2 + 16x SATA U.2 (RAID)
OCP | 1x OCP 3.0, supports NC-SI
PCIe | 10x PCIe 5.0 x16; one PCIe 5.0 x16 slot can be replaced with two PCIe 5.0 x8 slots; optional support for BlueField-3, ConnectX-7, and various SmartNICs
Management | DC-SCM BMC management module with ASPEED AST2600
Security | TPM 2.0 (Trusted Platform Module)
Cooling | Air cooling or cold-plate liquid cooling
PSU | 12V: 2700W/3200W CRPS, (1+1) redundant; 54V: 3200W CRPS, (3+3) redundant
Dimensions | 447mm (W) x 440mm (H) x 860mm (D)
* All configurations are subject to change without notice