Breakthrough Performance for Enterprise AI and LLMs in the Data Center
The KR6268 AI server supports eight NVIDIA PCIe GPUs, including the NVIDIA H200 NVL and the latest NVIDIA RTX PRO™ 6000 Blackwell Server Edition, delivering the compute power and high-bandwidth memory needed for a wide range of large-scale parallel computing and data-processing workloads.
The Universal Data Center Platform for Enterprise AI
Accelerate enterprise workloads, from agentic AI and LLM inference to industrial AI and digital twins, with up to eight NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs, delivering the breakthrough performance and energy efficiency of the NVIDIA Blackwell architecture.
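As a quick sanity check before scheduling multi-GPU inference or training jobs on a configuration like this, the minimal PyTorch sketch below enumerates the CUDA devices the system exposes and reports their memory. It is illustrative only and assumes the NVIDIA driver and a CUDA-enabled PyTorch build are already installed; the expected count of eight GPUs reflects a fully populated chassis.

```python
import torch

# Minimal sketch: list the PCIe GPUs visible to CUDA on the host.
# Assumes the NVIDIA driver and a CUDA-enabled PyTorch build are installed.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPUs detected")

count = torch.cuda.device_count()
print(f"Detected {count} GPU(s); a fully populated server exposes 8.")

for idx in range(count):
    props = torch.cuda.get_device_properties(idx)
    # Report each device's name and total memory in GiB.
    print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
```

A check like this is typically run once after provisioning to confirm that all GPUs are enumerated before handing the node to an inference or training framework.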
Available Models
| Model | KR6268-X3 |
|---|---|
| Form Factor | 6U rack server |
| Processor | 2x Intel® Xeon® 6 processors (6500P/6700P series, 2S), TDP up to 350W |
| Memory | 32x DDR5 6400 MT/s DIMMs |
| GPU | 8x PCIe GPUs; supports NVIDIA RTX PRO™ 6000 Blackwell Server Edition and NVIDIA H200 NVL |
| Storage | 12x 3.5"/2.5" SAS 3.0/SATA HDD/NVMe<br>24x 2.5" SAS 3.0/SATA HDD<br>Up to 16x E3.S NVMe |
| PCIe | 8x double-width FHHL PCIe 5.0 GPU slots + 2x PCIe 5.0 x16 slots + 1x PCIe 5.0 x8 slot, OR 5x PCIe 5.0 x16 slots + 1x internal PCIe 5.0 x16 slot |
| Cooling | Air cooling |
| Fans | 15x 8086 hot-swap fans (10 internal, 5 external) |
| Management | DC-SCM module with AST2600 |
| Power Supply | 3200W dual-input 265mm PSU; 3000W, 2700W, 2200W single-input 265mm PSUs<br>Supports 2+2 / 3+1 / 4+4 / 4+1 redundancy<br>Supports N+N, N+1, and non-redundant PSU configurations (switchable via command line); default setting is N+N |
| Dimensions | 482mm (W) x 263mm (H) x 880mm (D) |