Press Release
Tuesday, October 14, 2025

Aivres at OCP Global Summit 2025 to Showcase “Total Infrastructure Solutions to Power the AI Era”

Aivres Integrates NVIDIA Technology to Drive Innovations Supporting Every Scale of AI Workload—from Enterprise Applications to AI Factories

SAN JOSE, California—October 14, 2025—Aivres, a leading global data center and cloud computing solutions provider, announced that it will showcase “Total Infrastructure Solutions to Power the AI Era” at OCP Global Summit 2025. Aivres will demonstrate a range of its liquid-cooled data center solutions and accelerated AI platforms at Booth A60. The company is an Emerald Sponsor of OCP Global Summit 2025, which continues through October 16 at the San Jose Convention Center.

With a sustained focus on delivering total AI infrastructure solutions, Aivres continues its collaboration with NVIDIA to drive innovation and deliver transformative solutions that integrate industry-leading AI technology on cutting-edge platforms, supporting agentic AI, physical AI, and HPC workloads at every scale, from enterprise applications to AI factories.

At OCP Global Summit 2025, Aivres will demonstrate its end-to-end design, integration, and deployment capabilities for full-scale, liquid-cooled data center architecture that supports the growing needs of AI factories. The showcase includes two complete, liquid-cooled rack solutions:

KRS8500V3 Liquid-Cooled Exascale AI Rack Solution based on NVIDIA GB300 NVL72—The Aivres KRS8500V3 is a next-generation L11 rack-scale platform that integrates 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs in a fully liquid-cooled architecture. Compared to NVIDIA GB200 NVL72, the KRS8500V3 achieves up to 1.5x faster training and approximately 1.4x higher inference throughput, establishing new MLPerf benchmark records and redefining the standard for hyperscale AI computing. Designed for trillion-parameter large language model (LLM) training and inference, the KRS8500V3 delivers unprecedented performance density, scalability, and energy efficiency, laying the foundation for large-scale AI, high-performance computing (HPC), and data analytics in data centers worldwide.

KR5288 Liquid-Cooled 5U AI Server with NVIDIA Blackwell Ultra—The Aivres KR5288 is a high-performance heterogeneous accelerated server with a fully modular design, supporting an 8-GPU NVLink module based on the latest NVIDIA Blackwell Ultra platform. It packs the NVIDIA HGX B300 8-GPU platform into five rack units (5U) for extreme performance density and provides over 80% server liquid-cooling coverage with direct-to-chip cold-plate cooling for the CPUs, GPUs, and NVLink Switch. An AI rack can scale up to eight KR5288 systems with 64 Blackwell Ultra GPUs, delivering the highest GPU density with efficient liquid cooling.

At OCP Global Summit 2025, Aivres will also showcase next-generation, use-case-focused AI servers featuring NVIDIA’s latest accelerated computing platforms:

KR9288 10U AI Training Server with NVIDIA Blackwell Ultra—The Aivres KR9288 features an NVIDIA HGX B300 8-GPU NVLink module based on the latest NVIDIA Blackwell Ultra and delivers flexible I/O scalability and versatile performance with up to 12 full-height PCIe expansion slots (including 10 DPUs) and up to 10 hard drives. With 20 fans for high-powered air cooling and a modular 10U chassis design, the system combines energy-efficient high performance with ease of installation, operation, and maintenance, offering a scalable AI rack solution for traditional air-cooled data centers.

KR6268 6U NVIDIA RTX PRO Server supporting NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs—The Aivres KR6268 server delivers efficient performance and high-bandwidth memory for advanced graphics and visual computing, agentic AI, physical AI, enterprise applications, and scientific computing. The system supports eight NVIDIA PCIe GPUs, including the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, providing enterprises with cost- and energy-efficient universal acceleration for a wide range of applications.

Aivres offers AI platforms for every scenario, including inference, training, LLMs, generative AI, digital twins, and real-time graphics. Enabling workloads at any scale, the broad product portfolio gives data centers everything from flexible AI building blocks to rack-scale supercomputing clusters, supporting enterprise AI through the largest training models. The industry-leading performance of Aivres solutions makes them best-in-class for training and inference benchmarks.