Aivres Blog

Aivres Addresses Varied Needs for Rack-Scale Liquid Cooling in Tomorrow’s AI Data Centers

Multiple interrelated trends are compounding the challenges of constructing artificial intelligence (AI) data centers that global industry will require in the coming years:

  • Popularity of emergent AI-generated content (AIGC) business models
  • Tremendous business needs of large models
  • Sudden increase in information technology (IT) density
  • Increase in energy consumption

Because AI advancements demand ever more, and ever higher-performance, hardware, computing power and power consumption per unit of data center space are rising. Nor will these trends level off any time soon: AI data centers will only grow bigger, denser and more numerous for the foreseeable future.

With the cooling needs of AI data centers rapidly evolving as well, liquid cooling alone delivers the lasting improvements in energy efficiency that tomorrow’s AI data centers desperately need.

The swifter heat dissipation offered by liquid cooling ensures optimal operating temperatures for servers and components, enhancing performance and prolonging hardware lifespan. Liquid cooling requires as little as one-fifth to one-tenth the space of traditional air-cooling methods, allowing for more efficient space utilization and higher server density within AI data center environments. Furthermore, in a world increasingly conscious of environmental impact, liquid cooling offers significantly lower power consumption and a smaller carbon footprint than air cooling, for optimal sustainability.

Liquid-cooled Rack-scale Aivres KRS8000 Based on NVIDIA GB200 NVL72

Aivres liquid-cooled solutions are already proven to boost the effectiveness and sustainability of AI data centers. The Aivres KRS8000, based on the NVIDIA GB200 NVL72 platform, for example, leverages the large NVIDIA NVLink™ domain architecture and liquid cooling to create a single, massive 72-GPU rack that overcomes communication bottlenecks.

The NVIDIA GB200 NVL72 connects 36 NVIDIA Grace™ CPUs and 72 NVIDIA Blackwell GPUs. The GB200 Grace Blackwell Superchip is also a key component of the Aivres KRS8000 solution, connecting two high-performance NVIDIA Blackwell GPUs and an NVIDIA Grace CPU with the NVIDIA NVLink-C2C interconnect. The NVIDIA Blackwell architecture introduces groundbreaking advancements specifically crucial to the ongoing rollout of generative AI and accelerated computing. The incorporation of the second-generation Transformer Engine, alongside the faster and wider NVIDIA NVLink interconnect, propels the data center into a new era, with orders of magnitude more performance compared to the previous architecture generation. To that point, GB200 NVL72 delivers 30 times faster real-time trillion-parameter large language model (LLM) inference.
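As a quick consistency check, the figures above line up: 72 Blackwell GPUs at two per GB200 Superchip imply 36 superchips, each contributing one Grace CPU. A minimal illustrative sketch (not vendor code):

```python
# Sanity-check the GB200 NVL72 rack composition quoted in the text.
# Each GB200 Grace Blackwell Superchip pairs 2 Blackwell GPUs with
# 1 Grace CPU over NVLink-C2C.
GPUS_PER_RACK = 72
GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1

superchips = GPUS_PER_RACK // GPUS_PER_SUPERCHIP  # 36 superchips per rack
cpus = superchips * CPUS_PER_SUPERCHIP            # 36 Grace CPUs per rack

print(superchips, cpus)  # 36 36
```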

The liquid-cooled Aivres KRS8000 rack based on the NVIDIA GB200 NVL72 simultaneously increases compute density, reduces the amount of floor space used in AI data centers and facilitates high-bandwidth, low-latency GPU communication. At the same time, the rack is optimized for energy efficiency; compared to NVIDIA H100 air-cooled infrastructure, GB200 NVL72 delivers 25 times the performance at the same power, while reducing water consumption.

Additional Aivres Solutions for Rack-Scale Liquid Cooling

Aivres supports several other configurations of liquid-cooled AI racks based on high-performance AI data center building blocks, including the 6U KR6288 NVIDIA HGX™ H200 8-GPU server and the 5U KR5288 NVIDIA HGX™ B200 NVL8 server.

The KR5288 rack comprises eight HGX B200 servers, for a total of 64 of the industry’s most advanced GPUs. To achieve high-efficiency liquid cooling throughout the rack, each KR5288 server enables direct-to-chip liquid cooling for its two CPU modules and four GPU modules. The manifold for coolant supply and return is mounted vertically at the rear of the rack and features 24 outlets for connecting 48 cooling hoses that carry coolant to and from the direct-liquid-cooled servers. Each server has six quick disconnects: two for the CPU modules and four for the GPU modules.
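The plumbing counts above can be tallied directly from the per-server figures. A short illustrative sketch (quantities from the text; the variable names are ours, not an Aivres spec):

```python
# Tally the KR5288 rack-level cooling connections from per-server figures.
SERVERS_PER_RACK = 8
GPUS_PER_SERVER = 8
QUICK_DISCONNECTS_PER_SERVER = 6  # 2 for CPU modules, 4 for GPU modules

gpus = SERVERS_PER_RACK * GPUS_PER_SERVER                  # 64 GPUs per rack
hoses = SERVERS_PER_RACK * QUICK_DISCONNECTS_PER_SERVER    # 48 cooling hoses
manifold_outlets = hoses // 2  # 24 outlets, each pairing supply and return

print(gpus, hoses, manifold_outlets)  # 64 48 24
```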

In addition, Aivres offers a wide range of coolant distribution units (CDUs) to address varied needs and specifications of different AI data centers.

For the highest cooling capacity, there is the liquid-to-liquid, megawatt-class, in-row CDU, offering 1.3 megawatts of cooling capacity, an approach temperature of 45 degrees Celsius (with a maximum allowable temperature of 60 degrees) and a flow rate of at least 1,200 liters per minute. The benefits are far-reaching:

  • High efficiency—Its Class 1 energy-efficient circulation pump achieves an overall efficiency ratio close to 100 percent.
  • Safety—Both the primary and secondary sides of the CDU are equipped with filters, and solution monitoring can be optionally configured to prioritize safety.
  • Compatibility—The Aivres CDU leverages a bypass system to adjust liquid supply within the full load range. This renders the solution fully adaptable to fluctuations in load quantity.
  • Intelligence—It supports both secure communication and remote parameter configuration, and more comprehensive intelligent support can be added.

For AI data centers needing a smaller footprint, simpler construction and no external chiller (cooling tower), Aivres also offers a liquid-to-air, standalone, in-row CDU. This solution offers cooling capacity of 100 to 240 kilowatts, an approach temperature of 15 degrees Celsius, integrated sensors (flow, pressure, temperature, humidity, coolant level and leak detection), and field-replaceable fans, pumps, piping and sensors.

Aivres Rack-Scale, Liquid-Cooled AI Solutions at GTC

At the NVIDIA GTC 2025 developer conference, scheduled for March 17–21 in San Jose, California, Aivres is set to showcase three rack-scale, liquid-cooled solutions:

  • KRS8000 based on NVIDIA GB200 NVL72
  • Rack based on KR6288 NVIDIA HGX™ H200 8-GPU server
  • Rack based on KR5288 NVIDIA HGX™ B200 NVL8 server

Development trends around AI data centers all point to growing deployment of liquid cooling. Analysis released in August 2024 by Research and Markets projects a compound annual growth rate (CAGR) of 27.6 percent from 2024 to 2030 for the global market for liquid cooling in data centers. “This growth is attributed to a variety of influencing factors,” reports Research and Markets. “Rapid digital transformation across various industries is creating a need for enhanced data processing and storage solutions. The growing adoption of cloud computing, big data analytics, and artificial intelligence requires infrastructure capable of managing increased heat output efficiently.”

The range of Aivres liquid-cooled solutions improves effectiveness and sustainability, offering developers of AI data centers a dependable foundation for growth in a space characterized by rapid change and new challenges.
