Industry-Leading AI Technology on Cutting-Edge Platforms
Aivres and NVIDIA are working together to develop solutions that help businesses harness the power of AI. Through integration and innovation with NVIDIA’s leading-edge AI technology, Aivres server platforms deliver breakthrough performance, efficiency, and flexibility to tackle the most challenging AI workloads in the industry today: large language models, machine learning, generative AI, and beyond.
Broad portfolio of optimized servers for inference, training, LLMs, generative AI, and digital twins
Up to 8 of the latest NVIDIA GPUs in 6U with advanced interconnect for breakthrough speed
Best-in-class results among competitors in industry-standard AI training and inference benchmarks
For Extra-Large Language Models, Simulation, and Data Analytics
Aivres platforms integrate advanced NVIDIA AI architectures, including GB200 NVL72 and HGX H200, to deliver extreme supercomputing performance for trillion-parameter language models and other compute-intensive AI workloads.
GB200 NVL72 Rack-Scale Solution
Exascale Rack for Trillion-Parameter LLM Inference
- Blackwell rack-scale architecture connects 72 Blackwell GPUs via NVIDIA® NVLink®
- 130 TB/s of low-latency communication bandwidth
- Acts as a single massive GPU for efficient processing
- 4X faster training for large language models using FP8 precision
- Up to 800 GB/s decompression throughput
- Achieves 18X faster performance on database query benchmarks than traditional CPUs
- 8 TB/s high memory bandwidth
- Grace CPU NVLink-C2C interconnect ensures high-speed data transfer
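The 130 TB/s aggregate figure above can be sanity-checked with a quick back-of-envelope calculation. This sketch assumes a per-GPU NVLink bandwidth of 1.8 TB/s for Blackwell (a figure from NVIDIA's public Blackwell specifications, not stated in this section):

```python
# Back-of-envelope check of the GB200 NVL72 aggregate NVLink bandwidth.
# Assumption: each Blackwell GPU provides 1.8 TB/s of fifth-generation
# NVLink bandwidth (per NVIDIA's published Blackwell specs).
GPUS = 72
NVLINK_BW_PER_GPU_TBS = 1.8  # TB/s per GPU (assumed)

aggregate_tbs = GPUS * NVLINK_BW_PER_GPU_TBS
print(f"Aggregate NVLink bandwidth: {aggregate_tbs:.1f} TB/s")
# 72 x 1.8 = 129.6 TB/s, which rounds to the marketed 130 TB/s
```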

NVIDIA HGX H200 8-GPU Server
6U Modular AI Pods for Extra-Large-Scale AI Model Training
- Supports NVIDIA HGX H200 8-GPU
- 6U modular form factor for data center deployment
- Unified GPU modules for heterogeneous computing; deploy and scale as needed
- Lossless scalability with PCIe 5.0 slots for NDR 400Gb/s InfiniBand
- Massive local storage: 300 TB via 24x 2.5-inch SSDs, up to 16x NVMe
- Delivers 32 PFLOPS industry-leading AI performance
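The 32 PFLOPS figure is consistent with eight GPUs delivering roughly 4 PFLOPS each. A minimal sketch, assuming the commonly cited ~3.96 PFLOPS FP8 throughput (with sparsity) per H200 from NVIDIA's datasheet, which is not stated in this section:

```python
# Rough check of the 32 PFLOPS claim for an 8x H200 system.
# Assumption: ~3.96 PFLOPS FP8 throughput (with sparsity) per H200 GPU,
# per NVIDIA's published H200 datasheet figures.
GPUS = 8
FP8_PFLOPS_PER_GPU = 3.96  # assumed per-GPU figure

total = GPUS * FP8_PFLOPS_PER_GPU
print(f"Approximate FP8 throughput: {total:.1f} PFLOPS")
# 8 x 3.96 = ~31.7 PFLOPS, marketed as 32 PFLOPS
```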

NVIDIA H200 NVL Tensor Core Server
6U PCIe AI Server for Large-Scale Parallel Processing
- Supports 8x NVIDIA H200 NVL GPUs via PCIe 5.0
- 6U modular form factor for data center deployment
- Unified GPU modules for heterogeneous computing; deploy and scale as needed
- Up to 24x 2.5-inch SSDs, 16x E3.S

NVIDIA HGX Blackwell 8-GPU Server
10U High-Performance AI Training Server
- Supports NVIDIA HGX B200A 8-GPU
- Modular design for easy deployment and maintenance
- Supports up to 12x full-height expansion cards
- Supports cold-plate liquid cooling on 2x NVSwitch, 2x CPU

For Generative AI, Visualization, and Advanced Graphics
Aivres enterprise AI platforms allow users to unlock the next level of multi-workload performance, combining powerful AI computing with best-in-class graphics and media acceleration for next-generation data center workloads, from generative AI and large language model (LLM) inference and training to 3D graphics, rendering, and video.
4U PCIe 5.0 8-GPU Server
Multi-Configuration System for Generative AI Training and Inference
- Supports NVIDIA L40S, H100 GPUs
- 8x FHFL double-width GPUs in 4U
- Supports an additional 2x double-width PCIe GPUs and 1x FL single-width PCIe GPU
- PCIe 5.0 architecture supporting E3.S
- CXL 1.1 support for storage-class memory expansion
- Flexible topologies and configurations to support various applications

2U PCIe 5.0 4-GPU Server
Flexible Mainstream Platform for Enterprise AI
- Supports NVIDIA L40S, H100 GPUs
- Supports 4 double-width or 8 single-width PCIe GPUs
- CXL 1.1 support for storage-class memory expansion
- Versatile 2.5”, 3.5”, and E3.S all-flash storage options
- Cold-plate liquid cooling for enhanced energy efficiency
