NVIDIA HGX-2 HPC/AI Server Platform Offers 16 x V100 GPUs, 2 PFLOPS of Tensor Cores
The HGX-2 is an impressive beast, but will cost an incredible amount too.
NVIDIA has introduced the HGX-2 as its newest and most advanced cloud server platform, intended for HPC and AI workloads. The HGX-2 packs an impressive sixteen V100 (Volta) GPUs connected via NVLink, yielding 2 PetaFLOPS of tensor-core compute, 512GB of combined GPU memory, and 2,400GB/s of bisection bandwidth. This more than doubles the performance of the previous HGX-1 server platform.
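The aggregate figures follow directly from the per-GPU specs. A quick back-of-the-envelope check, assuming each V100 delivers roughly 125 TFLOPS of peak tensor-core throughput and is the 32GB HBM2 variant:

```python
# Sanity-check the HGX-2 aggregate numbers from per-GPU V100 specs.
# Assumptions: ~125 TFLOPS tensor-core peak and 32 GB HBM2 per V100.
num_gpus = 16
tensor_tflops_per_gpu = 125   # V100 tensor-core peak, in TFLOPS
memory_gb_per_gpu = 32        # 32 GB HBM2 variant of the V100

total_pflops = num_gpus * tensor_tflops_per_gpu / 1000
total_memory_gb = num_gpus * memory_gb_per_gpu

print(total_pflops)      # 2.0 (PetaFLOPS)
print(total_memory_gb)   # 512 (GB)
```

Sixteen GPUs at 125 TFLOPS each lands exactly on the quoted 2 PFLOPS, and sixteen 32GB cards account for the 512GB of GPU memory.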
NVIDIA claims that the HGX-2 can replace up to 300 CPU-only servers for some machine/deep learning scenarios. Those wanting to learn more about the NVIDIA HGX-2 can do so at NVIDIA.com.