NVIDIA May Be Trying To Prevent GeForce GPUs From Being Used In Data Centers

Written by Michael Larabel in NVIDIA on 25 December 2017 at 10:49 AM EST.
Making the rounds on the Internet this holiday weekend is an updated NVIDIA GeForce software license agreement that prohibits using the GeForce drivers for consumer GPUs in data-center deployments.

The driver download license agreement for the GeForce drivers states, "No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted."

This applies just to the GeForce drivers. For data centers, NVIDIA clearly would prefer selling its higher-priced Quadro and Tesla accelerators for render farms, deep-learning data centers, CUDA compute clusters, etc. But some customers have preferred buying up the lower-cost, consumer-grade GeForce graphics cards, which offer good performance at significant savings compared to the workstation/data-center products.

But this license agreement is overly vague about how NVIDIA defines a "datacenter deployment" and whether it would also cover university research clusters and the like. The only explicit exception is for crypto-currency/blockchain farms. NVIDIA hasn't provided any official comment yet given the holiday weekend.

It may also be hard for NVIDIA to enforce this license agreement should they choose to: depending upon how you go about downloading the NVIDIA (Linux / Windows / BSD) driver, you can potentially bypass this software license agreement altogether. Some download routes redirect you to an agreement where this data-center deployment clause does not appear. Presumably NVIDIA may simply use this clause to block large orders of GeForce GPUs known to be destined for data centers, as enforcement otherwise would be very difficult.


Some have pointed out that, yes, this is indeed a benefit of the AMD open-source Linux driver stack: there are no such clauses restricting how the Radeon consumer products may be used. However, the current Radeon Linux graphics stack isn't an immediate solution for many workstation/data-center customers. Many scientific workloads remain written against CUDA (and GPUOpen's HIP translation layer for CUDA code is still a work-in-progress; a rough sketch of what a HIP port looks like follows below), much of that code is currently unoptimized or less performant on AMD GPUs, and the ROCm-based stack is still working its way toward more uniform support across the Linux landscape. In 2018 we should reach a point where the ROCm/OpenCL components can be easily deployed across all current Linux distributions rather than just the select enterprise Linux platforms where they are officially supported, making it much easier to run AMD's open-source driver stack for compute purposes.
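For those unfamiliar with HIP, here is a minimal, hypothetical vector-add sketch of what a ported CUDA kernel tends to look like. This is not taken from NVIDIA's or AMD's documentation; it assumes a working ROCm/HIP installation and uses the hipLaunchKernelGGL launch macro in place of CUDA's <<<>>> launch syntax, with the cuda* runtime calls swapped for their hip* equivalents.

// vector_add_hip.cpp -- hypothetical HIP port of a trivial CUDA vector-add kernel.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same indexing as CUDA
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    hipMalloc((void **)&d_a, bytes);                 // cudaMalloc -> hipMalloc
    hipMalloc((void **)&d_b, bytes);
    hipMalloc((void **)&d_c, bytes);
    hipMemcpy(d_a, h_a, bytes, hipMemcpyHostToDevice);   // cudaMemcpy -> hipMemcpy
    hipMemcpy(d_b, h_b, bytes, hipMemcpyHostToDevice);

    // CUDA's kernel<<<blocks, threads>>>(...) becomes the portable launch macro.
    hipLaunchKernelGGL(vector_add, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d_a, d_b, d_c, n);

    hipMemcpy(h_c, d_c, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);                   // expect 3.0

    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}

Such a file would be built with hipcc, which targets AMD GPUs through ROCm while the same source can still be compiled for NVIDIA hardware via the CUDA path.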

Anyhow, we'll see if NVIDIA clarifies their documentation on data-center usage or makes any other comments in the days ahead.