Re-Testing NVIDIA's Threaded OpenGL Optimizations For Linux Gaming

Written by Michael Larabel in Display Drivers on 22 June 2014 at 09:30 AM EDT. Page 1 of 5.

Back in 2012, with the NVIDIA 310 Linux driver series, a threaded OpenGL optimization was added to the proprietary graphics driver. When that driver premiered, we tested NVIDIA's Linux threaded OpenGL optimizations with mixed results. We're now re-testing the OpenGL threaded optimizations to see whether they make more of a difference with modern Linux games and OpenGL workloads on the latest 337.25 Linux driver.

NVIDIA's OpenGL threaded optimization feature offloads CPU-side OpenGL work to a separate processor thread. It is designed to benefit CPU-bound workloads, but it can also hurt performance depending upon a game/application's particular pattern of OpenGL calls. As a result, the feature has remained disabled by default in the two years it's been available. For more information on the threaded optimization feature and how to enable it, see the earlier article.
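In practice, enabling the feature comes down to setting the `__GL_THREADED_OPTIMIZATIONS` environment variable before launching an OpenGL application; a minimal sketch (the `./game` binary name is just a placeholder):

```shell
# Enable NVIDIA's threaded OpenGL optimizations for a single
# process only ("./game" is an illustrative placeholder binary):
#   __GL_THREADED_OPTIMIZATIONS=1 ./game

# Or export it for the whole shell session so every OpenGL
# program launched afterwards picks it up:
export __GL_THREADED_OPTIMIZATIONS=1

# Verify the variable is set in the environment:
env | grep __GL_THREADED_OPTIMIZATIONS
```

Because it is a per-process environment variable, the same system can trivially be benchmarked with and without the optimization simply by toggling the variable between runs.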

For this article, an NVIDIA GeForce GTX 760 graphics card was tested first in its stock driver configuration and then with __GL_THREADED_OPTIMIZATIONS enabled, re-running the same set of Linux-native OpenGL tests via the Phoronix Test Suite. The tests include Valve Source Engine games and much more. The GTX 760 was running on an Ubuntu 14.04 64-bit system with the NVIDIA 337.25 binary driver.

NVIDIA Linux OpenGL Threaded Optimizations

Besides recording the actual benchmark results, the Phoronix Test Suite also monitored the reported CPU usage and GPU utilization via the driver's exposed interfaces.
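For readers wanting to reproduce this kind of monitoring, the Phoronix Test Suite's system-monitor module is enabled through the `MONITOR` environment variable; a sketch under the assumption that `phoronix-test-suite` is installed (the test-profile name is deliberately left as a placeholder):

```shell
# Ask the Phoronix Test Suite's system-monitor module to record
# CPU usage and GPU utilization alongside the benchmark results:
export MONITOR=cpu.usage,gpu.usage

# Then run a benchmark as usual (placeholder profile name;
# requires phoronix-test-suite to be installed):
#   phoronix-test-suite benchmark <test-profile>
```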

