NVIDIA GeForce RTX 2060 Linux Performance From Gaming To TensorFlow & Compute

Written by Michael Larabel in Graphics Cards on 8 January 2019 at 09:30 AM EST. Page 7 of 9.

Next up is a plethora of GPU compute benchmarks for the various NVIDIA graphics cards. The Radeon cards weren't included here due to hitting some issues with TensorFlow on ROCm 2.0; more Radeon compute tests are being worked on for a future article.

Right out of the gate with ResNet-50 at FP16 precision, where Turing's tensor cores come into play, the RTX 2060 easily blasted past the GTX 1080. While in the gaming tests the RTX 2060 and GTX 1080 offered similar performance, under TensorFlow in this first benchmark the new $349 graphics card held a 25% advantage over the GTX 1080. The performance is more than double that of the GTX 1060. (Unfortunately, the GTX 960/970 with their limited vRAM had trouble running the TensorFlow benchmarks, so there is no Maxwell comparison here.)
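For readers wanting to see the FP16 effect on their own hardware, here is a minimal TensorFlow sketch, separate from the benchmarks used for these results, that times a large matrix multiply at FP32 versus FP16; on Turing the FP16 case can be dispatched to the tensor cores. The matrix size and iteration count below are arbitrary illustrations.

    # Minimal sketch (not the benchmark harness behind the numbers above):
    # time a large matrix multiply at FP32 versus FP16 with TensorFlow 2.x.
    # Matrix size and iteration count are arbitrary.
    import time
    import tensorflow as tf

    def time_matmul(dtype, size=4096, iters=50):
        a = tf.cast(tf.random.normal((size, size)), dtype)
        b = tf.cast(tf.random.normal((size, size)), dtype)
        _ = tf.matmul(a, b)  # warm-up so kernel selection isn't timed
        start = time.time()
        for _ in range(iters):
            c = tf.matmul(a, b)
        _ = c.numpy()  # block until the GPU work actually finishes
        return (time.time() - start) / iters

    print("FP32 avg seconds:", time_matmul(tf.float32))
    print("FP16 avg seconds:", time_matmul(tf.float16))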

On a performance-per-Watt basis, the RTX 2060 comes in at 26% better than the GTX 1080 Ti.
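For those wondering how the efficiency numbers are arrived at, performance-per-Watt is simply the benchmark's throughput divided by the average power draw recorded during the run, then compared card against card. A small hypothetical sketch of that arithmetic (no measured values are embedded):

    # Hypothetical helpers showing how a perf-per-Watt comparison such as
    # the 26% figure is derived; the inputs would be each card's measured
    # throughput (images/sec) and average power draw (Watts) from the
    # graphs, no actual results are hard-coded here.
    def perf_per_watt(images_per_sec: float, avg_watts: float) -> float:
        return images_per_sec / avg_watts

    def advantage_percent(card_a_ppw: float, card_b_ppw: float) -> float:
        # Percent by which card A's perf-per-Watt exceeds card B's.
        return (card_a_ppw / card_b_ppw - 1.0) * 100.0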

With AlexNet at FP16, the RTX 2060 comes in comfortably between the GTX 1080 and GTX 1080 Ti.

The RTX 2060's performance-per-Watt here is 13% better than that of the GTX 1080 Ti.

With AlexNet at FP32 precision, where the tensor cores are no longer utilized, the RTX 2060 comes in around the GTX 1080's performance level, with power efficiency on par with the GTX 1080 as well.
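For context on how the FP16 versus FP32 split maps to code, with modern TensorFlow the reduced-precision path that can engage the tensor cores is typically selected through a mixed-precision policy; here is a minimal sketch assuming the TensorFlow 2.x Keras API, which is not necessarily what this article's TensorFlow testing used:

    # Minimal sketch assuming the TensorFlow 2.x Keras mixed-precision API;
    # illustrative only, not the setup behind the numbers in this article.
    import tensorflow as tf
    from tensorflow.keras import mixed_precision

    # "mixed_float16" runs matmuls/convolutions in FP16 (tensor cores on
    # Turing) while keeping variables in FP32; "float32" is the plain path
    # reflected in the FP32 results.
    mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.applications.ResNet50(weights=None)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    print(mixed_precision.global_policy())  # prints the active policy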

