Blender 2.81 Benchmarks On 19 NVIDIA Graphics Cards - RTX OptiX Rendering Performance Is Incredible

Written by Michael Larabel in Software on 26 November 2019 at 07:46 AM EST.

Last week marked the release of Blender 2.81, with one of the shiny new features being the OptiX back-end for the Cycles engine to provide hardware-accelerated ray-tracing on NVIDIA RTX graphics processors. Long story short, OptiX is much faster for Blender than using NVIDIA's CUDA back-end -- which was already much faster than the OpenCL support within Blender. For your viewing pleasure today are benchmarks of 19 different graphics cards, looking at the CUDA performance from Maxwell through Pascal to Turing, plus the OptiX performance for the RTX graphics cards.

OptiX is only supported with the NVIDIA RTX graphics cards, but it offers a significant boost to rendering performance. This NVIDIA-designed API for exploiting the RT cores introduced with Turing yields an impressive speed-up in Blender render times for common benchmark scenes. More background information on OptiX with Blender 2.81 can be found via this Blender.org blog post from the summer.
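For those who want to flip the back-end programmatically rather than through the Preferences UI, here is a minimal sketch using Blender's Python API (not the exact setup used for these benchmarks) that switches Cycles over to OptiX:

```python
import bpy

# Point the Cycles add-on at the OptiX back-end ("CUDA" works on non-RTX cards).
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"
prefs.get_devices()  # refresh the device list after changing the back-end

# Enable only the detected OptiX devices.
for dev in prefs.devices:
    dev.use = (dev.type == "OPTIX")

# Tell the scene to render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
```

This needs to run inside Blender, e.g. via blender --background --python enable_optix.py, where enable_optix.py is whatever you name the script.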

Based upon the NVIDIA graphics cards I had available (thus all GeForce), the cards tested for this comparison were:

- GTX 970
- GTX 980
- GTX TITAN X
- GTX 1060
- GTX 1070
- GTX 1080
- GTX 1080 Ti
- GTX 1650
- GTX 1660
- GTX 1660 SUPER
- GTX 1660 Ti
- RTX 2060
- RTX 2060 SUPER
- RTX 2070
- RTX 2070 SUPER
- RTX 2080
- RTX 2080 SUPER
- RTX 2080 Ti
- TITAN RTX

Benchmarks were done using the CUDA back-end and then the OptiX back-end where supported. OpenCL wasn't tested since, as already shown, the CUDA back-end is much faster than Blender's OpenCL support.
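For reference, here is a rough sketch of how such a CUDA vs. OptiX comparison can be scripted against a headless Blender -- the scene path is illustrative, and the timing is just wall-clock around the whole invocation rather than Blender's own reported render time:

```python
import subprocess
import time

SCENE = "bmw27.blend"  # illustrative path to a local benchmark scene

# Render one frame per back-end; --cycles-device after the "--" separator
# tells the Cycles add-on which device type to use.
for device in ("CUDA", "OPTIX"):
    start = time.time()
    subprocess.run(
        ["blender", "--background", SCENE, "--engine", "CYCLES",
         "--render-frame", "1", "--", "--cycles-device", device],
        check=True,
    )
    print(f"{device}: {time.time() - start:.1f} seconds")
```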

Blender 2.81 CUDA vs. OptiX Performance

Tests were done on Ubuntu 19.10 with the NVIDIA 440.31 driver stack. All benchmarks were, of course, carried out using the Phoronix Test Suite.
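Anyone wanting to run similar numbers on their own hardware can do so with something along these lines; the pts/blender profile name is my assumption for the relevant test profile:

```python
import subprocess

# Kick off the Blender test profile via the Phoronix Test Suite;
# the profile identifier (pts/blender) is an assumption.
subprocess.run(["phoronix-test-suite", "benchmark", "pts/blender"], check=True)
```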

