NVIDIA Doubles TX1 Deep Learning Speed & Efficiency With Software Upgrade
For those making use of the exciting Jetson TX1 platform, NVIDIA reports it has managed to "make it twice as fast and efficient" with the latest upgrade to its JetPack developer tools.
JetPack 2.3 was announced on Monday. This bundle of software tools and libraries now includes TensorRT, a deep-learning inference engine; cuDNN 5.1, the newest release of NVIDIA's CUDA-accelerated deep neural network library; new multimedia APIs; and CUDA 8, with host compiler support updated to GCC 5.
With the availability of JetPack 2.3, results published by NVIDIA on the Jetson TX1 developer board show deep-learning energy efficiency to be much greater with this newest software release, at least for GoogLeNet inference with batch sizes greater than one.
I'll try to find the time to run some fresh JetPack 2.3 benchmarks with my deep-learning tests on the Tegra X1 developer board shortly. Those interested in more details can see this NVIDIA blog post.