Intel MKL-DNN/DNNL 1.2 Released With Performance Improvements For Deep Learning On CPUs

Written by Michael Larabel in Intel on 1 February 2020 at 02:37 AM EST.
Intel on Friday released version 1.2 of the Deep Neural Network Library (DNNL), formerly known as MKL-DNN. With this release come both new features and better performance.

On the performance front, Intel DNNL 1.2 brings better int8 inference on pre-AVX-512 hardware, boosts int8 inference for 3D spatial data on all CPUs, and adds int8 inference support on GPUs. DNNL 1.2 also delivers better performance for 1D backward convolutions.

Intel DNNL 1.2 also introduces a general-purpose matrix-matrix multiplication primitive along with a variety of other new primitives.
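For those curious what the new matmul primitive looks like from the developer side, below is a minimal sketch of multiplying two f32 matrices with the DNNL C++ API. The matrix sizes and the plain row-major memory format are illustrative assumptions, not taken from the release notes.

// Minimal sketch: one f32 matrix-matrix multiply (C = A * B) via the DNNL matmul primitive.
// Shapes (M, K, N) and the row-major "ab" format are illustrative assumptions.
#include <unordered_map>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    const memory::dim M = 128, K = 256, N = 512;

    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // Describe A (MxK), B (KxN) and C (MxN) as plain row-major f32 buffers.
    memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    memory a_mem(a_md, eng), b_mem(b_md, eng), c_mem(c_md, eng);

    // Create and execute the matmul primitive on the CPU engine.
    matmul::desc mm_d(a_md, b_md, c_md);
    matmul::primitive_desc mm_pd(mm_d, eng);
    matmul(mm_pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                                 {DNNL_ARG_WEIGHTS, b_mem},
                                 {DNNL_ARG_DST, c_mem}});
    strm.wait();
    return 0;
}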

Downloads and more details on Deep Neural Network Library 1.2 are available via GitHub. Fresh DNNL 1.2 benchmarks are coming up soon on Phoronix.