Intel's oneDNN 1.4 Brings More Performance Optimizations To This Deep Learning Library
Intel engineers have released a new version of oneDNN, the deep neural network library formerly known as DNNL and before that as MKL-DNN, geared toward high-performance deep learning applications. Living up to its performance-minded goals, oneDNN 1.4 brings more performance optimizations.
The oneDNN 1.4 release is the second since the library was brought under the umbrella of Intel's oneAPI toolkit.
The Friday release of oneDNN 1.4 has better performance for systems with SSE4.1 and AVX across a variety of operations, better performance of certain operations on all supported CPUs, and better performance of BFloat16 inner product for CPUs with Intel AVX-512 DL BOOST (VNNI).
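Which of those optimizations apply depends on the instruction set extensions a given CPU reports. As a minimal sketch (not part of the oneDNN API), the relevant feature flags can be checked on Linux by parsing `/proc/cpuinfo`; the flag names `sse4_1`, `avx`, and `avx512_vnni` are the standard Linux cpuinfo names for the ISA levels mentioned above.

```python
def parse_cpu_flags(cpuinfo_text: str) -> set:
    """Return the set of CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like: "flags\t\t: fpu sse sse2 avx ..."
            return set(line.split(":", 1)[1].split())
    return set()

def isa_summary(flags: set) -> dict:
    """Map the ISA levels discussed in the release notes to availability."""
    return {
        "SSE4.1": "sse4_1" in flags,
        "AVX": "avx" in flags,
        "AVX-512 VNNI": "avx512_vnni" in flags,
    }

# Example with a synthetic cpuinfo flags line:
sample = "flags\t\t: fpu sse sse2 sse4_1 avx avx2"
print(isa_summary(parse_cpu_flags(sample)))
```

On a real system, the same summary can be produced by passing the contents of `/proc/cpuinfo` to `parse_cpu_flags`; oneDNN itself performs this kind of dispatch internally at run time.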
With oneDNN also supporting GPU acceleration as part of the oneAPI stack, the new release additionally carries various performance optimizations for Intel graphics.
The oneDNN 1.4 release also has various new features, support for a CPU run-time threadpool, and other enhancements.
More details on the oneDNN 1.4 open-source deep learning library update can be found via GitHub.