Intel Opens Up nGraph Source Code For DNN Model Compiler
Intel tonight announced it is open-sourcing nGraph, its framework-neutral compiler for deep neural network models.
Intel claims that with nGraph on Xeon Scalable hardware, researchers can obtain up to 10x performance improvements over previous TensorFlow integrations, as one example. Besides TensorFlow, nGraph also supports PyTorch, MXNet, Neon, Caffe2, and CNTK, with support for additional frameworks planned.
Intel says nGraph allows training across CPU, NNP, and GPU hardware without requiring the creation of new libraries for each framework/backend combination.
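To illustrate the idea behind a framework-neutral compiler, here is a minimal conceptual sketch (this is NOT the real nGraph API, just an assumption-laden toy): frontends lower models into a common graph of operations, and each backend supplies its own kernel table, so adding a new backend does not require new per-framework libraries.

```python
# Conceptual sketch only -- not the actual nGraph API.
# A tiny framework-neutral intermediate representation: frontends emit
# Node graphs; backends are interchangeable tables of op kernels.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op
        self.inputs = list(inputs)
        self.value = value  # only used by constant nodes

def const(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", [a, b])

def mul(a, b):
    return Node("mul", [a, b])

# A "backend" here is just a dict mapping op names to kernels.
# A GPU or NNP backend would supply different kernels for the same ops.
CPU_BACKEND = {
    "const": lambda node, args: node.value,
    "add":   lambda node, args: args[0] + args[1],
    "mul":   lambda node, args: args[0] * args[1],
}

def evaluate(node, backend):
    """Post-order walk: evaluate inputs, then run this op's kernel."""
    args = [evaluate(i, backend) for i in node.inputs]
    return backend[node.op](node, args)

# Build 2*3 + 4 once; run it on whichever backend is available.
graph = add(mul(const(2.0), const(3.0)), const(4.0))
print(evaluate(graph, CPU_BACKEND))  # -> 10.0
```

The real nGraph does far more (graph optimization, kernel fusion, hardware-specific code generation), but the separation shown here — one graph representation, pluggable backends — is the core of the pitch.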
I don't have any experience with nGraph yet, but I will check it out to see whether it can be spun into any interesting benchmark test cases. Those wanting to learn more about the now open-source nGraph can visit ai.intel.com. The code is available under the Apache 2.0 license.