FFmpeg Now Supports GPU Inference With Intel's OpenVINO

Written by Michael Larabel in Multimedia on 20 September 2020 at 09:41 AM EDT.
Earlier this summer Intel engineers added an OpenVINO back-end to the FFmpeg multimedia framework. OpenVINO, Intel's toolkit for optimized neural network inference on Intel hardware, was added to FFmpeg for the same reason TensorFlow and other back-ends are supported: powering DNN-based video filters and other deep learning processing.

The support added back in July is opt-in via the --enable-libopenvino build switch and requires first building OpenVINO with its C API enabled. This Intel inference engine can run models from TensorFlow, Caffe, ONNX, MXNet, and other frameworks once they are converted into OpenVINO's format.
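As a rough sketch of the opt-in build, assuming OpenVINO has already been built with its C API enabled and installed under an illustrative prefix (the /opt/intel/openvino path and the exact include/library layout here are assumptions, not taken from the article):

```shell
# Hypothetical install prefix for an OpenVINO build with the C API enabled;
# adjust to match your actual installation.
OV_PREFIX=/opt/intel/openvino

# Configure FFmpeg with the opt-in OpenVINO back-end, pointing the build
# at the OpenVINO C API headers and libraries.
./configure --enable-libopenvino \
    --extra-cflags="-I${OV_PREFIX}/include" \
    --extra-ldflags="-L${OV_PREFIX}/lib"

make -j"$(nproc)"
```

If configure cannot find the C API, the OpenVINO build most likely did not enable it; the flag only toggles FFmpeg's side of the integration.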

What's new this past week is code landing in FFmpeg's OpenVINO DNN back-end to support inference on Intel GPUs.

Details on setting up FFmpeg with OpenVINO GPU inference can be found via this commit. For now, the default behavior of FFmpeg's OpenVINO support remains CPU-based inference.
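For a sense of what GPU inference looks like in practice, here is a hedged sketch using FFmpeg's dnn_processing filter with the OpenVINO back-end. The model file, tensor names, and input/output files are placeholders; the device selection syntax shown is an assumption based on the back-end's options mechanism, so consult the commit referenced above for the authoritative setup:

```shell
# Run a DNN-based video filter through the OpenVINO back-end, requesting
# the GPU device instead of the default CPU path. model.xml, the "data"
# input tensor, and the "prob" output tensor are illustrative names.
ffmpeg -i input.mp4 \
    -vf "dnn_processing=dnn_backend=openvino:model=model.xml:input=data:output=prob:options=device=GPU" \
    output.mp4
```

Dropping the device option (or setting it to CPU) should fall back to the existing CPU-based inference path.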