Intel Publishes Whitepaper On New BFloat16 Floating-Point Format For Future CPUs

Written by Michael Larabel in Intel on 14 November 2018 at 05:00 PM EST.
Intel has published its initial whitepaper on BF16/BFloat16, a new floating-point format to be supported by future Intel processors.

The BFloat16 format ("BF16") is intended for Intel's Deep Learning Boost to accelerate deep learning workloads. BF16 can be faster than FP16 for deep learning and associated workloads since denormals don't need to be supported, hardware exception handling isn't required, and so on. BF16 will be implemented in hardware and can be used with FMA units.
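For readers unfamiliar with the format: BF16 keeps FP32's 1 sign bit and 8 exponent bits but trims the mantissa from 23 bits to 7, so it covers the same numeric range as FP32 at reduced precision. A minimal sketch of the idea in Python, converting FP32 to BF16 by simply truncating to the top 16 bits (the whitepaper defines the actual hardware rounding behavior; plain truncation here is an illustrative assumption):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to BF16 by keeping the top 16 bits
    (1 sign + 8 exponent + 7 mantissa bits). No rounding -- an
    illustrative simplification, not Intel's exact conversion."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_fp32(b: int) -> float:
    """Re-expand BF16 bits to FP32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# BF16 preserves FP32's exponent range, so 1.0 round-trips exactly,
# while fine mantissa detail is dropped (3.14159 becomes 3.140625).
print(bf16_bits_to_fp32(fp32_to_bf16_bits(1.0)))      # 1.0
print(bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159)))  # 3.140625
```

This range-preserving truncation is why BF16 is attractive for deep learning: FP32 values can be down-converted without the overflow/underflow concerns FP16's 5-bit exponent introduces.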

Other early details on Intel's BF16 support can be found in the whitepaper. For Xeon CPUs, this new floating-point format is expected to arrive with Cooper Lake, the generation after next year's Cascade Lake.

Intel has previously indicated BFloat16 will be supported by its Nervana processors and FPGAs; the format is also used by other deep-learning-focused hardware such as Google's TPUs.