BFloat16 Support About To Land Within LLVM

Written by Michael Larabel in LLVM on 13 May 2020 at 08:07 AM EDT.
The LLVM compiler stack is about to merge support for the BFloat16 floating-point format, including a BFloat16 C language type.

BFloat16 is a 16-bit floating-point format designed for machine learning workloads: it keeps the 8-bit exponent range of a 32-bit float while truncating the mantissa to 7 bits, halving storage requirements and allowing greater throughput.
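
As a rough illustration of the format, a 32-bit float can be narrowed to BFloat16 simply by keeping its top 16 bits. Below is a minimal C sketch of that idea; the helper names are hypothetical and this is not the code LLVM itself uses, just a demonstration of the bit layout with round-to-nearest-even (NaN inputs are not special-cased here).

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical helper: narrow an IEEE-754 float to a BFloat16 bit pattern,
 * keeping the sign, the full 8-bit exponent and the top 7 mantissa bits,
 * rounding the discarded low 16 bits to nearest even. */
static uint16_t float_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding_bias = 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)((bits + rounding_bias) >> 16);
}

/* Widen a BFloat16 bit pattern back to float by zero-filling the low bits. */
static float bf16_to_float(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159f;
    uint16_t h = float_to_bf16(x);
    printf("%f -> 0x%04x -> %f\n", x, (unsigned)h, bf16_to_float(h));
    return 0;
}
```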

Arm has been pushing the BFloat16 support along for LLVM given that ARMv8.6-A supports the new format, but this work is ultimately also relevant for Intel AVX-512 BF16, Intel Nervana, Google Cloud TPUs, and other hardware coming out with BF16 support to bolster their machine learning capabilities.

BFloat support for the LLVM IR is under review and nearing the merging point, along with the BFloat16 C type, IR intrinsics support, and the initial AArch32/AArch64 target bits.
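
For a sense of what the C-level support looks like, the Arm C Language Extensions define a storage-only `__bf16` type, which is what the Clang side of this work implements. The snippet below is a minimal sketch assuming a Clang build with these pending patches and a target providing BF16 support; it only exercises storage, since arithmetic on `__bf16` goes through conversions or intrinsics rather than ordinary C operators.

```c
#include <stdio.h>

/* Storage-only BF16 data: half the footprint of a float array. */
__bf16 weights[4];

int main(void)
{
    printf("sizeof(__bf16)  = %zu\n", sizeof(__bf16));  /* expected: 2 */
    printf("sizeof(weights) = %zu\n", sizeof weights);  /* expected: 8 */
    return 0;
}
```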

More details on this brain floating-point support for LLVM can be found via this mailing list recap of the currently pending patches on the Arm front.