Google Working On NVIDIA CUDA Support Via LLVM?
For whatever reason, Google developers are working on CUDA improvements within the LLVM/Clang compiler.
Over the years I've heard various statements about Google's engagement in GPGPU computing, ranging from interest in CUDA to OpenCL. The latest sign of work is a set of CUDA improvements to LLVM's Clang compiler, courtesy of a Google engineer.
While the intentions aren't clear, some NVIDIA CUDA (Compute Unified Device Architecture) improvements landed in mainline LLVM/Clang today via Google's Artem Belevich. Among the CUDA contributions from this Google engineer is support for CUDA built-in variables, along with other work. There have also been other interesting signs of activity in the past. Regardless of business interests, it's great to see more open-source GPGPU work happening.
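For readers unfamiliar with what "CUDA built-in variables" means, here is a minimal illustrative kernel (my own example, not code from the patches) that uses the built-ins `threadIdx`, `blockIdx`, and `blockDim`, which a CUDA-capable Clang frontend has to recognize:

```cuda
// Hypothetical example: a trivial CUDA kernel exercising the
// built-in variables that Clang's CUDA support must understand.
__global__ void scale(float *data, float factor, int n) {
    // Each thread derives its global index from the built-ins.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}
```

With present-day Clang, a file like this can typically be compiled with something along the lines of `clang++ --cuda-gpu-arch=sm_35 kernel.cu -lcudart` (exact flags depend on your Clang version and CUDA installation).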
Is there some announcement I missed about Google's usage of or interest in CUDA? If there's anything interesting I overlooked, please point it out via the forums or via Facebook or Twitter.