NVIDIA CUDA 6 Makes GPGPU Programming Simpler

Written by Michael Larabel in NVIDIA on 14 November 2013 at 11:19 AM EST.
NVIDIA rolled out CUDA version 6 this morning, the latest major update to their Compute Unified Device Architecture for GPGPU / parallel programming. With CUDA 6, NVIDIA says it's now simpler to achieve better parallel programming on the GPU.

NVIDIA claims developers can accelerate their applications by up to eight times simply by replacing CPU-based libraries with the CUDA-based alternatives. CUDA 6 provides unified memory support for accessing CPU and GPU memory without explicit copies, new drop-in GPU replacements for BLAS and FFTW libraries, new multi-GPU scaling support, and various other changes.
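
To give an idea of what the unified memory feature means for code, here is a minimal sketch using the cudaMallocManaged API that CUDA 6 introduces; the kernel, array, and sizes are illustrative, not taken from NVIDIA's announcement.

#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel that scales an array in place on the GPU.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *data = nullptr;

    // A single managed allocation is visible to both the CPU and the GPU;
    // no explicit cudaMemcpy calls are needed to move the data around.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i)
        data[i] = 1.0f;                   // initialize directly from the host

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();              // wait for the GPU before reading on the host

    printf("data[0] = %f\n", data[0]);    // prints 2.0
    cudaFree(data);
    return 0;
}

Previously the same program would have needed separate host and device allocations plus cudaMemcpy calls in both directions, which is the bookkeeping CUDA 6's unified memory is meant to eliminate.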

More details on NVIDIA CUDA 6 can be found via the NVIDIA Newsroom. The unified memory support makes sense and was expected, considering that a recent NVIDIA Linux driver update introduced a new Unified Kernel Memory module for unifying the memory space between the GPU's video memory and the system's RAM.


For those who missed the earlier news items, NVIDIA is dropping 32-bit Linux CUDA support and we have a lot of new NVIDIA Linux GPU benchmarks coming!