NVIDIA Talks Up Numba For GPGPU Computing With Python
Numba is designed to JIT-compile Python code to reach C/C++ levels of performance, using LLVM for optimizations and supporting GPU offloading as well. NVIDIA is promoting Numba in the context of CUDA.
Numba supports both multi-core CPUs and CUDA-capable GPUs for accelerating math-heavy Python code. Its design allows CUDA kernels to be written in Python syntax and executed seamlessly on the GPU. Numba is fully open-source, allows for rapid prototyping, includes a CUDA simulator, and supports cluster computing over a network.
Those wishing to learn more can read NVIDIA's "Seven Things You Might Not Know About Numba" or visit numba.pydata.org for the official project site.