Google Continues Working On CUDA Compiler Optimizations In LLVM


    Phoronix: Google Continues Working On CUDA Compiler Optimizations In LLVM

    While it will offend some that Google continues investing in NVIDIA's CUDA GPGPU language rather than an open standard like OpenCL, Google engineers continue making progress on a speedy, open-source CUDA compiler with LLVM...


  • #2
    So, the Google guys are smart enough to create a CUDA compiler from scratch that would outperform NVIDIA's own implementation, but they couldn't invest in OpenCL to turn the tide in the HPC field? The same Google that promotes open source and cross-vendor support as a selling point for its phones and services just showed the middle finger to both =/



    • #3
      Google is smart. Nobody wants to write low-level code just to call a function on a GPU. OpenCL is awful. NVIDIA got it right by extending C++ to hide these details. In OpenCL you have to do stupid junk like create an architecture context, then create a compiler context, then load the code into a buffer, then call the compiler, then create a kernel context, then create a kernel, just to call a function. I repeat, OpenCL sucks.
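      For readers who haven't seen it, a rough sketch of the host-side call sequence being complained about, using the standard OpenCL C API (error handling and the kernel source omitted; illustrative only, not a complete program):

      ```cpp
      // Illustrative OpenCL host-side setup sequence (error checks omitted).
      #include <CL/cl.h>

      const char* kernelSource =
          "__kernel void add(__global float* a) { a[get_global_id(0)] += 1.0f; }";

      void launch(cl_mem buffer, size_t n) {
          cl_platform_id platform;
          clGetPlatformIDs(1, &platform, nullptr);                  // pick a platform

          cl_device_id device;
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

          cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
          cl_command_queue queue =
              clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

          // Load the source into the runtime, build it, then create the kernel object...
          cl_program program =
              clCreateProgramWithSource(ctx, 1, &kernelSource, nullptr, nullptr);
          clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
          cl_kernel kernel = clCreateKernel(program, "add", nullptr);

          // ...and only now can the function actually be called.
          clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);
          clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
      }
      // The CUDA equivalent of the last step is a single line: add<<<blocks, threads>>>(a);
      ```
      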



      • #4
        nslay, if you only want to target one device, the whole boilerplate can be hidden behind a single function. Is that your only problem with OpenCL, or is there more wrong with it?

        I'm not complaining. Google's work on the code optimizers can be reused in combination with an OpenCL frontend.
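        The wrapping idea is easy to sketch in plain C++: put the multi-step setup in a constructor so the call site shrinks to two lines. `Context`, `Program`, and `Kernel` below are hypothetical stand-ins for the real OpenCL objects, just to show the shape:

        ```cpp
        #include <cassert>
        #include <string>
        #include <vector>

        // Hypothetical stand-ins for the OpenCL setup objects (not the real API).
        struct Context { std::string device; };
        struct Program { std::string source; };

        // Facade: all the boilerplate (context, compile, kernel object) is hidden
        // behind this one class; the caller never sees the individual steps.
        class Kernel {
        public:
            Kernel(const std::string& source, const std::string& name)
                : ctx_{"gpu0"},    // step 1: create a context
                  prog_{source},   // step 2: "compile" the program
                  name_(name) {}   // step 3: create the kernel object

            // step 4: "run" the kernel -- a CPU stand-in that adds 1 to each element.
            void run(std::vector<float>& data) const {
                for (float& x : data) x += 1.0f;
            }

        private:
            Context ctx_;
            Program prog_;
            std::string name_;
        };

        int main() {
            // The call site is two lines, regardless of how much setup hides inside.
            Kernel add("__kernel void add(...) { ... }", "add");
            std::vector<float> v{1.0f, 2.0f, 3.0f};
            add.run(v);
            assert(v[0] == 2.0f && v[2] == 4.0f);
            return 0;
        }
        ```
        
        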



        • #5
          Originally posted by nslay View Post
          Google is smart. Nobody wants to write low-level code just to call a function on a GPU. OpenCL is awful. NVIDIA got it right by extending C++ to hide these details. In OpenCL you have to do stupid junk like create an architecture context, then create a compiler context, then load the code into a buffer, then call the compiler, then create a kernel context, then create a kernel, just to call a function. I repeat, OpenCL sucks.
          With SYCL (https://www.khronos.org/sycl) this boilerplate largely goes away. You can write your kernels directly in C++, which makes it easier to experiment with offloading existing code to the GPU.
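          A minimal sketch of what that looks like with SYCL 2020 (needs a SYCL toolchain such as DPC++ or AdaptiveCpp to build; illustrative only):

          ```cpp
          #include <sycl/sycl.hpp>
          #include <vector>

          int main() {
              std::vector<float> data(1024, 1.0f);

              sycl::queue q;  // selects a default device (GPU if one is available)
              {
                  sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(data.size()));
                  // The kernel is just a C++ lambda -- no separate source strings,
                  // no runtime compiler calls, no kernel objects to manage.
                  q.submit([&](sycl::handler& h) {
                      sycl::accessor a(buf, h, sycl::read_write);
                      h.parallel_for(sycl::range<1>(data.size()),
                                     [=](sycl::id<1> i) { a[i] += 1.0f; });
                  });
              }  // buffer destructor waits and copies results back into data

              return data[0] == 2.0f ? 0 : 1;
          }
          ```
          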
