Mesa's Rusticl Lands Support For SPIR-V Programs

  • Mesa's Rusticl Lands Support For SPIR-V Programs

    Phoronix: Mesa's Rusticl Lands Support For SPIR-V Programs

    It's been a while since there have been any major additions to Mesa's Rusticl OpenCL implementation, led by Red Hat's Karol Herbst, but today he merged support for SPIR-V programs into this Rust-written driver. This SPIR-V support is necessary for eventually supporting SYCL and HIP...

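    For context, "SPIR-V programs" here means the OpenCL 2.1 / cl_khr_il_program path, where the application hands the driver a pre-compiled SPIR-V module instead of OpenCL C source. Below is a minimal host-side sketch of that path; the kernel.spv file and the "square" kernel name are placeholders, not anything from the actual merge, and error checking is omitted.

    // Minimal sketch: create an OpenCL program from a SPIR-V module via
    // clCreateProgramWithIL, the entry point that SPIR-V program support
    // in an implementation such as Rusticl exposes.
    #define CL_TARGET_OPENCL_VERSION 300
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, nullptr);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

        cl_int err;
        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);

        // Read a SPIR-V binary compiled offline (e.g. with clang + llvm-spirv).
        std::FILE *f = std::fopen("kernel.spv", "rb");
        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        std::vector<char> il(size);
        std::fread(il.data(), 1, il.size(), f);
        std::fclose(f);

        // The driver consumes the IL directly instead of OpenCL C source.
        cl_program prog = clCreateProgramWithIL(ctx, il.data(), il.size(), &err);
        clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
        cl_kernel kernel = clCreateKernel(prog, "square", &err);

        std::printf("kernel creation %s\n", err == CL_SUCCESS ? "succeeded" : "failed");

        clReleaseKernel(kernel);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return 0;
    }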

  • #2
    Amazing news... thank you Karol Herbst and of course the rest of the RustiCL guys. I hope RustiCL will soon be able to replace ROCm, etc.

  • #3
    Support for HIP? Interesting. When Rusticl was announced I was wondering what the point of supporting OpenCL was, when Blender had already moved away from it.

  • #4
    Nice. And hopefully radeonsi support will be merged next.

  • #5
    Originally posted by eszlari:
    Support for HIP? Interesting. When Rusticl was announced I was wondering what the point of supporting OpenCL was, when Blender had already moved away from it.

    The really big deal is machine learning workloads. They're all written against either ROCm (not the OpenCL part?) or CUDA. CUDA is still by far the best-supported interface, so NVIDIA still has a virtual monopoly in this space. It would be nice if Rusticl's OpenCL implementation were supported so that this stuff could run anywhere.

    PyTorch might eventually decide to implement an OpenCL backend as Rusticl proves itself, but that's a lot of work. If Rusticl developed a HIP interface that matched the existing ROCm one, that would short-cut the entire process. It's not super clear to me from the article whether this is indeed a goal, though.
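
    To make the "HIP interface matching ROCm" idea concrete, here is a minimal HIP host/kernel sketch against the public ROCm HIP runtime API. The saxpy kernel is purely illustrative and nothing below is Rusticl code; it just shows the kind of API surface a compatible implementation would have to reproduce so that existing ROCm-targeted builds could run unchanged.

    // Minimal HIP sketch using the standard ROCm HIP runtime API.
    // A Rusticl-backed HIP layer, if one materialized, would need to
    // accept exactly this kind of code.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    __global__ void saxpy(float a, const float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr, *y = nullptr;
        hipMalloc(reinterpret_cast<void **>(&x), n * sizeof(float));
        hipMalloc(reinterpret_cast<void **>(&y), n * sizeof(float));
        // (Host-to-device hipMemcpy calls to fill x and y omitted for brevity.)

        // Same launch style that hipify produces when porting CUDA code,
        // which is why a matching HIP implementation would let existing
        // ROCm-targeted builds run as-is.
        hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                           2.0f, x, y, n);
        hipDeviceSynchronize();

        hipFree(x);
        hipFree(y);
        std::printf("kernel launched\n");
        return 0;
    }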

  • #6
    Originally posted by Developer12:
    The really big deal is machine learning workloads. They're all written against either ROCm (not the OpenCL part?) or CUDA. CUDA is still by far the best-supported interface, so NVIDIA still has a virtual monopoly in this space. It would be nice if Rusticl's OpenCL implementation were supported so that this stuff could run anywhere.

    PyTorch might eventually decide to implement an OpenCL backend as Rusticl proves itself, but that's a lot of work. If Rusticl developed a HIP interface that matched the existing ROCm one, that would short-cut the entire process. It's not super clear to me from the article whether this is indeed a goal, though.

    PyTorch is changing quickly with the upcoming 2.0 release and its more generalized support for different compilation backends, but I doubt it will gain native OpenCL support. Even some CPU functions are being deprecated in favor of CUDA/accelerator-only paths, and direct OpenCL/Vulkan ports of projects never seem to run well.

    One of the most promising things I've seen in action is LLVM's MLIR backend, with a very interesting implementation discussed here:

    https://github.com/huggingface/diffu...ent-1416567370

    In a complex real workload (Stable Diffusion), the AMD 7000 series is actually performant with it, while the "traditional" ROCm (and ONNX, I think?) backend people were using before is slow as molasses.
