AMDKFD GPUVM Support Updated For Discrete Radeon GPUs, Adds Userptr Support

  • AMDKFD GPUVM Support Updated For Discrete Radeon GPUs, Adds Userptr Support


    Unfortunately the AMDKFD GPUVM support for discrete GPUs isn't looking like it will make it for the Linux 4.17 kernel cycle...


  • #2
    BTW, are the ROCK/kfd changes upstream yet, so that ROCm will eventually work on mainline?

    • #3
      Originally posted by juno View Post
      BTW, are the ROCK/kfd changes upstream yet, so that ROCm will eventually work on mainline?
      Not sure I understand the question. That is what the article is about - getting the last big chunk of kfd patches upstream. So not yet, but on the way.

      There are some patches in ROCK which may have to stay out-of-tree, e.g. changes to limits on how much memory can be used or locked by a single process, but those are more "specific to compute workloads and scenarios" rather than "specific to KFD".
      • #4
        If this is direct discrete GPU support for Virtual Machines my only response would be, "who gives a rusty ....?" We haven't gotten working OpenCL in the FOSS stack beyond 1.1, and that's what most of us have been clamoring for. I'm sure the datacenter folks needing GPUVM can wait a bit longer. The rest of us have been waiting nearly two years.

        • #5
          Originally posted by Marc Driftmeyer View Post
          If this is direct discrete GPU support for Virtual Machines my only response would be, "who gives a rusty ....?"
          It's "Virtual Memory" rather than "Virtual Machine" - the GPU page table code that you need before you can run any ROC application.

          There seems to be a convention of referring to the setup for a new process's virtual address space as "a VM" (things like VMID allocation & page table setup) but I don't *think* that convention is specific to ATI/AMD. It may just be that there is no other short / convenient / obvious initialism for a process-specific virtual address space so the term "VM" gets overloaded.

          Originally posted by Marc Driftmeyer View Post
          We haven't gotten working OpenCL in the FOSS stack beyond 1.1, and that's what most of us have been clamoring for. I'm sure the datacenter folks needing GPUVM can wait a bit longer. The rest of us have been waiting nearly two years.
          I thought OpenCL on ROC was already at 1.2 runtime with 2.0 kernel language support?
          Last edited by bridgman; 18 March 2018, 03:30 PM.
          • #6
            Originally posted by bridgman View Post
            I thought OpenCL on ROC was already at 1.2 runtime with 2.0 kernel language support?
            It is. Marc seems to be thinking of the Mesa OpenCL driver which is at 1.1.

            By the way, while all the experts are here I'm just going to blurt out a question. I'm running ROCm OpenCL on two Furys on Debian Unstable (aka Sid) using the repo for Ubuntu. The kernel module doesn't work, but I managed to get everything running using the kernel from ROCm/ROCK-Kernel-Driver (the AMDGPU driver with KFD used by the ROCm project, which also contains the matching Linux kernel).


            That kernel is a bit old, so I would rather run on something more recent. I found a branch called amdkfd-next here:


            I would think the latter should work, but it doesn't, and clinfo says I have no OpenCL devices when running that kernel.

            So my question is, why doesn't amdkfd-next work with my ROCm OpenCL driver, and what could I do to make it work?

            Sorry for dumb question. I can compile a kernel from source but that's pretty much as far as my knowledge goes.
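            A first sanity check (a generic sketch, not from this thread, though /dev/kfd and the "kfd" log tag are the standard ones used by the upstream amdkfd driver) is to confirm the running kernel actually exposed the KFD interface before blaming the OpenCL runtime:

```shell
# Check whether the running kernel exposes the KFD compute interface.
# /dev/kfd is the device node created by the amdkfd driver; if it is
# absent, clinfo will report no OpenCL devices no matter what userspace
# stack is installed.
if [ -e /dev/kfd ]; then
    echo "kfd: present"
else
    echo "kfd: missing"
fi

# Kernel log lines from the driver are tagged "kfd" and often explain
# why a GPU was skipped (e.g. missing firmware or an unsupported ASIC).
# dmesg may need root; suppress errors and show only the last few hits.
dmesg 2>/dev/null | grep -i kfd | tail -n 5
```

            If /dev/kfd is missing, the problem is on the kernel side (config or driver) rather than in the ROCm userspace packages.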

            • #7
              Originally posted by Lucretia
              and can the OpenCL part be used on non-PCIe 3.0 machines yet? You mentioned it was going to be separated from ROCm.
              It turns out that we may be able to run the rest of the ROC stack without PCIe atomics as well. My understanding (subject to confirmation by testing) is that on Vega at least the MEC microcode no longer uses atomic operations, and substitutes regular read-modify-write operations instead. What I'm not sure of yet is whether the driver still checks for atomics and fails if they are not available - I suspect it does. Stay tuned.

              There are a couple of corner cases affected by that (eg sharing a single completion event between multiple queues) but apparently nothing that anyone actually uses.
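              For what it's worth, whether a platform advertises PCIe atomic support can be inspected from userspace. This is a generic sketch, not from the thread: it assumes lspci is installed, and "AtomicOpsCap" is the string lspci uses when decoding a device's Device Capabilities 2 register:

```shell
# Each PCIe device advertises which atomic operation sizes it can
# complete in its Device Capabilities 2 register; lspci -vv decodes
# this as an "AtomicOpsCap" line (e.g. "AtomicOpsCap: 32bit+ 64bit+").
# Showing the full capability block may require root.
if command -v lspci >/dev/null 2>&1; then
    lspci -vv 2>/dev/null | grep -i "AtomicOpsCap" \
        || echo "no AtomicOpsCap lines visible (try running as root)"
else
    echo "lspci not available"
fi
```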
              • #8
                Originally posted by Brisse View Post
                I found a branch called amdkfd-next here:


                I would think the latter should work, but it doesn't, and clinfo says I have no OpenCL devices when running that kernel.

                So my question is, why doesn't amdkfd-next work with my ROCm OpenCL driver, and what could I do to make it work?
                Fine question. I don't think that branch has Felix's latest patches yet - so far they have only gone to the mailing list for review.
                • #9
                  Originally posted by Lucretia
                  From what I can see, Oded's branch has been merged into Alex's drm-next-4.17 branch, unless I'm missing something.
                  Probably, although I expect the path was actually:

                  - Dave pulls Oded's branch into his drm-next
                  - Alex rebases his branch on Dave's branch
                  • #10
                    Originally posted by Brisse View Post

                    It is. Marc seems to be thinking of the Mesa OpenCL driver which is at 1.1.

                    By the way, while all the experts are here I'm just going to blurt out a question. I'm running ROCm OpenCL on two Furys on Debian Unstable (aka Sid) using the repo for Ubuntu. [...]
                    Of course, I have the AMDGPU-PRO OpenCL stack running on Debian Sid. I'm waiting for the integration to actually release the next version as Mesa 18 compliant, or to have the AMD OpenCL stack integrated properly into the kernel, so I don't have to muck with it any more.

                    As it stands, anything beyond Mesa 17.2.5 in Debian [.6, .7 and .8 of the 17.2.x branch don't exist there] won't let the AMDGPU-PRO OpenCL stack be recognized in Blender or any app requiring OpenCL 1.2.

                    FYI: if you read the dev comments regarding the PCIe 3.0 requirement for discrete GPUs, it is no longer a requirement - atomics are conditional now.



                    Comment: https://patchwork.freedesktop.org/patch/196060/
                    Last edited by Marc Driftmeyer; 18 March 2018, 07:04 PM.
