
Radeon Open Compute 1.3 Platform Brings Polaris & Other Features



    Phoronix: Radeon Open Compute 1.3 Platform Brings Polaris & Other Features

    AMD used the SC16 supercomputing conference today to announce version 1.3 of the Radeon Open Compute platform...


  • #2
    Nice, but a few questions remain:
    Is the "HCC" stuff coming into mainline LLVM? http://phoronix.com/scan.php?page=ne...U-F16-Lands-VI The GitHub links on the overview page regarding hcc are down/not public (#1, #2).
    Is ROCm now becoming the preferred solution for running OpenCL on Linux desktops, too? Will there be packages in distributions or will it be included in the amdgpu-pro package anytime soon?
    What does the virtualisation stuff mean?
    Originally posted by AMD
    ROCm Virtualization of the GPU hardware via OS Containers and Linux®'s Kernel Virtual Machine (KVM) - ROCm now supports Docker containerization, allowing end-users to simplify the deployment of an application in ROCm-enabled Linux server environments. ROCm also supports GPU Hardware Virtualization via KVM pass-through to allow the benefits of hardware-accelerated GPU computing in virtualized solutions.
    I have been able to use kvm pass-through before, what changed?
    Support for docker means I would be running kfd on the host and rocm in containers?



    • #3


      That slide sounds like we finally get open source OpenCL on 14th of December?



      • #4
        Originally posted by mibo View Post
        http://www.planet3dnow.de/cms/wp-con...-ROCm-SC16.png That slide sounds like we finally get open source OpenCL on 14th of December?
        I think it means you get OpenCL running over ROCm on 14th of December, and that is the version we are working on open sourcing.

        Originally posted by juno View Post
        Is the "HCC" stuff coming into mainline LLVM? http://phoronix.com/scan.php?page=ne...U-F16-Lands-VI GitHub links on the overview page regarding hcc are down/not public (#1, #2).
        I'm not sure how much of HCC is going into LLVM, but the ISA generation portion definitely is, and is shared with the backend used by the radeonsi graphics driver.

        Originally posted by juno View Post
        Is ROCm now becoming the preferred solution for running OpenCL on Linux desktops, too? Will there be packages in distributions or will it be included in the amdgpu-pro package anytime soon?
        Yes, we are in the process of replacing the current GPU backend (what we call "Orca" internally) going through graphics driver paths with one that runs through the ROCm stack instead. We don't plan to replace the current GPU backend for all of the currently supported hardware, just one or two of the ROCm-supported dGPUs AFAIK, but from Vega10 and on we will only be using the ROCm paths.

        Originally posted by juno View Post
        What does the virtualisation stuff mean?
        I have been able to use kvm pass-through before, what changed?
        Support for docker means I would be running kfd on the host and rocm in containers?
        re: virtualization it probably means we are testing under virtualization now, but I haven't gone through the slides in detail yet.

        re: docker, strictly speaking KFD is part of ROCm as well so you would probably be running the ROC runtime and application in a container. I haven't been very close to container usage, will check.
        Last edited by bridgman; 14 November 2016, 06:11 PM.



        • #5
          Originally posted by bridgman View Post

          I think it means you get OpenCL running over ROCm on 14th of December, and that is the version we are working on open sourcing.

          I'm not sure how much of HCC is going into LLVM, but the ISA generation portion definitely is, and is shared with the backend used by the radeonsi graphics driver.

          Yes, we are in the process of replacing the current GPU backend (what we call "Orca" internally) going through graphics driver paths with one that runs through the ROCm stack instead. We don't plan to replace the current GPU backend for all of the currently supported hardware, just one or two of the ROCm-supported dGPUs AFAIK, but from Vega10 and on we will only be using the ROCm paths.

          re: virtualization it probably means we are testing under virtualization now, but I haven't gone through the slides in detail yet.

          re: docker, strictly speaking KFD is part of ROCm as well so you would probably be running the ROC runtime and application in a container. I haven't been very close to container usage, will check.
          I'm afraid I still don't grasp what it actually means. Can you explain it to me like I'm five? Is ROC a software suite or just drivers and stuff?



          • #6
            I'm probably as puzzled as Azpegath.
            The way I understand it: ROCm is its own software stack, intended not for graphics but for compute - so OpenCL-centered - and based on the ROCK kernel driver instead of amdgpu.

            Is this driver meant to co-exist with amdgpu? Is it a replacement?
            Will it be upstreamed, and if not - why not?
            Why is there an extra kernel driver for that? Will OpenCL never be usable on a desktop one can also game on?

            Does virtualization mean VFIO passthrough? And if so, why does amdgpu not support it? Well, it does support it. Once.



            • #7
              @bridgman: I can't see my post right now (disappeared?), but thanks for the answers anyway

              Originally posted by bridgman View Post
              Yes, we are in the process of replacing the current GPU backend (what we call "Orca" internally) going through graphics driver paths with one that runs through the ROCm stack instead. We don't plan to replace the current GPU backend for all of the currently supported hardware, just one or two of the ROCm-supported dGPUs AFAIK, but from Vega10 and on we will only be using the ROCm paths.
              Do you think it will be possible to use it with supported hardware anyway, like CIK or SI owners can choose to use amdgpu right now?



              • #8
                Ahh, OK... none of my answers will help much if it's not clear what ROC is.

                ROC is Radeon Open Compute, what you think of as "the HSA stack" extended to support non-HSA-compliant dGPUs (hence the need for a different name) and with additional features specific to high performance computing.

                ---------------------

                The ROC stack includes a few different components:

                1. KFD (drivers/gpu/drm/amd/amdkfd)

                The KFD (originally "Kernel Fusion Driver" back when HSA was called FSA) exposes an additional set of IOCTLs to user space, initially designed around usermode queues and GPU access to unpinned memory via ATS/PRI protocol between ATC (Address Translation Cache) in the GPU and IOMMUv2 in the CPU. Kaveri was the first part to provide native support, and Carrizo added context switching aka Compute Wave Save/Restore.

                For anyone not familiar with the ATC/IOMMUv2 combination: it allows a dGPU to access unpinned memory and generates page faults at the IOMMU level, which are then handled by the upstream IOMMUv2 driver, while the ATC caches translations in the GPU and allows high-performance access to system memory.

                For dGPU we are not able to rely on IOMMUv2 since (a) some of our target market uses Intel and other CPUs without IOMMUv2 and (b) dGPUs rely heavily on local VRAM/HBM, which is not managed by IOMMUv2 anyway. As a result, the initial ROC implementation relied on pinning memory from userspace, which made it non-upstreamable. We recently finished implementing an eviction mechanism which allows "pinned from userspace" memory to be evicted anyway (after temporarily disabling the associated userspace queues). This lets dGPU ROC programs dynamically share physical memory with other DRM drivers, e.g. amdgpu/radeon, and will hopefully allow dGPU support in KFD to go upstream.

                The KFD relies on radeon/amdgpu for HW initialization and most memory management operations, and primarily interacts with HSA/ROC-specific hardware added to CI and above in the form of the MEC (Micro-Engine Compute) blocks within CP. It also talks directly to the IOMMUv2 driver on APUs.

                2. Libhsakmt

                The libhsakmt code (sometimes referred to as "thunk" or Radeon Open Compute Thunk) performs the same function as libdrm but for the new IOCTLs - basically a userspace wrapper for the kernel driver functionality.
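                To make the libdrm analogy concrete, here is a minimal sketch of what sits underneath a thunk call. The real libhsakmt exposes entry points such as hsaKmtOpenKFD(); the direct open() below is only to illustrate that the thunk is a thin userspace shim over the /dev/kfd device node, not the actual API.

                ```cpp
                #include <cstdio>
                #include <fcntl.h>
                #include <unistd.h>

                // Illustrative only: the thunk wraps open()/ioctl() calls on the
                // KFD device node the same way libdrm wraps /dev/dri nodes.
                int main() {
                    int fd = open("/dev/kfd", O_RDWR);
                    if (fd < 0) {
                        // Expected on kernels without amdkfd (or no ROC hardware).
                        std::printf("/dev/kfd not available (no KFD-enabled kernel here)\n");
                        return 0;
                    }
                    std::printf("/dev/kfd opened; thunk would now issue KFD ioctls\n");
                    close(fd);
                    return 0;
                }
                ```
                
                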

                3. ROC runtime

                This is the userspace driver that exposes ROC functionality to an application or toolchain (generally the latter). Unlike OpenGL or OpenCL, the runtime does not include functions to submit work, just functions to create and manage userspace queues where the application/toolchain can submit work directly.
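                The userspace-queue model can be sketched, very loosely, as a ring buffer the application writes packets into while the hardware advances a read pointer. Everything below (names, packet layout, the consumer thread standing in for the GPU) is hypothetical and only illustrates the "submit directly, no driver call per dispatch" idea; it is not the real AQL packet format or runtime API.

                ```cpp
                #include <atomic>
                #include <cstdint>
                #include <cstdio>

                // Hypothetical work packet; real AQL packets carry kernel object
                // pointers, grid dimensions, completion signals, etc.
                struct Packet { uint32_t kernel_id; uint32_t arg; };

                struct UserQueue {
                    static constexpr uint64_t kSize = 16;          // power of two
                    Packet ring[kSize];
                    std::atomic<uint64_t> write_index{0};
                    std::atomic<uint64_t> read_index{0};

                    // Application side: bump the write index and store the packet
                    // directly into shared memory (no kernel call, no doorbell here).
                    void submit(Packet p) {
                        uint64_t w = write_index.fetch_add(1);
                        ring[w % kSize] = p;
                    }
                    // "Hardware" side: drain packets by advancing the read index.
                    bool consume(Packet* out) {
                        uint64_t r = read_index.load();
                        if (r == write_index.load()) return false; // queue empty
                        *out = ring[r % kSize];
                        read_index.store(r + 1);
                        return true;
                    }
                };

                int main() {
                    UserQueue q;
                    for (uint32_t i = 0; i < 4; ++i) q.submit({i, i * 10});
                    Packet p;
                    int dispatched = 0;
                    while (q.consume(&p)) {
                        std::printf("dispatch kernel %u arg %u\n", p.kernel_id, p.arg);
                        ++dispatched;
                    }
                    std::printf("total %d\n", dispatched);
                    return 0;
                }
                ```
                
                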

                4. HCC

                The HCC compiler grew out of what was initially the Kalmar C++ AMP compiler, extended for C++17 and the parallel STL... basically open standards catching up with proprietary ones.
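                The parallel-STL style this targets looks like ordinary standard C++. A minimal sketch: with a full C++17 toolchain you would pass std::execution::par as the first argument to offload/parallelize the transform; the sequential form below keeps the example self-contained and toolchain-independent.

                ```cpp
                #include <algorithm>
                #include <cstdio>
                #include <numeric>
                #include <vector>

                int main() {
                    std::vector<float> a(8), b(8);
                    std::iota(a.begin(), a.end(), 1.0f);   // 1, 2, ..., 8

                    // C++17 parallel form would be:
                    //   std::transform(std::execution::par, a.begin(), ...);
                    std::transform(a.begin(), a.end(), b.begin(),
                                   [](float x) { return 2.0f * x; });

                    float sum = std::accumulate(b.begin(), b.end(), 0.0f);
                    std::printf("sum = %.1f\n", sum);      // 2*(1+...+8) = 72
                    return 0;
                }
                ```
                
                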

                5. HIP

                HIP is a portability suite (tools + libraries) which does most of the work of porting CUDA programs to a portable C++17 form that can run over NVCC or HCC.
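                Much of that porting is mechanical renaming of runtime entry points. The before/after pair in the comment below is a sketch (exact kernel-launch syntax depends on the HIP version); the table lists a few of the well-known one-to-one mappings.

                ```cpp
                #include <cstdio>

                // Sketch of a hipify-style rewrite:
                //   CUDA:  cudaMalloc(&p, n);  kernel<<<blocks, threads>>>(p);
                //   HIP:   hipMalloc(&p, n);   hipLaunchKernelGGL(kernel, blocks,
                //                                                 threads, 0, 0, p);
                int main() {
                    const char* mapping[][2] = {
                        {"cudaMalloc",            "hipMalloc"},
                        {"cudaMemcpy",            "hipMemcpy"},
                        {"cudaFree",              "hipFree"},
                        {"cudaDeviceSynchronize", "hipDeviceSynchronize"},
                        {"cudaStream_t",          "hipStream_t"},
                    };
                    for (auto& m : mapping)
                        std::printf("%-24s -> %s\n", m[0], m[1]);
                    return 0;
                }
                ```
                
                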

                ---------------------

                Until all of the KFD code gets upstream we are shipping the ROC stack separately; what we are doing is:

                - taking a copy of the amdgpu staging tree (the ROCM 1.3 release forked off agd5f's amd-staging-4.6 tree)
                - making some changes to amdgpu and TTM which cannot go upstream until the corresponding KFD code that uses them is upstream
                - adding a much newer version of amdkfd with dGPU support and various HPC features
                - testing and publishing in a set with matching versions of libhsakmt, ROC runtime, HCC and HIP

                Once we are able to get dGPU support in amdkfd upstream the ROC stack will become just another part of the open source and PRO stacks.

                ---------------------

                Not sure what the plans are for device ID switching inside the OpenCL runtime - I expect that specific device IDs will use the ROC back-end while everything else will use the current Orca back-end (we don't want to break existing users), so it may not be possible to run unsupported HW over ROCm until the OpenCL code gets open sourced.
                Last edited by bridgman; 14 November 2016, 08:42 PM.



                • #9
                  Another few questions:

                  * Has anyone tested it?
                  * If I install it now as this page says, what will I be able to do with these packages? Can I resume OpenCL GPU rendering in Blender?

                  I'm tempted to try this out now.



                  • #10
                    Forgot one: What cards are supported? All GCN?

