PlaidML Deep Learning Framework Benchmarks With OpenCL On NVIDIA & AMD GPUs


    Phoronix: PlaidML Deep Learning Framework Benchmarks With OpenCL On NVIDIA & AMD GPUs

    Pointed out by a Phoronix reader a few days ago and added to the Phoronix Test Suite, PlaidML is a deep learning framework that can run on CPUs via BLAS or on GPUs and other accelerators via OpenCL. Here are our initial benchmarks of this OpenCL-based deep learning framework, now developed as part of Intel's AI Group, tested across a variety of AMD Radeon and NVIDIA GeForce graphics cards.
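As a point of reference, PlaidML is typically used as a drop-in Keras backend; a minimal sketch of selecting it, assuming PlaidML is installed (`pip install plaidml-keras`) and a device has been configured with `plaidml-setup`:

```python
import os

# PlaidML ships a Keras backend; one documented way to select it is to
# point Keras at it before Keras is imported:
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# From here, `import keras` would route tensor ops through PlaidML's
# OpenCL back-end on the configured GPU. (Keras is deliberately not
# imported here so the sketch stays runnable without PlaidML installed.)
```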


  • #2
    Typos

    Originally posted by phoronix
    Over the weekend I carried out a wide variety of benchmarks with PlaidML and its OpenCL back-end ofr [sic] both NVIDIA and AMD graphics cards.
    Originally posted by phoronix
    Even the Radeon RX Vega 64 was comign [sic] in shy of the GTX 1060 and GTX 980.
    We may need Vega 128... even if it consumes 600W...
    Last edited by tildearrow; 14 January 2019, 05:23 PM.



  • #3
    Well, that's horrific.



  • #4
    ROCm might need a little more work. If you're doing any kind of ML, DL, or tensor work, NVIDIA is basically the only option.



  • #5
    These results are clearly very bad for ROCm. But with performance this abysmal, one has to wonder whether the application is skewed toward NVIDIA, perhaps in the choice of workgroup parameters that happen to suit NVIDIA hardware, or elsewhere.



  • #6
    Yep - I haven't looked at PlaidML specifically, but I expect we are going to have to go through the same kind of transition we did with graphics drivers: initially app developers optimized only for NVIDIA's proprietary drivers, but now they are testing and tuning on the open-source drivers instead and putting more time and attention into AMD support, which shows up in the benchmark results.

    One thing we used to see is explicit coding for 32-thread wavefronts (AMD GPUs use 64-thread wavefronts), which leaves half the hardware unused.
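That mismatch is easy to quantify; a minimal Python sketch (illustrative helper, not from PlaidML) of how a work-group size hardcoded for 32-lane warps wastes SIMD lanes on 64-lane hardware:

```python
import math

def wavefront_utilization(local_size: int, wavefront_width: int) -> float:
    """Fraction of SIMD lanes doing useful work when a work-group of
    `local_size` items is packed into `wavefront_width`-lane wavefronts."""
    wavefronts = math.ceil(local_size / wavefront_width)
    return local_size / (wavefronts * wavefront_width)

# A kernel tuned for NVIDIA's 32-lane warps:
print(wavefront_utilization(32, 32))  # 1.0 -- every lane busy
# The same launch on AMD GCN's 64-lane wavefronts:
print(wavefront_utilization(32, 64))  # 0.5 -- half the hardware idle
```

In OpenCL the portable fix is to query `CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE` via `clGetKernelWorkGroupInfo` at runtime rather than baking in a vendor's warp width.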