Intel's MKL-DNN/DNNL 2.0 Beta 3 Release Adds SYCL + Data Parallel C++ Compiler


  • #1

    Phoronix: Intel's MKL-DNN/DNNL 2.0 Beta 3 Release Adds SYCL + Data Parallel C++ Compiler

    Intel's open-source MKL-DNN, now named the Deep Neural Network Library (DNNL) and serving deep learning frameworks such as TensorFlow, PyTorch, DeepLearning4J, and others, is nearing its version 2.0 release. DNNL 2.0 adds support for Data Parallel C++, the new language Intel is introducing as part of its oneAPI initiative...

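    For context, a minimal sketch of the DNNL C++ flow, assuming the v2.x headers (dnnl.hpp) and the usual engine/stream/primitive pattern; with a DPC++/SYCL-enabled build the same code can target engine::kind::gpu, though the SYCL-specific interop entry points shifted between the beta releases:

        // Hedged sketch: a forward ReLU through oneDNN's (DNNL) v2.x C++ API.
        // engine::kind::cpu keeps the example runnable anywhere; a DPC++/SYCL
        // build lets the same code use engine::kind::gpu instead.
        #include <algorithm>
        #include <vector>
        #include <dnnl.hpp>

        int main() {
            using namespace dnnl;

            engine eng(engine::kind::cpu, 0);   // swap to engine::kind::gpu for a SYCL device
            stream strm(eng);

            memory::dims shape = {1, 8, 4, 4};  // N x C x H x W
            memory::desc md(shape, memory::data_type::f32, memory::format_tag::nchw);
            memory src(md, eng), dst(md, eng);

            // Fill the source tensor (direct handle access is valid for CPU engines;
            // GPU engines need mapping or interop copies).
            std::vector<float> host(1 * 8 * 4 * 4, -1.0f);
            std::copy(host.begin(), host.end(),
                      static_cast<float *>(src.get_data_handle()));

            // Describe, create, and execute a forward ReLU primitive.
            eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                         algorithm::eltwise_relu, md, 0.0f);
            eltwise_forward::primitive_desc relu_pd(relu_d, eng);
            eltwise_forward(relu_pd).execute(strm,
                    {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
            strm.wait();
            return 0;
        }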

  • #2
    Typo:

    Originally posted by phoronix
    by The Khoronos Group and

    • #3
      And for the CPU part, look here:


      ...and at run-time a "master function" uses the CPUID instruction to select a version most appropriate for the current CPU. However, when the master function detects a non-Intel CPU, it almost always chooses the most basic (and slowest) function to use, regardless of what instruction sets the CPU claims to support. This has earned it the nickname of the "cripple AMD" routine since 2009.[10] As of 2019, Intel's MKL, which remains the numeric library installed by default along with many pre-compiled mathematical applications on Windows (such as NumPy, SymPy, and MATLAB), still significantly underperforms on AMD CPUs by ignoring their supported instruction sets.[11] However, setting the environment variable MKL_DEBUG_CPU_TYPE=5 overrides the vendor-string-dependent code-path choice and activates AVX2 instructions on AMD processor-based systems, resulting in equal or even better performance when compared to Intel CPUs.[12][13]
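      A minimal sketch of applying that override from inside a program, assuming MKL only reads MKL_DEBUG_CPU_TYPE when its dispatcher first initializes (exporting the variable in the shell before launch sidesteps that assumption entirely); the variable is undocumented and was reportedly removed from later MKL releases:

        // Hedged sketch: steer MKL's vendor-string dispatcher onto the AVX2
        // code path on a non-Intel CPU. Assumes MKL inspects the variable no
        // earlier than the first library call made below.
        #include <cstdio>
        #include <cstdlib>
        #include <mkl.h>   // MKL umbrella header; declares cblas_ddot among others

        int main() {
            // "5" selects the AVX2 path; undocumented, and reportedly dropped
            // from MKL builds newer than the ones discussed here.
            setenv("MKL_DEBUG_CPU_TYPE", "5", /*overwrite=*/1);

            double x[] = {1.0, 2.0, 3.0, 4.0};
            double y[] = {4.0, 3.0, 2.0, 1.0};
            // Any MKL routine from here on runs through the overridden dispatch.
            double dot = cblas_ddot(4, x, 1, y, 1);
            std::printf("dot = %f\n", dot);
            return 0;
        }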

      • #4
        Let's make a Cripple Intel then.

        (Well, technically we have been doing so with Meltdown)
