Intel Compute Runtime 24.13.29138.7 Brings Improved OpenCL/OpenGL Sharing


  • Intel Compute Runtime 24.13.29138.7 Brings Improved OpenCL/OpenGL Sharing

    Phoronix: Intel Compute Runtime 24.13.29138.7 Brings Improved OpenCL/OpenGL Sharing

    As the first new release to Intel's open-source Compute Runtime stack in about one month for this OpenCL and Level Zero compute support, Intel Compute Runtime 24.13.29138.7 was released this morning with much improved OpenCL/OpenGL sharing and interoperability on Linux, out-of-the-box support for the Xe kernel graphics driver, new optimizations, and many other changes...


  • #2
    This is why AMD needs to abandon ROCm and wholesale adopt oneAPI, as the UXL Foundation has. ROCm will never become a viable alternative to CUDA. Never. AMD makes GREAT hardware. Best in the x86-64 world. But they have never in their history proved they can achieve that status in software and API frameworks. Never. The market has never accepted their frameworks for any length of time. And not only is ROCm an algorithmic mess, it’s also a mess to install and maintain. And that dooms ROCm even more than its insufficiencies compared to oneAPI and CUDA. And let’s not even go down the pitiful route of CPU and GPU support. It’s laughable compared to CUDA and even oneAPI. For GOD’S SAKE, Intel has support back to Skylake.

    AMD…Lisa Su…your NUMBER 1 mission is to make the best x86-64 CPUs and GPUs that run oneAPI flawlessly and better than any Intel solution. That’s the best revenge you can have against Intel. Take their stuff and their IP and wreck their own hardware with it. The sales will follow.



    • #3
      What is this even for, exactly? What exactly is a "compute runtime"? I'm pretty sure I already have drivers and libraries and things installed. So what is this?



      • #4
        Originally posted by quaz0r View Post
        What is this even for, exactly? What exactly is a "compute runtime"? I'm pretty sure I already have drivers and libraries and things installed. So what is this?
        As mentioned in the article, OpenCL and Level Zero compute on Intel iGPUs/dGPUs.
        Michael Larabel
        https://www.michaellarabel.com/



        • #5
          Originally posted by quaz0r View Post
          What is this even for, exactly? What exactly is a "compute runtime"? I'm pretty sure I already have drivers and libraries and things installed. So what is this?
          Yes, you have all those things mentioned, but they are there mostly for rendering some object on your screen. A compute runtime allows your GPU to do compute jobs that at one time were solely the domain of the CPU and its built-in math co-processor: for instance, facial recognition from a video stream, analysis of the fluid dynamics of air over the surface of a certain wing geometry, AI inference, etc. In other words, compute runtimes literally use the GPU as a sort of Supercomputer on a Card for the compute task at hand, which has nothing to do with visualization or “graphics” despite the name Graphics Processing Unit, or GPU. Although once the compute task is complete, you can turn around and use that same GPU to visualize the results, depending on the task. The CPU at this point becomes more or less a task manager.

          In a way the GPU is just the logical extension of the original math co-processors, back in the day when CPUs did not have them built into the die itself. I actually had an AMD 286-powered 16-bit laptop in 1990 with a slot where you could insert a math co-processor chip to offload mathematics code from the CPU. I dropped a Cyrix math co-processor into that slot and began to run rings around the more advanced desktops in the meteorological lab, which had Intel 386SX 32-bit chips with NO math co-processor, so their CPUs were bogged down by the additional load of high-level maths IN ADDITION to whatever else they were being tasked to do. Of course there was a primitive sort of logic in the code that would recognize the co-processor and direct most mathematical operations to it instead of the CPU. Think of it as an algorithmic turbo-charger.

          Today's GPUs, particularly with a compute runtime, can be considered literally an entirely separate computer residing in your case alongside the CPU that one typically thinks of as “the computer.”
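
          To make that concrete, here is a tiny, hypothetical example of what such a GPU compute job looks like in OpenCL C: a kernel that adds two arrays, with the GPU running one copy of the function per array element. (The name vec_add is just made up for illustration.)

          Code:
          // vec_add.cl -- a minimal, hypothetical OpenCL C kernel.
          // The runtime launches one work-item (GPU thread) per array element.
          __kernel void vec_add(__global const float *a,
                                __global const float *b,
                                __global float *out)
          {
              size_t i = get_global_id(0);   // this work-item's index
              out[i] = a[i] + b[i];
          }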



          • #6
            Originally posted by wikipedia
            In psychology, theory of mind refers to the capacity to understand other people by ascribing mental states to them. A theory of mind includes the knowledge that others' beliefs, desires, intentions, emotions, and thoughts may be different from one's own. Possessing a functional theory of mind is crucial for success in everyday human social interactions. People utilize a theory of mind when analyzing, judging, and inferring others' behaviors. The discovery and development of theory of mind primarily came from studies done with animals and infants. Factors including drug and alcohol consumption, language development, cognitive delays, age, and culture can affect a person's capacity to display theory of mind. Having a theory of mind is similar to but not identical with having the capacity for empathy or sympathy.
            The longer I've been alive, the more I repeatedly run into this, and I think the heart of the issue has to do with theory of mind, or the stark lack thereof in a lot of people, possibly particularly in the segment of the population who tends to be more technically oriented.

            Jumbotron, your response is so sincere and earnest it brought a big smile to my face. I too recall those days. I remember trying to figure out whether I was going to be able to run Quake on my 486. I think they said you needed a math coprocessor and I was like, shit, do I have that? LOL

            Originally posted by Jumbotron
            The CPU at this point becomes more or less a task manager
            I've been programming as a hobby for many years. To my mind, everything is either a program or a library. I suppose you could come up with a few seemingly different things to mention if you really tried. There are interpreted languages, so if you "ran" a Python script, for instance, the script itself isn't a "program" or an "executable" in the traditional sense. But what is actually being "run", the interpreter, is of course still a program. Programs and libraries. Everything comes down to programs and libraries.

            I've been curious about GPU programming for a while now, though I haven't gotten around to fully diving into it yet. I once took a brief glance at what using OpenCL might look like from a programming perspective. From what I could tell, you would have your OpenCL source file, you would load it into a buffer in your C/C++ program or whatever, then you would pass that to a function from the OpenCL library, which would compile your OpenCL kernel; maybe another function call would run the kernel, and then you would get back whatever was calculated in memory, which you would then of course go on to use in your main program.
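
            If I understood it right, that host-side flow would look roughly like the sketch below. This is just my reading of it, with all error checking omitted and a made-up vec_add kernel and array sizes for illustration; it assumes you link against the ordinary OpenCL library (-lOpenCL).

            Code:
            #define CL_TARGET_OPENCL_VERSION 300
            #include <CL/cl.h>
            #include <stdio.h>

            int main(void)
            {
                /* Pick the first platform and GPU device the loader reports. */
                cl_platform_id platform;
                cl_device_id device;
                clGetPlatformIDs(1, &platform, NULL);
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

                cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
                cl_command_queue queue =
                    clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

                /* The kernel source "loaded into a buffer" -- here just a string. */
                const char *src =
                    "__kernel void vec_add(__global const float *a,"
                    "                      __global const float *b,"
                    "                      __global float *out)"
                    "{ size_t i = get_global_id(0); out[i] = a[i] + b[i]; }";

                /* Hand the source to the runtime, which compiles it for the device. */
                cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
                clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
                cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

                /* Copy the inputs into device buffers, make an output buffer. */
                float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
                cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                           sizeof a, a, NULL);
                cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                           sizeof b, b, NULL);
                cl_mem dout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);

                /* Run the kernel: one work-item per array element. */
                clSetKernelArg(kernel, 0, sizeof da, &da);
                clSetKernelArg(kernel, 1, sizeof db, &db);
                clSetKernelArg(kernel, 2, sizeof dout, &dout);
                size_t global = 4;
                clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

                /* Read the result back into host memory and use it. */
                clEnqueueReadBuffer(queue, dout, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);
                for (int i = 0; i < 4; i++)
                    printf("%g\n", out[i]);
                return 0;
            }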

            So, to summarize, in my mind OpenCL is a library. And a library of course is a set of functions that one calls from a program.

            Now let us turn our attention to the subject of this Phoronix article, the "Intel Compute Runtime". Right off the bat I'm confused. "Runtime"? I don't know what this language is about, or what it's referring to. Again, to me, everything is either a program or a library. A library is a set of functions that one calls from a program. A program is "run", while one does not generally refer to "running" a library. So when we use language like "runtime", we are referring to ... a program?

            Let's take a further look at the language we encounter here.

            Originally posted by Michael
            As the first new release to Intel's open-source Compute Runtime stack in about one month
            "Runtime stack." OK, well, a stack, one could say, generally refers to a collection of more than one thing. So a "runtime stack", I guess, must be ... a set of multiple programs?

            Originally posted by Michael
            for this OpenCL and Level Zero compute support
            Hmm, I'm not specifically familiar with what Level Zero is, but I was at least pretty sure that OpenCL is a library. So what are we talking about? We've taken something that as far as I'm aware has always been implemented as a library, and implemented it here as ... a set of "runtime" programs? Is that really what happened? Or is there something weird happening with the language here? Let me take a look at what Level Zero is. I'm guessing it's a new kind of library/API they came up with to interface the hardware. Let's see if I'm right.

            Originally posted by Intel
            The objective of the oneAPI Level Zero API is to provide direct-to-metal interfaces to offload accelerator devices. It is a programming interface that can be published at a cadence that better matches Intel hardware releases and can be tailored to any device needs. It can be adapted to support broader set of languages features, such as function pointers, virtual functions, unified memory, and I/O capabilities.
            OK, I think I was right. That sounds a lot like a library to me. So we have OpenCL, a library, and this Level Zero thing, which apparently also is a library. So I'm still just as confused as ever.
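
            Out of curiosity I looked at what actually calling Level Zero is supposed to look like, and it does look exactly like calling a library. Here is a minimal sketch of my own (assuming the Level Zero loader and headers are installed, linked with -lze_loader, no error handling) that just initializes it and counts the available drivers:

            Code:
            #include <level_zero/ze_api.h>
            #include <stdio.h>

            int main(void)
            {
                /* Initialize Level Zero and ask how many drivers are present.  */
                /* These calls go through the loader library, which dispatches  */
                /* to whatever user-space implementation is installed.          */
                zeInit(ZE_INIT_FLAG_GPU_ONLY);

                uint32_t driver_count = 0;
                zeDriverGet(&driver_count, NULL);   /* first call: count only */
                printf("Level Zero drivers found: %u\n", driver_count);
                return 0;
            }

            So at least from the calling side, it behaves exactly like a library.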

            Originally posted by Michael
            This user-space driver stack
            OK, so when people say "drivers" I think they are generally referring to a kernel module, a library, or both. I don't think I've ever seen a program or set of programs referred to as a "driver." So still I have no idea what we're actually talking about here. Let's take a look at the GitHub page for the "Intel Compute Runtime."

            Originally posted by Intel
            The Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver is an open source project providing compute API support (Level Zero, OpenCL) for Intel graphics hardware architectures (HD Graphics, Xe).
            OK, it "provides" the API. So it's a library. It must be a library. It just must be.

            Originally posted by Intel
            NEO is the shorthand name for Compute Runtime contained within this repository. It is also a development mindset that we adopted when we first started the implementation effort for OpenCL. The project evolved beyond a single API and NEO no longer implies a specific API.
            OK, it's not an API. In fact, it's a mindset. What? I feel like my very first initial impression that something weird is happening here, that something weird is happening with the language here, was spot-on.

            Originally posted by Intel
            Directly linking to the runtime library is not supported
            "Runtime library", I still don't know what this language means. And linking to it is not supported? What does that mean? You can't link to it, so ... it's not a library? I don't get it.

            Oh, I see, if you want to use Level Zero, link to the Level Zero library over here. And if you want to use OpenCL, link to the OpenCL library over here. Of course. So ... what in all holy fuck is the "Intel Compute Runtime" then? I still don't know.

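            My best guess, for what it's worth: your program only ever links against the ordinary API libraries (the OpenCL ICD loader, or the Level Zero loader), and the "Compute Runtime" is the Intel backend those loaders find and dispatch to at, well, run time. Under that assumption, a little sketch like this (built with -lOpenCL) would list whichever backends happen to be installed, Intel's included:

            Code:
            #define CL_TARGET_OPENCL_VERSION 300
            #include <CL/cl.h>
            #include <stdio.h>

            int main(void)
            {
                /* We link only against the OpenCL loader (libOpenCL.so).      */
                /* At run time it enumerates the installed vendor backends --  */
                /* the "compute runtimes" -- and dispatches our calls to them. */
                cl_uint n = 0;
                clGetPlatformIDs(0, NULL, &n);          /* how many platforms? */

                cl_platform_id platforms[16];
                if (n > 16) n = 16;
                clGetPlatformIDs(n, platforms, NULL);

                for (cl_uint i = 0; i < n; i++) {
                    char name[256] = {0};
                    clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                                      sizeof name, name, NULL);
                    printf("Platform %u: %s\n", i, name);
                }
                return 0;
            }

            If that's right, then "runtime" here just means the vendor piece that does the actual work behind the API library, which would also explain why you never link to it directly.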
