
Yes, Mesa Is Working Towards GLVND Support


  • Yes, Mesa Is Working Towards GLVND Support

    Phoronix: Yes, Mesa Is Working Towards GLVND Support

    With yesterday's news about AMD planning GLVND support for their Linux driver -- which follows the NVIDIA 361 driver being the first to ship GLVND, years after NVIDIA began working on this OpenGL Vendor-Neutral Dispatch Library -- many have been wondering how Mesa fits into the equation...


  • #2
    Wouldn't it make OpenGL even slower? If I understand it correctly, this brand-new lib has to actually redirect calls into the real lib, etc.?



    • #3
      Originally posted by SystemCrasher View Post
      Wouldn't it make OpenGL even slower? If I understand it correctly, this brand-new lib has to actually redirect calls into the real lib, etc.?
      Do you really think Nvidia would add something that slows down their implementation? ;-)

      libGL.so is a wrapper around libGLX and libGLdispatch. Conceptually, it
      just exports GL and GLX functions and forwards them to libGLX and
      libGLdispatch to deal with. The implementation is a bit more complicated
      to avoid the overhead of an extra indirect jump every time an app calls
      an OpenGL function.
      I actually haven't looked into any of the code myself, but I recall from the presentation the Nvidia guys gave that there was potential for lower overhead versus Mesa's existing dispatch code.
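
      Roughly, the forwarding idea looks like this (a minimal sketch with made-up names, not GLVND's actual code; the real entry points are generated in assembly precisely to shave off overhead like this):

      Code:
      /* One slot per OpenGL function; the current vendor's driver
       * fills this table in when its context is made current. */
      struct gl_dispatch {
          void (*Flush)(void);
          void (*Finish)(void);
          /* ... one pointer per GL entry point ... */
      };

      /* Per-thread current table (C11 _Thread_local). */
      static _Thread_local const struct gl_dispatch *current_table;

      /* The exported libGL.so symbol forwards exactly one level. */
      void glFlush(void)
      {
          current_table->Flush();
      }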
      Last edited by tarceri; 17 January 2016, 10:28 AM.



      • #4
        Originally posted by SystemCrasher View Post
        Wouldn't it make OpenGL even slower? If I understand it correctly, this brand-new lib has to actually redirect calls into the real lib, etc.?
        Yes, they said that somewhere (I don't remember where).

        Edit: Here's a solution suggestion from the original ABI proposal

        (5) Even though it is 2012, some important OpenGL applications still use immediate mode OpenGL in very API-heavy ways. In such cases, even just minimal dispatching overhead has a significant impact on performance. How can we mitigate the performance impact in such scenarios?

        PROPOSED: The performant solution is to online-generate code directly into the top-level function of the API library. The API library should provide a function that vendors can call, when they deem it thread-safe to do so, that replaces the code in an API library function with vendor-provided code.
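
        To illustrate what "online-generate code directly into the top-level function" could mean in practice, here is a toy x86-64 sketch (my own illustration, not GLVND code): it overwrites the first five bytes of the public entry point with a direct jmp into the vendor's function, so later calls bypass dispatch entirely. It assumes the two functions are within ±2 GiB of each other and that no thread is executing the entry point while it is patched:

        Code:
        #include <stdint.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Toy stand-ins for a public API entry point and the vendor's
         * implementation of the same function. */
        void vendor_glFlush(void) { /* fast vendor code */ }
        void api_glFlush(void)    { /* slow path: table lookup + indirect call */ }

        /* Overwrite the start of `from` with "jmp rel32" to `to`.
         * x86-64 only; a production version would also have to
         * synchronize with concurrently executing threads. */
        static void patch_jump(void *from, void *to)
        {
            long page = sysconf(_SC_PAGESIZE);
            void *base = (void *)((uintptr_t)from & ~((uintptr_t)page - 1));

            /* The 5-byte patch may straddle a page boundary. */
            mprotect(base, 2 * page, PROT_READ | PROT_WRITE | PROT_EXEC);

            int32_t rel = (int32_t)((uint8_t *)to - ((uint8_t *)from + 5));
            uint8_t code[5] = { 0xE9 };           /* opcode for jmp rel32 */
            memcpy(code + 1, &rel, sizeof rel);
            memcpy(from, code, sizeof code);

            mprotect(base, 2 * page, PROT_READ | PROT_EXEC);
        }

        int main(void)
        {
            patch_jump((void *)api_glFlush, (void *)vendor_glFlush);
            api_glFlush();    /* now jumps straight into vendor_glFlush */
            return 0;
        }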
        Last edited by abu_shawarib; 17 January 2016, 10:29 AM.



        • #5
          It doesn't handle two different vendor libraries for the same X screen, although the ABI is set up such that it would be possible to add that capability later on.
          That's unfortunate, because that's probably what 95% of people want: render GNOME with the Intel iGPU while any game gets dispatched to NVIDIA's OpenGL implementation. The question is how it could figure something like that out on its own. There is probably no way to tell which program is 3D-intensive and which of the installed vendor drivers controls the most powerful GPU.



          • #6
            Very good project.
            Looking forward to being able to use multiple GPUs simultaneously.
            If only there were a way to do this even without a common pass-through library.
            I would like to see some sort of dependency-injection management system instead of messing around with library names or requiring all sorts of pass-through libraries.
            Last edited by plonoma; 17 January 2016, 11:49 AM.



            • #7
              Originally posted by SystemCrasher View Post
              Wouldn't it make OpenGL even slower? If I understand it correctly, this brand-new lib has to actually redirect calls into the real lib, etc.?
              I asked myself the same question, but I think the only function calls that have to be redirected are those to glXGetProcAddress.
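
              For reference, the usual pattern looks like this (a sketch; the typedef name is mine): the app resolves the pointer once through glXGetProcAddress, which is where the dispatch library gets to interpose, and every later call goes straight to the vendor's function:

              Code:
              #include <GL/glx.h>

              /* Resolve once; subsequent calls have no extra indirection
               * beyond the function pointer itself. */
              typedef void (*PFN_GLFLUSH)(void);
              static PFN_GLFLUSH real_glFlush;

              void flush_gl(void)
              {
                  if (!real_glFlush)
                      real_glFlush = (PFN_GLFLUSH)
                          glXGetProcAddress((const GLubyte *)"glFlush");
                  real_glFlush();
              }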



              • #8
                Originally posted by tarceri View Post

                Do you really think Nvidia would add something that slows down their implementation? ;-)
                Well, that's what made me really curious. I can imagine that at some point they care more about simplifying their life than about anything else. They have had quite a bumpy road on Linux. But they also added plenty of bumps to nouveau's road, so they shouldn't complain about it.

                And so I'm curious how much harm this initiative is going to land on my head while bringing me no advantages (because I do not run stinky blobs). Or maybe I've missed some fancy system-level magic that allows replacing a lib with no overhead. In that case I'm curious to learn the trick, since it sounds like it could be useful in quite a few cases.



                • #9
                  Originally posted by jf33 View Post
                  I asked myself the same question, but I think the only function calls that have to be redirected are those to glXGetProcAddress.
                  Hmm, interesting. That way it would really keep the extra overhead to a minimum.



                  • #10
                    Originally posted by blackout23 View Post
                    That's unfortunate, because that's probably what 95% of people want: render GNOME with the Intel iGPU while any game gets dispatched to NVIDIA's OpenGL implementation.
                    I'm no expert, but it seems to me that internally an iGPU and a dGPU are not even the same X screen; they are two different devices, with the output of one piped to the other. (It's as if the game were running on another X server, but everything ends up composited on the same display.)
                    See Bumblebee and other such hacks to get it working.
                    Or the whole concept of render nodes that is upcoming with DRI3.

                    Whereas:
                    It doesn't handle two different vendor libraries for the same X screen, although the ABI is set up such that it would be possible to add that capability later on.
                    The way I read this is that you can't run Nouveau and Nvidia's proprietary driver on the same card.
                    Or Radeon/RadeonSI and Catalyst (even if both are supposed to use the same stack underneath in the future and share the AMDGPU kernel module).
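
                    For context, a render node is just a second device file that exposes rendering but no modesetting, so a process can use a GPU without owning the display. A minimal sketch (the device path is an assumption; the numbering varies per system):

                    Code:
                    #include <fcntl.h>
                    #include <stdio.h>
                    #include <unistd.h>

                    int main(void)
                    {
                        /* Render nodes (/dev/dri/renderD*) allow off-screen
                         * rendering without X or DRM-master privileges. */
                        int fd = open("/dev/dri/renderD128", O_RDWR);
                        if (fd < 0) {
                            perror("open render node");
                            return 1;
                        }
                        /* ... hand the fd to the driver stack, e.g. GBM/EGL ... */
                        close(fd);
                        return 0;
                    }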

