Feral Adding AMD_shader_info To RADV Vulkan Driver

  • Feral Adding AMD_shader_info To RADV Vulkan Driver

    Phoronix: Feral Adding AMD_shader_info To RADV Vulkan Driver

    As further sign of Feral Interactive continuing to pursue Vulkan for their Linux games, a Feral developer today posted a patch for implementing the brand new AMD_shader_info extension for the RADV Mesa driver...


  • #2
Apparently there are some things I don't understand.

    Is Vulkan vendor agnostic?
    Why are these new extensions AMD_ (something)?
    If people have Intel cards, are these extensions of no use to them?

    Comment


    • #3
      About two weeks ago John Carmack said he is better at optimizing than GPU driver programmers:



      It's nice to have Feral contributing to advancing the GPU drivers themselves, instead of sitting around watching the paint dry. In my book that shows commitment to delivering the best port they can make.

      Comment


      • #4
        Originally posted by Eduardo Gäedke View Post
        Apparently there are some things I don't understand.

        Is Vulkan vendor agnostic?
        Why are these new extensions AMD_ (something)?
        If people have Intel cards, are these extensions of no use to them?
        Extensions proposed/developed by a vendor/organization are prefixed with that vendor's identifier, just as NV or even MESA extensions are, in both OpenGL and Vulkan. Nothing stops any other vendor/organization from implementing that extension... Only if it's later decided to make it an official part of the OpenGL or Vulkan standard does it receive the 'KHR' (Khronos) prefix instead. Intel developers can and probably will end up implementing AMD_shader_info, as it's useful to many parties.
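
        As a rough illustration of how those vendor prefixes show up in practice, here is a minimal sketch (plain C against the standard Vulkan headers; the helper name is made up and error handling is kept trivial) of probing a device for "VK_AMD_shader_info" before requesting it at device creation:

        ```c
        #include <stdbool.h>
        #include <stdlib.h>
        #include <string.h>
        #include <vulkan/vulkan.h>

        /* Sketch: check whether a physical device exposes a vendor-prefixed
         * extension (e.g. "VK_AMD_shader_info") before enabling it at device
         * creation. Assumes `physical_device` is a valid VkPhysicalDevice. */
        static bool has_device_extension(VkPhysicalDevice physical_device,
                                         const char *name)
        {
            uint32_t count = 0;
            vkEnumerateDeviceExtensionProperties(physical_device, NULL, &count, NULL);

            VkExtensionProperties *props = calloc(count, sizeof(*props));
            if (!props)
                return false;
            vkEnumerateDeviceExtensionProperties(physical_device, NULL, &count, props);

            bool found = false;
            for (uint32_t i = 0; i < count; i++) {
                if (strcmp(props[i].extensionName, name) == 0) {
                    found = true;
                    break;
                }
            }
            free(props);
            return found;
        }
        ```

        If that returns true, the name just gets added to ppEnabledExtensionNames in VkDeviceCreateInfo like any other extension, KHR-prefixed or not.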
        Michael Larabel
        https://www.michaellarabel.com/

        Comment


        • #5
          Originally posted by Eduardo Gäedke View Post
          Apparently there are some things I don't understand.

          Is Vulkan vendor agnostic?
          Why are these new extensions AMD_ (something)?
          If people have Intel cards, are these extensions of no use to them?
          I believe the purpose of Vulkan is to allow "close to the metal" programming, instead of the thick layers of abstraction that OpenGL provides. That approach has the advantage of squeezing out the last few percent of performance from a GPU, at the cost of more complicated code. So it's up to the programmer to choose between ease of development and the best performance he/she can get.

          Also, since different GPU vendors implement things differently in their hardware, once you expose those intimate parts you need dedicated extensions, like this one from Feral.

          Comment


          • #6
            Originally posted by Eduardo Gäedke View Post
            Is Vulkan vendor agnostic?
            Yes and no. It offers many extensions that are usable on most hardware, as well as hardware-specific ones.

            Extensions themselves aren't restricted; any vendor can support them at any time if they so choose. But some extensions are only useful on specific GPU hardware and won't be supported by other vendors simply because their hardware is different.

            Extensions required by the Vulkan spec are mandatory for everyone, and usually the best extensions eventually become standard for all, just as happened with OpenGL.

            Comment


            • #7
              Originally posted by Eduardo Gäedke View Post
              Apparently there are some things I don't understand.

              Is Vulkan vendor agnostic?
              Why are these new extensions AMD_ (something)?
              If people have Intel cards, are these extensions of no use to them?
              For this extension, it's at least partly returning AMD-specific information (the shader statistics), so that part wouldn't make sense on any other hardware. The disassembly is just specified to return a string, so theoretically any other vendor could implement that part if they wanted to.

              FWIW this is really just to assist with debugging/optimisation; it's not something we'd be using at runtime.
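
              To make that concrete, here is a minimal sketch of the sort of debugging use described above: pulling a pipeline's fragment-stage disassembly through the extension (assuming the device was created with "VK_AMD_shader_info" enabled and `pipeline` is a valid VkPipeline; the function name and the minimal error handling are just for illustration):

              ```c
              #include <stdio.h>
              #include <stdlib.h>
              #include <vulkan/vulkan.h>

              /* Sketch: dump the fragment-stage disassembly for a pipeline via
               * VK_AMD_shader_info. Purely a debugging/optimisation aid, not
               * something you'd call in a hot path. */
              static void dump_fragment_disassembly(VkDevice device, VkPipeline pipeline)
              {
                  PFN_vkGetShaderInfoAMD get_shader_info = (PFN_vkGetShaderInfoAMD)
                      vkGetDeviceProcAddr(device, "vkGetShaderInfoAMD");
                  if (!get_shader_info)
                      return;

                  /* First call with a NULL buffer to query how many bytes are needed. */
                  size_t size = 0;
                  if (get_shader_info(device, pipeline, VK_SHADER_STAGE_FRAGMENT_BIT,
                                      VK_SHADER_INFO_TYPE_DISASSEMBLY_AMD,
                                      &size, NULL) != VK_SUCCESS)
                      return;

                  char *text = malloc(size);
                  if (!text)
                      return;

                  /* Second call fills the buffer with the disassembly string. */
                  if (get_shader_info(device, pipeline, VK_SHADER_STAGE_FRAGMENT_BIT,
                                      VK_SHADER_INFO_TYPE_DISASSEMBLY_AMD,
                                      &size, text) == VK_SUCCESS)
                      fwrite(text, 1, size, stdout);

                  free(text);
              }
              ```

              A statistics query works the same way with VK_SHADER_INFO_TYPE_STATISTICS_AMD, which fills in a VkShaderStatisticsInfoAMD struct (register usage and the like) instead of a text buffer.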

              Comment


              • #8
                Originally posted by M@GOid View Post
                About two weeks ago John Carmack said he is better at optimizing than GPU driver programmers:
                http://www.pcgamer.com/john-carmack-...r-programmers/
                That article is either a complete misunderstanding or just straight-up dishonest (or "fake news"), depending on how you want to look at it.

                What John actually talked about was driver-level optimizations breaking optimizations in the games themselves, and vice versa. From what I've heard this is a regular occurrence and is to be expected when games and game-specific driver code paths are optimized pre- and post-launch. The driver developers optimize for the current version of a game, the game developers optimize for the current version of the drivers, and when both release new versions, neither is able to account for the changes made by the other.

                John, being a game developer, thinks the solution is for driver developers to simply not bother and let game developers like himself take care of the optimization work. Being a developer myself I can understand his frustration, but I still think the best solution is for game and driver developers to cooperate as they make their optimizations, so each can take the other's changes into account as they optimize their own part of the equation. Either that, or some agreed-upon "do not optimize" flag that causes drivers to skip their game-specific optimizations and run through the generic code paths.
                Last edited by L_A_G; 25 October 2017, 02:07 PM.

                Comment


                • #9
                  Originally posted by L_A_G View Post

                  John, being a game developer, thinks the solution is for driver developers to simply not bother and let game developers like himself take care of the optimization work.
                  This might be true for John and others like him, but a lot of implementations don't conform to the spec. Do we really expect developers to optimise for specific drivers when many don't even get that right?

                  Comment


                  • #10
                    I would agree with John if only <=DX11 & OGL existed, but now we have two types of graphics APIs. Programmers who want to do graphics without too much hassle can use DX11 & OGL, while everyone else uses the newer Vulkan and DX12.

                    Comment
