Freedreno Gallium3D Adds NIR Compiler Support

  • Freedreno Gallium3D Adds NIR Compiler Support

    Phoronix: Freedreno Gallium3D Adds NIR Compiler Support

    Following Intel's development of NIR as the new intermediate representation for Mesa, and the Raspberry Pi graphics driver switching to NIR, the Freedreno Gallium3D driver, the open-source user-space GPU driver for Qualcomm Adreno hardware, now has NIR support too...


  • #2
    You did miss UBO support: http://cgit.freedesktop.org/mesa/mes...46dad478d0680a
    Now imirkin is playing with the Dolphin emulator on freedreno.



    • #3
      What benefit is there in using NIR over going straight to Gallium3D?



      • #4
        I'm also curious. NIR is the third IR from Intel; the previous one was GLSL IR.

        For me this looks like a big mess of IRs. Mesa currently uses GLSL IR, Gallium3D uses TGSI, R600 uses LLVM IR, and LunarGLASS uses Bottom IR. How many different IRs do we need? And for various reasons a driver has to convert at least once from one IR to another. IMHO this brings too much overhead, which is negligible on x86/x86_64 but not on slower ARM CPUs.



        • #5
          Originally posted by blubbaer
          I'm also curious. NIR is the third IR from Intel; the previous one was GLSL IR.

          For me this looks like a big mess of IRs. Mesa currently uses GLSL IR, Gallium3D uses TGSI, R600 uses LLVM IR, and LunarGLASS uses Bottom IR. How many different IRs do we need? And for various reasons a driver has to convert at least once from one IR to another. IMHO this brings too much overhead, which is negligible on x86/x86_64 but not on slower ARM CPUs.
          One for the shader language used (GLSL, HLSL, C++, whatever).
          One for moving stuff around, and preferably for optimizations common to all GPUs (SPIR-V, TGSI, GLSL IR, etc.).
          One for optimizations specific to a GPU family (NIR, LLVM IR, etc.).

          The first is needed to actually write anything down.
          The second allows for a common front end and offline precomputation.
          The third actually allows GPU-specific stuff to happen.

          Since all GPU drivers are in fact independent, the number of needs and approaches is quite diverse too!

          SPIR-V will clean up a lot of that. (After all, a single front end is a very good idea.)

          But the third class of IR won't go anywhere.
          GPUs from different families are fundamentally different, thus needing different optimizations; they support different features, thus needing different tokens in the IR; and so on.
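          The three-layer split described above can be sketched as a toy pipeline (illustrative Python only; none of these structures mirror Mesa's actual IRs, and the GPU opcode names are invented):

```python
# Toy three-stage shader pipeline: language AST -> portable flat IR -> backend ops.
# Purely illustrative; these structures do not mirror Mesa's actual IRs.

# 1. Language-level form: a tree close to the source text (the GLSL IR role).
#    Represents: (a + 2.0) * 0.5
ast = ("mul", ("add", ("var", "a"), ("const", 2.0)), ("const", 0.5))

def lower(node, ir):
    """Flatten the tree into a linear list of ops (the portable-IR role)."""
    kind = node[0]
    if kind in ("var", "const"):
        return node                      # leaves pass through unchanged
    lhs = lower(node[1], ir)
    rhs = lower(node[2], ir)
    dst = ("var", f"t{len(ir)}")         # fresh temporary per op
    ir.append((kind, dst, lhs, rhs))
    return dst

# 2. Portable flat IR: an easy target for generic optimization passes.
ir = []
lower(ast, ir)

# 3. Backend lowering: map portable ops to made-up GPU opcodes
#    (the per-family IR role; "FADD"/"FMUL" are invented names).
GPU_OPCODES = {"add": "FADD", "mul": "FMUL"}
asm = [(GPU_OPCODES[op], dst, a, b) for op, dst, a, b in ir]
print(asm)
```

          Each stage exists for a different consumer: the tree for the language front end, the flat list for shared passes, and the final opcodes for one hardware family.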



          • #6
            Originally posted by blubbaer
            I'm also curious. NIR is the third IR from Intel; the previous one was GLSL IR.

            For me this looks like a big mess of IRs. Mesa currently uses GLSL IR, Gallium3D uses TGSI, R600 uses LLVM IR, and LunarGLASS uses Bottom IR. How many different IRs do we need? And for various reasons a driver has to convert at least once from one IR to another. IMHO this brings too much overhead, which is negligible on x86/x86_64 but not on slower ARM CPUs.
            Don't forget sb (r600, if you aren't using LLVM), codegen (nouveau), ir3 (freedreno), and qir (vc4), used in the backends of the compilers.. :-P

            But seriously, there are different IRs that serve different functions at different stages of the compiler. TGSI (and SPIR-V) are not IRs in the same sense that NIR is; they are more of a flat in-memory exchange format than something you can actually run optimization passes on.

            I expect there will be some perf improvement when they drop doing optimization at the GLSL level, or if we can ever (optionally) skip the pass through TGSI for gallium drivers. But conversion from one IR to another should mostly be linear time (O(n)), whereas other compiler steps are much more expensive. So it sounds like more of a problem to someone unfamiliar with the details than it actually is.
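            The linear-time point can be made concrete with a sketch: translating one flat IR into another is a single walk over the instruction stream, doing constant work per instruction (the opcode names below are invented, not real TGSI/NIR tokens):

```python
# Sketch: translating one flat IR into another is one linear walk, O(n).
# Opcode names here are made up; no relation to real TGSI/NIR tokens.

SRC_TO_DST = {
    "ADD": "fadd",
    "MUL": "fmul",
    "MOV": "copy",
}

def translate(instructions):
    """One traversal, constant work per instruction -> O(n) overall."""
    out = []
    for op, *operands in instructions:
        out.append((SRC_TO_DST[op], *operands))
    return out

shader = [("MOV", "r0", "in0"),
          ("MUL", "r1", "r0", "r0"),
          ("ADD", "out0", "r1", "r0")]
print(translate(shader))
# -> [('copy', 'r0', 'in0'), ('fmul', 'r1', 'r0', 'r0'), ('fadd', 'out0', 'r1', 'r0')]
```

            Real translations also remap registers and rewrite addressing modes, but the shape stays the same: one pass, no backtracking, so the cost is dwarfed by the optimization passes themselves.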



            • #7
              Originally posted by degasus
              You did miss UBO support
              Not to mention MRT support, which Ilia also pushed recently.. we are getting tantalizingly close to GL3 :-)

              Originally posted by degasus
              Now imirkin is playing with the Dolphin emulator on freedreno
              Someone needs to post some screenshots of blob vs freedreno Dolphin renders :-P



              • #8
                Originally posted by przemoli
                One for the shader language used (GLSL, HLSL, C++, whatever).
                One for moving stuff around, and preferably for optimizations common to all GPUs (SPIR-V, TGSI, GLSL IR, etc.).
                One for optimizations specific to a GPU family (NIR, LLVM IR, etc.).
                Pretty close, except SPIR-V and TGSI are not really so good for moving stuff around. They are more like serializable formats defined as part of an API/ABI, for exchanging shaders. NIR (or LLVM, for drivers that use it) will be where you do most of the common optimizations.



                • #9
                  I'll try to shed some light here, to the best of my knowledge.

                  jakubo
                  What benefit is there in using NIR over going straight to Gallium3D?
                  Gallium3D is just a common API, sort of, providing common ground to share code between some of the drivers. Its IR is TGSI. This is just a common denominator that the gallium drivers consume, but all of them convert it to their own IR (as far as I know). I may be wrong, but I don't believe TGSI is meant to be anything more than a translation format; it does not have the necessary machinery to do optimizations. Freedreno has ir3, vc4 has qir, R600 has sb, radeonsi has LLVM, etc. NIR will give a common denominator among (possibly all?) drivers that will allow sharing of effort on optimizations, etc. It is designed from the ground up with this in mind.

                  blubbaer
                  I'm also curious. NIR is the third IR from Intel; the previous one was GLSL IR.

                  For me this looks like a big mess of IRs. Mesa currently uses GLSL IR, Gallium3D uses TGSI, R600 uses LLVM IR, and LunarGLASS uses Bottom IR. How many different IRs do we need? And for various reasons a driver has to convert at least once from one IR to another. IMHO this brings too much overhead, which is negligible on x86/x86_64 but not on slower ARM CPUs.
                  GLSL IR, as far as I have understood, will still be around. The reason is quite simple: GLSL IR is an AST used to represent shaders written in GLSL. It is very close to how the GLSL program looks and carries GLSL's functions, which makes it quite easy to extend support to newer versions of GLSL. It does not, however, necessarily correspond well to the underlying hardware, and GLSL IR is not really a good design to do optimizations on, so we need something else. All the backends have their own IR that matches their hardware. Somewhere in between there is the need for an IR that offers optimization possibilities but at the same time can be shared among drivers. That's where NIR comes in. As far as I understand, the plan is to some day turn off a lot of the optimizations in GLSL IR. This will free up a lot of resources, because GLSL IR is set up as a tree and so is very cache-unfriendly to traverse. It will also be a lot better to do these optimizations in NIR, as it has SSA, which is what most compiler passes nowadays are written against. And not to forget that sometimes it is just simpler to start anew than to modify something to kinda make it work. Connor wrote an SSA pass for GLSL IR, but it never got mainlined; instead he went on to write NIR.
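                  The SSA point can be illustrated with a toy constant-folding pass: because every SSA value is defined exactly once, a single forward walk with a value table is enough, with no tree traversal or reaching-definitions analysis (illustrative Python; this is not NIR's actual representation):

```python
# Toy constant folding over an SSA-style flat IR.
# Each destination is written exactly once, so one forward pass with a
# value table suffices; folded constants are tracked in `known` and their
# now-dead defining instructions are simply dropped from the output.

def const_fold(ir):
    known = {}            # SSA name -> known constant value
    out = []
    for op, dst, a, b in ir:
        av = known.get(a, a)              # substitute known constants
        bv = known.get(b, b)
        if isinstance(av, float) and isinstance(bv, float):
            known[dst] = av + bv if op == "add" else av * bv
        else:
            out.append((op, dst, av, bv))
    return out

# %1 = add 2.0, 3.0 ; %2 = mul %1, in0  ->  folds %1, keeps one mul
ir = [("add", "%1", 2.0, 3.0), ("mul", "%2", "%1", "in0")]
print(const_fold(ir))   # -> [('mul', '%2', 5.0, 'in0')]
```

                  With a tree IR the same pass needs a recursive traversal and care about re-assigned variables; the flat single-assignment form reduces it to one cache-friendly loop.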

                  So for Intel there needs to be a conversion from GLSL IR -> NIR -> i965 backend IR. This, however, is not that bad. Translating from one IR to another is mostly just one traversal over the instructions (O(n)), while an optimization pass may take O(n log n) or O(n²), so the time spent translating is small. I believe Apple is using LLVM for JavaScript in Safari; if I remember correctly they have a total of five IRs from JavaScript down to LLVM, each with its own set of optimizations. So three or four translations for a shader that is compiled only once is not that bad, really.

                  Disclaimer: I may be completely off track here, but I believe this is at least not an outright lie =)



                  • #10
                    Thanks a lot for the explanations.

                    It seems that I'm stuck on the classical compiler design with a single IR. I see why each different IR is needed, but it still looks kind of bloated to me.

