The Raspberry Pi Gallium3D Driver Has Made Much Progress In Less Than A Year

  • The Raspberry Pi Gallium3D Driver Has Made Much Progress In Less Than A Year

    Phoronix: The Raspberry Pi Gallium3D Driver Has Made Much Progress In Less Than A Year

    It was just last June that Eric Anholt left Intel for Broadcom to focus on creating the Broadcom VC4 open-source graphics driver stack for the Raspberry Pi to have a new DRM/KMS driver and a Gallium3D driver. In less than one year, he's made a lot of progress...

  • #2
    I greatly value what Eric has been doing for the VC4 over the past year (God knows it's been much needed too), but I can't for the life of me understand why he's fallen into the same pitfall the entire GL ARB has been in for generations: supporting (legacy) desktop features useful to a marginal percentage of the software out there.

    This is a GLES part, short and simple; just let it shine as such, don't turn it into some crappy desktop part. Save yourself a metric ton of time on the GL side, and polish the GLES pipeline by tagging as many milestones as possible, letting the community test each milestone for you.

    Comment


    • #3
      Originally posted by darkblu View Post
      I greatly value what Eric has been doing for the VC4 over the past year (God knows it's been much needed too), but I can't for the life of me understand why he's fallen into the same pitfall the entire GL ARB has been in for generations: supporting (legacy) desktop features useful to a marginal percentage of the software out there.
      Maybe because the RPi is kind of legacy hardware in terms of performance? From my experience with the Pi, I can't say that I'll ever want to use modern software on it, while software from the 2004-2006 era is more or less usable.

      Comment


      • #4
        Originally posted by darkblu View Post
        I greatly value what Eric has been doing for the VC4 over the past year (God knows it's been much needed too), but I can't for the life of me understand why he's fallen into the same pitfall the entire GL ARB has been in for generations: supporting (legacy) desktop features useful to a marginal percentage of the software out there.

        This is a GLES part, short and simple; just let it shine as such, don't turn it into some crappy desktop part. Save yourself a metric ton of time on the GL side, and polish the GLES pipeline by tagging as many milestones as possible, letting the community test each milestone for you.
        (a) supporting gl in the driver by emulating a couple of things is *much* simpler than porting all the random gl apps to gles.. it isn't like w/ closed src drivers, where we need to write the entire gl stack per gpu vendor.. and (b) w/ mesa and gallium we are managing to handle a fair bit of that gl emulation in common/shared code.. some bits I added initially for freedreno, some parts eric added or improved, and so on.

        Comment


        • #5
          Originally posted by robclark View Post
          (a) supporting gl in the driver by emulating a couple of things is *much* simpler than porting all the random gl apps to gles.. it isn't like w/ closed src drivers, where we need to write the entire gl stack per gpu vendor.. and (b) w/ mesa and gallium we are managing to handle a fair bit of that gl emulation in common/shared code.. some bits I added initially for freedreno, some parts eric added or improved, and so on.
          Re (a), I don't share that sentiment. Surely it is simpler in terms of build effort to provide the API hooks and have old GL code all of a sudden start compiling, but I can't think of a legacy GL app I've come across which could not be ported to GLES in a matter of weeks, if not days, by a single experienced developer. And how efficiently the GLES featureset is going to be used by such GL-over-GLES emulation is another matter. Take the rudimentary example of quads - why let the driver build indices for quads at each array/buffer emit when the app can provide the correct data structures at the same or less effort? The fact that the driver can shuffle data from foreign formats does not mean it's the right place for the shuffle to happen. This is why GLES was invented - to solve exactly such misplaced efforts in the pipeline! At the end of the day you want GL apps of any value to be properly ported to GLES, and not have their API calls run-time translated, in no small part by the Pi's "muscular" CPU.
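
          For the record, here is roughly what that kind of port looks like. This is a hedged sketch, not code from any actual app: the glBegin/glEnd quad in the comment is the sort of legacy call the driver would otherwise have to emulate, and pos_attrib is assumed to come from a glGetAttribLocation() call elsewhere.

          #include <GLES2/gl2.h>

          /* Legacy desktop GL that a GLES2 port replaces:
           *     glBegin(GL_QUADS);
           *     glVertex2f(x0, y0); glVertex2f(x1, y0);
           *     glVertex2f(x1, y1); glVertex2f(x0, y1);
           *     glEnd();
           */
          static void draw_quad_gles2(GLint pos_attrib,
                                      float x0, float y0, float x1, float y1)
          {
              /* same four corners, but as a triangle fan the GPU takes natively */
              const GLfloat verts[] = { x0, y0,  x1, y0,  x1, y1,  x0, y1 };

              glEnableVertexAttribArray(pos_attrib);
              glVertexAttribPointer(pos_attrib, 2, GL_FLOAT, GL_FALSE, 0, verts);
              glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
          }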

          Re (b), that surely saves time. But the effort is of questionable benefit.

          See, I'm not trying to be a dick here and poo-poo Eric's efforts (actually, I have the fingers on both my hands crossed for him). But the Pi has been stuck with its less-than-perfect GLES2 stack for, what, two years now? During that time, people have tried to port some nice GLES apps but eventually gave up due to the state of the GLES stack on the platform (glsl compiler giving up under register pressure, anybody?). Now we get somebody who can mend the GLES situation, and instead his efforts get partially diverted toward things that people who literally learned GLES yesterday could solve at the application level with reasonable effort and a potentially better outcome. I hope at least Eric is having fun with that, because I surely am not while watching from the sidelines.

          Comment


          • #6
            Originally posted by darkblu View Post
            Re (a), I don't share that sentiment. Surely it is simpler in terms of build effort to provide the API hooks and have old GL code all of a sudden start compiling, but I can't think of a legacy GL app I've come across which could not be ported to GLES in a matter of weeks, if not days, by a single experienced developer. And how efficiently the GLES featureset is going to be used by such GL-over-GLES emulation is another matter. Take the rudimentary example of quads - why let the driver build indices for quads at each array/buffer emit when the app can provide the correct data structures at the same or less effort? The fact that the driver can shuffle data from foreign formats does not mean it's the right place for the shuffle to happen. This is why GLES was invented - to solve exactly such misplaced efforts in the pipeline! At the end of the day you want GL apps of any value to be properly ported to GLES, and not have their API calls run-time translated, in no small part by the Pi's "muscular" CPU.
            Then why hasn't someone ported everything already? Otoh, stuff that does have gles support tends to bitrot, because those aren't the code paths that upstream (which is mostly desktop) tests.

            At any rate, most of the things we have to emulate, in the common form they are used in, are not really such a big deal. Quads, for example, are typically used by window mgrs and a few other places, w/ a low # of vertices, and frequently aren't already using an index buffer. So it is not a huge index buffer to generate to remap the quads to tris.
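
            To make it concrete, the remap is just this kind of loop - a sketch, not the actual vc4/freedreno code, and hw_draw_indexed_tris() is a made-up stand-in for the driver's real indexed draw path:

            /* stand-in for the driver's real indexed-triangle draw path */
            static void hw_draw_indexed_tris(const unsigned short *idx, unsigned count)
            {
                (void)idx; (void)count;   /* real driver: kick the hardware here */
            }

            /* turn an unindexed GL_QUADS draw into an indexed triangle draw;
             * quad (v0,v1,v2,v3) -> tris (v0,v1,v2) and (v0,v2,v3) */
            static void emu_draw_quads(unsigned first, unsigned vertex_count)
            {
                unsigned quad_count = vertex_count / 4;
                unsigned short idx[6 * 256];  /* a compositor draws a handful of quads */

                if (quad_count > 256)
                    quad_count = 256;         /* sketch: real code would chunk or malloc */

                for (unsigned q = 0; q < quad_count; q++) {
                    unsigned short v = (unsigned short)(first + q * 4);
                    unsigned short *p = &idx[q * 6];
                    p[0] = v;  p[1] = v + 1;  p[2] = v + 2;
                    p[3] = v;  p[4] = v + 2;  p[5] = v + 3;
                }
                hw_draw_indexed_tris(idx, quad_count * 6);
            }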

            Another example, GL_CLAMP.. I have to emulate it in the shader. I think eric does too.. but intel does as well. We have to regenerate the shader based on texture clamp state, but in practice it is the same state each time, so in the common case we end up generating just a single (or maybe a small #) of variants. A few extra instructions in the shader is lost in the noise. Another one, two-sided color.. I emulate it w/ shader variants in exactly the same way that radeon does (and intel, iirc) ;-)
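
            Roughly like this, in sketch form - the key/cache plumbing is simplified, the names are made up rather than mesa's actual internals, and real GL_CLAMP also interacts with the border color under linear filtering, which this glosses over:

            #include <stdbool.h>
            #include <stdio.h>

            /* state that gets baked into the generated fragment shader */
            struct fs_key {
                bool emulate_clamp;    /* sampler wrap mode is GL_CLAMP */
                bool two_sided_color;  /* would pick a color by gl_FrontFacing (not shown) */
            };

            /* regenerate the fragment shader for the current key; since apps
             * almost never flip this state, the variant cache stays tiny */
            static const char *generate_fs(const struct fs_key *key)
            {
                static char src[256];
                snprintf(src, sizeof(src),
                    "varying vec2 v_texcoord;\n"
                    "uniform sampler2D u_tex;\n"
                    "void main() {\n"
                    "    gl_FragColor = texture2D(u_tex, %s);\n"
                    "}\n",
                    key->emulate_clamp ? "clamp(v_texcoord, 0.0, 1.0)"
                                       : "v_texcoord");
                return src;
            }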

            There are probably a few things that won't be worth emulating.. meh, the point isn't to get certified gl drivers on these platforms, but just something that works pretty well for all the common cases.

            Re (b), that surely saves time. But the effort is of questionable benefit.

            See, I'm not trying to be a dick here and poo-poo Eric's efforts (actually, I have the fingers on both my hands crossed for him). But the Pi has been stuck with its less-than-perfect GLES2 stack for, what, two years now? During that time, people have tried to port some nice GLES apps but eventually gave up due to the state of the GLES stack on the platform (glsl compiler giving up under register pressure, anybody?). Now we get somebody who can mend the GLES situation, and instead his efforts get partially diverted toward things that people who literally learned GLES yesterday could solve at the application level with reasonable effort and a potentially better outcome. I hope at least Eric is having fun with that, because I surely am not while watching from the sidelines.
            Well, to start with, feel free to start porting piglit tests so we have a healthy gles2 based test suite then :-)

            Until then, it is worth spending a few minutes here and there to emulate some things (especially when there are gallium helpers for it already) if for no other reason than being able to get good piglit test coverage.

            Trust me, there are some fun challenges in that driver (compiler, cmdstream validation, etc), which I'm sure eric spends most of his time on. I really don't think gl support is delaying things, at least not more than the corresponding benefit it brings (piglit test coverage, etc).

            Comment


            • #7
              Originally posted by darkblu View Post
              At the end of the day you want GL apps of any value to be properly ported to GLES, and not have their API calls run-time translated, in no small part by the Pi's "muscular" CPU.
              Why? As Rob implied, everyone (tm) has GL, while nobody (tm) has GLES. Pi users are a tiny fraction of desktop users.

              Comment


              • #8
                Originally posted by robclark View Post
                Then why hasn't someone ported everything already? Otoh, stuff that does have gles support tends to bitrot, because those aren't the code paths that upstream (which is mostly desktop) tests.
                Things don't get ported, and/or they bitrot, mainly due to a lack of collective interest. An anecdote: some years ago, when some guys needed GLQuake on their fresh-out-of-the-oven GLES platform, they ported said game to GLES2 in about a week, while also learning GLES2 in the process. They were just interested enough.

                At any rate, most of the things we have to emulate, in the common form they are used in, are not really such a big deal. Quads, for example, are typically used by window mgrs and a few other places, w/ a low # of vertices, and frequently aren't already using an index buffer. So it is not a huge index buffer to generate to remap the quads to tris.
                I was giving an example of how the apps could handle a GL feature better. I'm sure most use cases span quads over windows, screens, and similar minimalistic cases.

                Another example, GL_CLAMP.. I have to emulate it in the shader. I think eric does too.. but intel does as well. We have to regenerate the shader based on texture clamp state, but in practice it is the same state each time, so in the common case we end up generating just a single (or maybe a small #) of variants. A few extra instructions in the shader is lost in the noise. Another one, two-sided color.. I emulate it w/ shader variants in exactly the same way that radeon does (and intel, iirc) ;-)

                There are probably a few things that won't be worth emulating.. meh, the point isn't to get certified gl drivers on these platforms, but just something that works pretty well for all the common cases.
                I understand nobody is aiming at certified GL. But even the things you're doing now, most seasoned GLES devs have long ago come to terms with. You're not really helping them with those GL hooks; you're mostly helping GL package maintainers. The fact that I cannot write >30 lines of glsl on the platform without bumping into an obstacle is a much greater detriment to me than any two-sided, plane-clipped, clamped setup some codebase might cook up.

                Well, to start with, feel free to start porting piglit tests so we have a healthy gles2 based test suite then :-)
                Ironically enough, my Pi got shelved when my personal GLES2 test suite (which originates from another mobile part) hit a major glsl compiler bug on the 2nd or 3rd unit test of the lot, after I had already come to terms with essentials missing from the stack (depth textures and fences come to mind). So I had to put the entire operation on hold in hopes of better times. Anyhow, if piglit is a requirement, then so be it - I'll give Eric's tree a spin one of these weekends. BTW, when is control flow expected to show up (in case Eric is around)?

                Comment


                • #9
                  Originally posted by darkblu View Post
                  Things don't get ported, and/or they bitrot, mainly due to a lack of collective interest. An anecdote: some years ago, when some guys needed GLQuake on their fresh-out-of-the-oven GLES platform, they ported said game to GLES2 in about a week, while also learning GLES2 in the process. They were just interested enough.
                  The lack of collective interest is the issue. Meanwhile we still need games and other interesting apps to test with ;-)

                  Originally posted by darkblu View Post
                  I was giving an example of how the apps could handle a GL feature better. I'm sure most use cases span quads over windows, screens, and similar minimalistic cases.
                  I'm not arguing that there aren't cases where the app could make better decisions.. or trying to dissuade anyone from porting something to gles. It's just that there are still a whole lot of gl1/gl2 games, apps, piglit tests, etc out there, and it is silly to ignore them, especially when it is relatively trivial to emulate a couple of gl features to have enough desktop gl to get 'em running.. With gallium, it's not like we have to write an entire other driver for gl vs gles.

                  Comment


                  • #10
                    Originally posted by darkblu View Post
                    I greatly value what Eric has been doing for the VC4 over the past year (God knows it's been much needed too), but I can't for the life of me understand why he's fallen into the same pitfall the entire GL ARB has been in for generations: supporting (legacy) desktop features useful to a marginal percentage of the software out there.

                    This is a GLES part, short and simple; just let it shine as such, don't turn it into some crappy desktop part. Save yourself a metric ton of time on the GL side, and polish the GLES pipeline by tagging as many milestones as possible, letting the community test each milestone for you.
                    This is kind of stupid. How many games in the desktop Linux world use GLES? None. How many use old versions of GL? A lot. OpenArena, SuperTuxKart, Minetest, Xonotic/Nexuiz, Cube/Sauerbraten, OpenJK, etc. are just the ones I can think of off the top of my head that would not work on GLES without significant rewriting but would work great on GL1.x or GL2.x. Most compositors also need GL. If anything needs to go away, it's GLES: it serves no purpose other than fragmenting a GL API world that is already cemented around standard OpenGL. If you were writing an Android driver, then sure, polish GLES, but on desktop Linux GLES has very little presence at all. I have a Qualcomm box with freedreno, and it's a night-and-day difference compared to GLES-only boxes. Games work, compositing works; it feels like a proper desktop. Every other ARM board I've used feels like a potato in comparison due to the lack of basic GL support.

                    Comment
