GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance


  • GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance

    Phoronix: GNOME's Window Rendering Culling Was Broken Leading To Wasted Performance

    It turns out that for the GNOME 3.34 and 3.36 series, Mutter's window rendering culling code was broken, which led to extra rendering of windows that weren't even visible... A fix is in the works and can lead to performance doubling or more...
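    For anyone wondering what "culling" means here: the compositor is supposed to skip painting windows that are completely hidden behind opaque windows above them. A minimal sketch of the idea in plain C (hypothetical types and data, not Mutter's actual code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical window record: position/size plus whether it is opaque. */
    typedef struct { int x, y, w, h; bool opaque; } Win;

    /* True if rectangle a lies entirely inside rectangle b. */
    static bool contained_in(const Win *a, const Win *b) {
        return a->x >= b->x && a->y >= b->y &&
               a->x + a->w <= b->x + b->w &&
               a->y + a->h <= b->y + b->h;
    }

    int main(void) {
        /* Stacking order: index 0 is bottom-most, the last entry is top-most. */
        Win stack[] = {
            { 200, 200,  400,  300, true  },  /* old dialog, fully hidden  */
            {   0,   0, 1920, 1080, true  },  /* maximized opaque window   */
            { 150, 150,  400,  300, false },  /* translucent popup on top  */
        };
        int n = sizeof stack / sizeof stack[0];

        /* Paint back to front, skipping any window fully covered by a single
         * opaque window higher in the stack (real compositors accumulate an
         * opaque region instead of checking single rectangles). */
        for (int i = 0; i < n; i++) {
            bool culled = false;
            for (int j = i + 1; j < n; j++)
                if (stack[j].opaque && contained_in(&stack[i], &stack[j]))
                    culled = true;
            printf("window %d: %s\n", i, culled ? "culled" : "painted");
        }
        return 0;
    }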


  • #2
    On the one hand this is very embarrassing. On the other hand, I'm happy improvements are coming.



    • #3
      waiting for gnome haters



      • #4
        Originally posted by CochainComplex View Post
        waiting for gnome haters


        Look at him, what a jackass. I hate that guy.



        • #5
          Originally posted by M@GOid View Post
          ....
          Q.E.D.



          • #6
            Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop, at whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.



            • #7
              Originally posted by eydee View Post
              Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop, at whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
              It's because they have to draw them pixel by pixel in real time; there are no premade textures and models. 2D has always been expensive, and as time went on, dedicated 2D fixed-function hardware was removed from modern GPUs. I think a cool idea would be to draw them with pixel shaders in the future, or even better, make them raytraced.



              • #8
                Well I hope that there are tests in place for that culling functionality now.

                /me mutters darkly about untested code

                (aside: surely even a low-end integrated GPU can render hundreds of UI windows per frame these days? It's just a matter of memory bandwidth, the scene should be fairly simple compared to, e.g., a game, and the contents of each window should be a texture - window contents that don't change shouldn't need re-rendering. Obviously you might break down a window into its component panels to improve granularity/re-rendering hit on a change, but the principle still applies. No way you re-render everything within a window, every time, unless you're a video app or game)
                Last edited by sykobee; 22 June 2020, 08:04 AM.
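                The "window contents that don't change shouldn't need re-rendering" point is basically damage tracking. A rough sketch of the idea in C (made-up names, nothing to do with Mutter's internals): only re-render a window's cached texture when the client reports damage, otherwise just reuse it.

                #include <stdbool.h>
                #include <stdio.h>

                /* Hypothetical client window: a cached texture handle plus a damage
                 * flag the client sets whenever its contents actually change. */
                typedef struct {
                    const char *name;
                    int         texture;   /* stand-in for a GPU texture handle */
                    bool        damaged;
                } Win;

                /* Re-render a window into its texture only if it was damaged;
                 * otherwise the compositor just reuses the cached texture. */
                static void compose_frame(Win *wins, int n) {
                    for (int i = 0; i < n; i++) {
                        if (wins[i].damaged) {
                            printf("re-rendering %s into texture %d\n",
                                   wins[i].name, wins[i].texture);
                            wins[i].damaged = false;
                        } else {
                            printf("reusing cached texture for %s\n", wins[i].name);
                        }
                        /* ...then draw the texture as a quad at the window's position. */
                    }
                }

                int main(void) {
                    Win wins[] = {
                        { "video player", 1, true  },  /* changes every frame */
                        { "text editor",  2, false },  /* idle, nothing typed */
                        { "file manager", 3, false },
                    };
                    compose_frame(wins, 3);
                    return 0;
                }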



                • #9
                  Originally posted by eydee View Post
                  Sometimes it's mind-blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop, at whatever resolution, with whatever bugs. The question shouldn't be whether they can reach 60 fps, but whether it's 5000 or 6000.
                  I guess the difference from coherent 3D output is that the WM/compositor does not always know what the software is doing and when it has finished pushing its result, all while trying to maintain fluent/smooth output. In contrast, a game engine syncs the tasks so that the output is pleasant and presented at once rather than chopped... but if the devs are in a hurry it happens there as well; some might remember the AC: Unity horrorshow.

                  Last edited by CochainComplex; 22 June 2020, 08:06 AM.
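                  Roughly the asymmetry: the compositor has to put a frame on screen every vblank whether or not every client finished drawing in time, so it picks up whichever buffers were last completed and reuses stale ones for everyone else. A toy sketch of that (hypothetical, not how Mutter is actually structured):

                  #include <stdbool.h>
                  #include <stdio.h>

                  /* Hypothetical client surface: the buffer currently on screen plus a
                   * flag saying the client has committed a newer one. */
                  typedef struct {
                      const char *name;
                      int         current_buffer;
                      int         pending_buffer;
                      bool        has_pending;
                  } Surface;

                  /* At every vblank the compositor presents *something*: new buffers
                   * from clients that finished in time, the old buffer for the rest.
                   * A game engine owns all the work and presents only complete frames. */
                  static void on_vblank(Surface *s, int n) {
                      for (int i = 0; i < n; i++) {
                          if (s[i].has_pending) {
                              s[i].current_buffer = s[i].pending_buffer;
                              s[i].has_pending = false;
                              printf("%s: new buffer %d\n", s[i].name, s[i].current_buffer);
                          } else {
                              printf("%s: reusing buffer %d\n", s[i].name, s[i].current_buffer);
                          }
                      }
                  }

                  int main(void) {
                      Surface surfaces[] = {
                          { "browser",  10, 11, true  },  /* finished a new frame in time */
                          { "terminal", 20,  0, false },  /* did not redraw this vblank   */
                      };
                      on_vblank(surfaces, 2);
                      return 0;
                  }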



                  • #10
                    Originally posted by eydee View Post
                    Sometimes it's mind blowing that GPUs are able to draw complex 3D scenes but can struggle with drawing 10 windows on a desktop.
                    I wonder if it is because GPUs these days have an architecture mainly intended for games and CAD (i.e. workloads where data can be retained)?

                    In many ways it all comes down to the transfer of data from main memory to GPU memory.
                    For example, a 3D scene (i.e. a large number of vertices) is uploaded to GPU memory once and is then there to be referenced whenever it is needed.

                    With windows displaying complex software (like a web browser), this data often needs to keep being sent (i.e. as pixels) because it changes very often (the copy on the GPU is already out of date). This unfortunately ends up blocking the pipeline, especially now that people have (wastefully, IMO) extremely high resolution displays requiring massive amounts of pixels to be sent.

                    If people had simpler UI designs (think boxy Motif) that could be drawn mainly with instructions (i.e. "draw a 20x20 box") rather than as a raster image, this could be much faster. But people in 2020 want their fancy "bling". This trend does seem to cycle, though. Perhaps in 2030 we will have less wasteful desktops? Who knows?
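                    Back-of-the-envelope numbers for that point (rough, and ignoring compression, damage regions, zero-copy paths and so on): re-sending a full 4K window as raw pixels every frame is orders of magnitude more data than describing a boxy UI as a short list of draw commands.

                    #include <stdio.h>

                    int main(void) {
                        /* Raster path: re-send a full 4K RGBA window 60 times per second. */
                        long long width = 3840, height = 2160, bytes_per_px = 4, fps = 60;
                        long long raster_bps = width * height * bytes_per_px * fps;

                        /* Command path: a Motif-style UI described as, say, 200 draw
                         * commands of ~32 bytes each ("draw 20x20 box at x,y", text runs). */
                        long long cmds = 200, bytes_per_cmd = 32;
                        long long command_bps = cmds * bytes_per_cmd * fps;

                        printf("raster:   %.1f MB/s\n", raster_bps / 1e6);   /* ~1990.7 MB/s */
                        printf("commands: %.3f MB/s\n", command_bps / 1e6);  /* ~0.384 MB/s  */
                        return 0;
                    }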

                    As it stands, many things can be retained on the GPU, but unfortunately not enough.
                    This ability to retain data also made the approach translate exceptionally well to remote UI systems; unfortunately, those are being neglected in these days of consumer electronics.
                    Last edited by kpedersen; 22 June 2020, 08:12 AM.

