Linux Anti-Aliasing Benchmarks With The GeForce GTX 980


  • Linux Anti-Aliasing Benchmarks With The GeForce GTX 980

    Phoronix: Linux Anti-Aliasing Benchmarks With The GeForce GTX 980

    The latest Linux graphics benchmarks I ran from the high-end Maxwell GeForce GTX 980 graphics card were some anti-aliasing tests...


  • #2
    What's up with CS:GO?



    • #3
      That's very strange, my GTX 680 running 331.38 supports 32x(8xMS, 24x CS), 16x (8xMS, 8xCS), 16x(4xSS, 4xMS), 16x(4xMS, 12xCS) and many more. Might this vary from driver to driver?



      • #4
        FXAA on Linux was absolutely awful the last time I used it. It just made EVERYTHING fuzzy.



        • #5
          Originally posted by LinuxID10T View Post
          FXAA on Linux was absolutely awful the last time I used it. It just made EVERYTHING fuzzy.
          Agreed, but it's the same on Windows really.

          At first FXAA seemed like a very good final AA solution, but I've grown to dislike it. It is an ugly hack compared to MSAA IMO. It has some advantages (a simple 'catch-all' for anything noisy, less memory usage than MSAA), but if you add things up it ends up about even with MSAA. I guess it also varies per game; in action-packed games with wild camera motion and cluttered scenes you are less likely to notice the shortcomings, but in games with simpler, cleaner geometry it performs poorly.

          The main issue with FXAA is the lack of sub-pixel motion, leading to jitter and temporal noise. For example, thin contrasting lines (like powerlines silhouetted against a sky) still look terrible. And I really like alpha-to-coverage transparency (Alpha2Mask in D3D), which IMO is a must for decent-looking undergrowth (thin grass strands) or scenes with dense foliage. Some FXAA games seem to use 2x MSAA + FXAA, but for typical alpha-to-coverage usage, 2x MSAA isn't sufficient (and I suspect FXAA slapped on top would turn the resulting dither into unacceptable flicker/noise). What I really hate about FXAA is that, for the games I make/play (which involve long-distance gunfights, aka 'hunting pixels on the horizon'), the lack of sub-pixel detail makes enemy silhouettes less recognisable, and they jitter around less predictably when they move.

          It seems many graphics programmers underestimate the enormous amount of information that can be stored across 4 neighboring pixels of three consecutive rendered frames.
          Last edited by Remdul; 11 October 2014, 02:14 PM.



          • #6
            Is it weird that with a large number of games I get roughly the same performance with Edge Detect and SSAA even if the framerate wasn't very high without the added AA? I have a Vapor-X 2GB 5870. It seems that games which are shader intensive, have poor shader optimization or outdated shaders are the only ones which get a performance impact.
            Last edited by Styromaniac; 11 October 2014, 03:21 PM.



            • #7
              Originally posted by Remdul View Post
              Agreed, but it's the same on Windows really.

              At first FXAA seemed like a very good final AA solution, but I've grown to dislike it. It is an ugly hack compared to MSAA IMO. It has some advantages (a simple 'catch-all' for anything noisy, less memory usage than MSAA), but if you add things up it ends up about even with MSAA. I guess it also varies per game; in action-packed games with wild camera motion and cluttered scenes you are less likely to notice the shortcomings, but in games with simpler, cleaner geometry it performs poorly.

              The main issue with FXAA is the lack of sub-pixel motion, leading to jitter and temporal noise. For example, thin contrasting lines (like powerlines silhouetted against a sky) still look terrible. And I really like alpha-to-coverage transparency (Alpha2Mask in D3D), which IMO is a must for decent-looking undergrowth (thin grass strands) or scenes with dense foliage. Some FXAA games seem to use 2x MSAA + FXAA, but for typical alpha-to-coverage usage, 2x MSAA isn't sufficient (and I suspect FXAA slapped on top would turn the resulting dither into unacceptable flicker/noise). What I really hate about FXAA is that, for the games I make/play (which involve long-distance gunfights, aka 'hunting pixels on the horizon'), the lack of sub-pixel detail makes enemy silhouettes less recognisable, and they jitter around less predictably when they move.

              It seems many graphics programmers underestimate the enormous amount of information that can be stored across 4 neighboring pixels of three consecutive rendered frames.
              Simply stated: all post-processing AA, including FXAA, sucks. FXAA, TXAA, and the variants from AMD use different post-processing filters. Essentially each is some kind of blur filter, similar to Gaussian blur, and the problem is that no additional subpixel data is generated. Without additional data it's impossible to create a better picture; the filter can only attempt to hide the ugliness. None of these AA variants can be compared to the other types of AA.

              Maxwell does, however, offer a couple of interesting new options on the scale of "proper" AA methods. Dynamic Super Resolution is a variant of SSAA with a different averaging filter; we'll have to wait and see whether this actually makes it better or worse than current SSAA. The other is Multi-Frame AA (MFAA), which reuses some sample data from previous frames. This technique might suffer from interesting artifacts under certain conditions, such as rapid camera movement. I do, however, imagine it would work well for relatively static camera positions, e.g. games with a top-down view. Obviously MFAA is inferior to MSAA, and is only an appropriate choice when MSAA is not an option performance-wise.
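              The idea behind MFAA can be sketched with a toy 1D example (this is only an illustration of temporal sample reuse, not NVIDIA's actual algorithm): each frame takes one sample per pixel at a different sub-pixel offset, and the resolve averages the current frame with the previous one. When the scene is static this behaves like 2x supersampling; when the geometry moves, the reused samples describe old geometry and produce artifacts.

              ```python
              # Toy 1D sketch of MFAA-style temporal sample reuse (illustrative only,
              # not NVIDIA's algorithm). One sample per pixel per frame, with the
              # sub-pixel sample offset alternating between frames.

              def coverage(x, edge):
                  """1.0 if the sample point lies inside the geometry (right of the edge)."""
                  return 1.0 if x >= edge else 0.0

              def render_frame(num_pixels, edge, offset):
                  """Render one frame: a single jittered sample per pixel."""
                  return [coverage(px + offset, edge) for px in range(num_pixels)]

              def mfaa_resolve(curr, prev):
                  """Average this frame's samples with the previous frame's samples."""
                  return [(c + p) / 2.0 for c, p in zip(curr, prev)]

              edge = 3.3  # geometry edge falls inside pixel 3

              # Static camera: two jittered frames act like 2x supersampling,
              # so the edge pixel resolves to a smooth intermediate value.
              prev = render_frame(8, edge, offset=0.25)
              curr = render_frame(8, edge, offset=0.75)
              static = mfaa_resolve(curr, prev)       # static[3] == 0.5

              # Moving edge: the previous frame sampled *old* geometry, so the
              # averaged result smears across pixels the edge has already left.
              prev_moving = render_frame(8, edge + 1.0, offset=0.25)
              moving = mfaa_resolve(curr, prev_moving)  # ghosting around pixel 4
              ```

              The static case shows why reusing the previous frame is almost free extra quality; the moving case shows the kind of artifact a real implementation has to detect and reject.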

              Originally posted by Styromaniac View Post
              Is it weird that with a large number of games I get roughly the same performance with Edge Detect and SSAA even if the framerate wasn't very high without the added AA? I have a Vapor-X 2GB 5870. It seems that games which are shader intensive, have poor shader optimization or outdated shaders are the only ones which get a performance impact.
              If a game gets roughly the same performance with and without MSAA or even SSAA, then something is throttling the performance: vsync, an in-game frame limit, or an enormous CPU overhead/bottleneck.

              If you get a medium performance impact with MSAA and a huge impact with SSAA, that's actually evidence of the game using the GPU efficiently. When applying AA, the vertex load and CPU overhead remain the same, while the fragment processing increases: 4x MSAA gives 4x the fragment shader load plus a minor overhead, 2x SSAA also gives 4x the fragment shader load (2x2 sub-pixels), and 4x SSAA gives a tremendous 16x fragment shader load.
              A hypothetical example:
              0xAA: ~2 ms transfer, sync, etc. + 3 ms vertex processing + 3 ms fragment processing = 8 ms total = 125 FPS
              2xSSAA: ~2 ms transfer, sync, etc. + 3 ms vertex processing + 12 ms fragment processing = 17 ms total = 58.8 FPS
              4xSSAA: ~2 ms transfer, sync, etc. + 3 ms vertex processing + 48 ms fragment processing = 53 ms total = 18.9 FPS
              4xMSAA: ~2 ms transfer, sync, etc. + 3 ms vertex processing + 12 ms fragment processing = 17 ms total = 58.8 FPS
              These are just approximations, but you get the point.
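              The cost model above can be written out as a small sketch. The per-stage millisecond figures and the fragment-load multipliers (including treating 4x MSAA as a 4x fragment load, as the example does) are the hypothetical values from this post, not measured numbers.

              ```python
              # Sketch of the cost model above: fixed overhead and vertex time stay
              # constant while fragment time scales with the shaded samples per pixel.

              OVERHEAD_MS = 2.0   # transfer, sync, etc.
              VERTEX_MS = 3.0     # unaffected by AA
              FRAGMENT_MS = 3.0   # baseline (no AA) fragment time

              def frame_stats(samples_per_pixel):
                  """Return (frame time in ms, FPS) for a fragment-load multiplier."""
                  total = OVERHEAD_MS + VERTEX_MS + FRAGMENT_MS * samples_per_pixel
                  return total, 1000.0 / total

              # Fragment-load multipliers as the example assumes them:
              modes = {
                  "0xAA":   1,    # one shaded sample per pixel
                  "2xSSAA": 4,    # 2x2 sub-pixels
                  "4xSSAA": 16,   # 4x4 sub-pixels
                  "4xMSAA": 4,    # the example assumes a 4x fragment load here
              }

              for name, spp in modes.items():
                  ms, fps = frame_stats(spp)
                  print(f"{name}: {ms:.0f} ms = {fps:.1f} FPS")
              ```

              Running it reproduces the figures above and makes the key point visible: the constant 5 ms of overhead plus vertex work is why the FPS hit is less than the raw fragment-load multiplier would suggest.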
