Intel Mesa Developers Working To Expose 16x MSAA For Skylake



    Phoronix: Intel Mesa Developers Working To Expose 16x MSAA For Skylake

    With the latest-generation graphics on Skylake processors, open-source Intel developers are working right now to expose 16x multi-sample anti-aliasing (MSAA) inside the Linux Mesa driver...
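    As a rough, hedged illustration (not from the article; it assumes an OpenGL 3.0+ context is already current), an application can check how many MSAA samples the driver exposes by querying GL_MAX_SAMPLES; on a Skylake iGPU with the patched Mesa driver this should report 16:

    Code:
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    /* Query the driver's MSAA limit; context creation (GLX/EGL) omitted. */
    void print_msaa_limit(void)
    {
        GLint max_samples = 0;
        glGetIntegerv(GL_MAX_SAMPLES, &max_samples);
        printf("Driver exposes up to %dx MSAA\n", max_samples);
    }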


  • #2
    I have two questions about AA. First, is it possible to use the iGPU for post-processing? And what about using SMAA on Linux?



    • #3
      Originally posted by ShFil
      Firstly, is it possible to use igpu for postprocessing?
      Anti-Aliasing isn't done in post-processing.

      (At some point in the distant past, Nvidia's GeForce cards used a hack to work around their lack of proper AA: render the scene at a higher resolution, then scale the output image down to the final resolution while applying some filtering, e.g. bilinear; a sketch of that downsampling step follows below.
      This slightly reduced jagged edges, but at the same time it bled blur all over the image and was a mathematically provable waste of resources {that's why 3dfx used a rotated grid, and ATI/AMD a pseudo-random pattern}.
      But hey, it let Nvidia add a bullet point that they were missing.)
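      (Purely illustrative sketch, not any vendor's actual code: the hack above, in its simplest form, is a box filter that averages each 2x2 block of high-resolution pixels down to one output pixel.)

      Code:
      #include <stdint.h>

      /* Average each 2x2 block of the high-res RGBA buffer into one
       * output pixel (a simple box-filter downsample). */
      void downsample_2x2(const uint8_t *hi, uint8_t *lo, int out_w, int out_h)
      {
          int hi_w = out_w * 2; /* high-res buffer is twice as wide */
          for (int y = 0; y < out_h; y++)
              for (int x = 0; x < out_w; x++)
                  for (int c = 0; c < 4; c++) { /* RGBA channels */
                      int sum = hi[((2*y)   * hi_w + 2*x)   * 4 + c]
                              + hi[((2*y)   * hi_w + 2*x+1) * 4 + c]
                              + hi[((2*y+1) * hi_w + 2*x)   * 4 + c]
                              + hi[((2*y+1) * hi_w + 2*x+1) * 4 + c];
                      lo[(y * out_w + x) * 4 + c] = (uint8_t)(sum / 4);
                  }
      }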

      The way Multi-Sampled Anti-Aliasing is done on modern graphics cards:
      - for each final pixel on the screen, evaluate the texture, lighting, etc. only once, so pixel shaders and other expensive computations aren't executed more than necessary.
      - for pixels that might be problematic (i.e. near the edge of a triangle), try to compute how much of the pixel the shape covers (e.g. by sampling the depth buffer and/or stencil buffer more than once). Either the pixel turns out to be 100% covered by the shape, so it is 100% opaque and gets rendered like any other pixel (just draw the output of the pixel shader), or it is only partially covered, so it is only partially opaque and its colour must be mixed with the output of all the other shapes partially covering it.
      It's an oversimplification, but you get the general idea: not much post-processing, but instead extra Z-buffer computation during the main rendering to better know how to mix the pixels (see the setup sketch below).
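      As a hedged sketch of what that looks like from the application side (standard GL 3.0 / ARB_framebuffer_object calls; the width/height parameters and the 16-sample count are placeholders, and entry-point loading via something like GLEW or epoxy is assumed): allocate a multisampled renderbuffer, draw into it, then "resolve" the samples with a blit.

      Code:
      #include <GL/gl.h>
      #include <GL/glext.h>

      /* Allocate a 16x multisampled color renderbuffer and attach it
       * to a framebuffer object. Error checking omitted for brevity. */
      GLuint create_msaa_fbo(GLsizei width, GLsizei height)
      {
          GLuint fbo, color_rb;
          glGenFramebuffers(1, &fbo);
          glGenRenderbuffers(1, &color_rb);

          glBindRenderbuffer(GL_RENDERBUFFER, color_rb);
          glRenderbufferStorageMultisample(GL_RENDERBUFFER, 16 /* samples */,
                                           GL_RGBA8, width, height);

          glBindFramebuffer(GL_FRAMEBUFFER, fbo);
          glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                    GL_RENDERBUFFER, color_rb);
          return fbo;
      }

      /* After drawing the scene into the MSAA FBO, "resolve" it: the
       * blit averages the per-pixel samples into the single-sample
       * default framebuffer. */
      void resolve_msaa(GLuint fbo, GLsizei width, GLsizei height)
      {
          glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
          glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
          glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                            GL_COLOR_BUFFER_BIT, GL_NEAREST);
      }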

      There are two gains:
      - compared to classical super-sampling, you make as few extra computations as possible (the pixel shader always runs only once; you can try to skip the extra Z-buffer samples when they aren't needed).
      - compared to the old GeForce hack (render at a higher resolution), you can spread the sub-samples more intelligently (back then 3dfx used a rotated grid, and ATI/AMD a pseudo-random pattern, to spread the sub-samples as much as possible vertically and horizontally and thus get the most out of the extra computation; see the pattern sketch below).
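      (Hypothetical sample positions, just to illustrate the spreading point: with an ordered grid the 4 sub-samples share only 2 distinct x and 2 distinct y coordinates, while a rotated grid gives 4 distinct values on each axis, so near-vertical and near-horizontal edges get finer coverage gradations.)

      Code:
      /* Sub-sample offsets within a pixel, in [0,1) x [0,1).
       * Illustrative values only, not any GPU's documented pattern. */
      static const float ordered_grid[4][2] = {
          { 0.25f, 0.25f }, { 0.75f, 0.25f },
          { 0.25f, 0.75f }, { 0.75f, 0.75f },
      };

      /* The same grid rotated about the pixel center: every sub-sample
       * now has a unique row and a unique column. */
      static const float rotated_grid[4][2] = {
          { 0.375f, 0.125f }, { 0.875f, 0.375f },
          { 0.125f, 0.625f }, { 0.625f, 0.875f },
      };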

      Basically, MSAA is the Z-buffer equivalent of what anisotropic filtering does for textures (take more texture sub-samples at a higher resolution per final screen pixel, in order to compensate for artifacts caused by perspective distortion).

      (And because it operated on both the Z-buffer *AND* the textures at the same time, 3dfx's FSAA/RGSS back then effectively worked as a bizarre combined MSAA + anisotropic filtering. The modern pipeline in modern graphics cards separates the first question {"Is the pixel visible?"} from the second one {"What's its actual colour?"}, so anything from the last 10 years has separate controls for MSAA and anisotropic filtering; a sketch of the anisotropy control follows below.)
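      (To make the "separate controls" point concrete, a minimal sketch assuming the widely supported EXT_texture_filter_anisotropic extension: anisotropy is a per-texture parameter, set completely independently of the framebuffer's MSAA sample count.)

      Code:
      #include <GL/gl.h>
      #include <GL/glext.h>

      /* Request the maximum supported anisotropy for one texture. */
      void enable_max_anisotropy(GLuint texture)
      {
          GLfloat max_aniso = 1.0f;
          glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);

          glBindTexture(GL_TEXTURE_2D, texture);
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                          max_aniso);
      }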
