Morphological Anti-Aliasing With Mesa 8.0
Phoronix: Morphological Anti-Aliasing With Mesa 8.0
One of the less talked about features of Mesa 8.0 is its ability to handle MLAA, which is short for Morphological Anti-Aliasing. But how does MLAA affect OpenGL performance on the open-source graphics drivers, and is the image-quality boost from this anti-aliasing technique worth it? In this article are some benchmarks of MLAA under Mesa 8.0.
It's not supported yet, is it?
Originally Posted by RealNC
For me, the big problem with MLAA, aside from the performance drop, is the anti-aliasing of 2D overlays. All text basically ends up looking like Comic Sans.
When it comes to post-processing AA, FXAA seems much better than MLAA anyway. But the "real deal" is MSAA. This really needs to be supported. Though with the huge performance issues of the whole open-source graphics stack at the moment, it would probably kill even whatever little performance there is. Going from 80FPS to 70FPS on the binary blobs is not an issue, but going from 30FPS to 20FPS on the open drivers is a huge issue.
Note that FXAA, even though it was introduced by NVIDIA, is a generic AA method, just like MLAA, and works perfectly fine on AMD hardware too.
Last edited by RealNC; 02-14-2012 at 07:37 AM.
Looking at the screenshots, I'd prefer the mlaa_color option, and given that it has a less severe performance drop, that makes me even more likely to use it. Now we just need to optimize the crap out of both it and the rest of the stack.
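For anyone wanting to compare the two variants themselves: if I remember the Gallium post-processing hooks in Mesa 8.0 correctly, both filters are toggled per-application through environment variables, with the value selecting the quality level. The exact variable names and the target binary here (`glxgears` is just a stand-in) are assumptions, so treat this as a sketch rather than gospel:

```shell
# Depth-based MLAA at quality level 8 (levels run from 2 to 32;
# higher means more edge-search work per pixel). Variable names
# assumed from the Gallium "pp" post-processing module.
pp_jimenezmlaa=8 glxgears

# Color-based variant -- the one that also filters 2D overlays/text,
# hence the Comic Sans complaint above.
pp_jimenezmlaa_color=8 glxgears
```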
Thanks Michael for running the tests. Could you post the full-resolution lossless pics of Nexuiz, for easier comparison?
It's also surprising how little performance impact there was going from quality level 2 to 16, even though that's up to 8x more work per pixel.
That's the drawback of the color version. The depth-based one does not blur text.
In my opinion, FXAA blurred too much, while (Jimenez's) MLAA kept texture details much better. But your comparison is flawed: HardOCP compared FXAA to AMD's driver-level MLAA, which is a lot slower and worse in quality than Jimenez's algorithm.
I doubt there's much to optimize in the implementation itself; I believe the bottlenecks are in the Gallium context switches. Possibly some of the buffer switching could also be done faster.
The performance results are very interesting. Just turning it on really hurts performance, but then turning up the quality has almost no further impact. Obviously something other than the shader computations is limiting the speed.
Didn't Jimenez throw away this algorithm and create some new AA technique recently, or update it to a new version? Is there any plan to bring that in, or will Mesa stick with this code for a while?