Radeon Gallium3D MSAA Performance (R300g)
This week, support for MSAA was added to the R300g driver, so the Radeon X1000 series graphics cards and earlier can finally take advantage of anti-aliasing with this open-source Gallium3D driver. In this article are some benchmarks of the MSAA performance with a Radeon X1800XT, but even with this GPU, which sits at the higher end of what R300g covers, the anti-aliasing performance isn't really usable.
Michael, something seems funny with the numbers, which are uncharacteristically low even with MSAA off. At first I thought it might be the difference between the X1800 and X1900 (I see more X1900s, which have ~3x the pixel shader power), but when I looked at older articles even the X1800 seemed to be much faster:
Previous article from Feb 2011: http://www.phoronix.com/scan.php?pag..._compare&num=1
All numbers are with MSAA off:
OpenArena v0.8.5, 1920x1080 - this article 50.52, previous 118.00
World of Padman v1.2, 1920x1080 - this article 17.83, previous 118.73
Urban Terror v4.1, 1920x1080 - this article 21.20, previous 107.17
VDrift 2010-06-30, 1920x1080 - this article 6.76, previous 27.67
There are all kinds of possible explanations: a regression in one of the components, new GL extensions causing apps to draw fancier effects more slowly, a compositor difference, or some other hardware/software difference I missed. But the performance drop is relatively consistent across all the apps, which seems odd.
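One quick sanity check before comparing anything would be confirming both runs were actually on r300g hardware acceleration and noting the exact Mesa versions involved (just a sketch; the renderer string varies by chip):

    # Rule out a software fallback and record the Mesa version being compared
    glxinfo | grep -E "OpenGL (vendor|renderer|version)"
    # An X1800 on r300g should report something like "Gallium 0.4 on ATI R520"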
I hope Mesa wasn't compiled with --enable-debug, and that the Ubuntu PPA with fresh Mesa wasn't used either, because that PPA uses the --enable-debug configure flag (and maybe some other PPAs do as well, maybe even Ubuntu itself!!). The behavior of --enable-debug has changed in Mesa: the flag now disables all gcc optimizations, which makes pretty much everything bloody slow.
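For reference, the difference is just in how the tree is configured (a sketch; autogen.sh and the prefix are examples, the flag is the same with a plain ./configure):

    # With the new behavior this builds Mesa at -O0, i.e. no gcc optimizations:
    ./autogen.sh --prefix=/usr --enable-debug
    # A normal optimized build, which is what any benchmark run should use:
    ./autogen.sh --prefix=/usr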
As a question, why such a huge screen resolution for a card that is now approaching seven years old? I doubt that even today most people have screens that large attached to their standard PCs, and back then it was even more doubtful. I know you want to test the hardware to its fullest possible extent, but seeing that did kind of bother me.
I wondered about that as well, but I guess (a) a lot of people buy new displays just to get the screen space, and the high resolution comes for free, and (b) the cards actually ran pretty fast at 1920x1080 last year, with many apps over 100 FPS. Agreed that a lower resolution might be needed with MSAA cranked up.
Especially since the MSAA tests use tons of memory bandwidth compared to non-MSAA tests. The high resolution just amplifies that effect, meaning MSAA might be a lot less trouble at 1280x720, a resolution that's probably more common at least for those older GPUs.
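Some rough back-of-the-envelope numbers (counting only a 32-bit color buffer and ignoring depth/stencil and any compression):

    1920x1080 @ 4x MSAA: 1920 * 1080 * 4 bytes * 4 samples ≈ 31.6 MiB of color samples
    1280x720  @ 4x MSAA: 1280 * 720  * 4 bytes * 4 samples ≈ 14.1 MiB
    => roughly 2.25x less data touched per frame at the lower resolution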
Originally Posted by Hamish Wilson
Note that in my PPA I added a patch to properly build with optimizations (disabling the recently changed behavior, which is still being discussed on mesa-dev). I wasn't able to notice any measurable performance difference when enabling --enable-debug (tested some months ago), so I prefer to leave it enabled, since it can be useful for debugging Mesa as well as game problems. Also, I prefer a game asserting and crashing over possibly locking up the system later on.
Originally Posted by marek
The tests I did with Mesa from my PPA are here and are completely different from what was obtained in this article:
- no MSAA: 74.6 fps
- MSAA 2x: 61.1 fps
- MSAA 4x: 41.6 fps
- MSAA 6x: 29.7 fps
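For completeness, forcing MSAA from the driver side is just an environment variable, if I recall the Gallium bits correctly (a sketch; the sample count and game binary are examples):

    # GALLIUM_MSAA forces a multisampled framebuffer in Gallium's Mesa state tracker
    GALLIUM_MSAA=4 openarena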
I suspect the slowness of the Phoronix tests here is due to this.
To me, there is a big difference in performance with --enable-debug, so big that I can't even use the flag for development. I supply my own gcc flags through the CFLAGS environment variable if I want some level of debugging with all gcc optimizations enabled.
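A sketch of that approach (assuming a git checkout built through autogen.sh; the exact flags are just an example):

    # Optimized build that still carries debug symbols, skipping --enable-debug,
    # which would now force -O0:
    CFLAGS="-g -O2" CXXFLAGS="-g -O2" ./autogen.sh
    make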