ATI R300 Mesa, Gallium3D Compared To Catalyst
Phoronix: ATI R300 Mesa, Gallium3D Compared To Catalyst
Last quarter we compared Catalyst and Mesa driver performance using an ATI Radeon HD 4830 graphics card, compared the Gallium3D and classic Mesa drivers on ATI Radeon X1000 series hardware, and ultimately found that even with ATI R500-class graphics cards the open-source driver is still playing catch-up to AMD's proprietary Catalyst Linux driver. In this article we run similar tests to show the performance disparity with ATI's much older R300-class hardware. Even on Radeon hardware that has had open-source support for far longer, the open-source drivers are not nearly as mature as an outdated Catalyst driver in the same configuration.
Besides not doing any better than the traditional Mesa stack, as I said earlier, Gallium3D is still really unstable, causing lots of freezes (almost daily) with my Mobility Radeon X700 (RV410). Since I rolled back to Mesa 7.7, no freezes at all for the last 15 days (uptime). To me, it seems quite obvious which one to use right now...
Weird how only OA regressed from playable to unplayable. Is this the same 50% drop we also see in glxgears, from the extra copy DRI2 introduces?
My recollection was that one of the attractions of the r300g driver was that it could run a number of apps that did not work on the classic r300 Mesa driver. I understand that's hard to show in benchmarks (unless r300c flatlines), but it's worth mentioning.
It's been said before, but man, we really need some non-FPS and non-video-game 3D benchmarks.
Not that the results from a few different game engines aren't interesting, but the story these data tell is pretty narrow.
So, to make it short: if you want 3D, you want FGLRX; and if you want FGLRX, you might as well want NVIDIA.
Conversely, if you don't need 3D, you want Mesa (and eventually, one day, Gallium). And if you want Mesa, you surely don't want NVIDIA's blob.
Or did I say something wrong?
I was thinking exactly the same thing.
Originally Posted by bulletxt
It's sad, though, that 3D performance on the open-source side is so abysmal... I understand it with NVIDIA, a company that dislikes open source... but AMD has been thoroughly involved in open-source drivers for years, yet results are still far off (in the 3D field, I mean).
I know that FGLRX contains tons of IP they cannot expose, and I know it shares lots of code with the Windows drivers... but come on, a question arises: "is the open-source ATI driver flawed from the foundation up?"
I'm not trying to be polemical, just asking for an opinion...
I know it's code under development, and I know the effort is going into features and stable 2D first. But I'm asking the coders involved: is there room for performance improvement once 2D stability is reached and Gallium3D has matured? I mean, right now the blobs are 3 to 5 times faster! Can we expect the open-source drivers to be, let's say, 60% as fast as fglrx in the next 18 months, or is that wishful thinking?
There is, namely:
Originally Posted by TeoLinuX
- color tiling (implemented, but it seems to be disabled in this article, why???)
- compiler optimizations
- DRI2 Swap
- reducing CPU overhead everywhere
Michael should really be running xf86-video-ati and libdrm from git plus the latest kernel -rcX, otherwise performance will continue to suck in his articles.
There are performance gains, and there will be more as time goes by, but most of them are disabled on old kernels and DDX drivers.
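For reference, color tiling on this hardware is toggled through the DDX; a minimal xorg.conf Device section might look like the sketch below (the "ColorTiling" option name comes from the radeon man page — whether it actually takes effect depends on having a new enough kernel and libdrm, otherwise it is silently ignored):

```
Section "Device"
    Identifier "Radeon"
    Driver     "radeon"
    # Enable tiled color buffers; requires recent kernel/libdrm,
    # otherwise the driver falls back to linear buffers.
    Option     "ColorTiling" "on"
EndSection
```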
In other words, another crappy article.
I don't understand all these complaints. It is not like "Hey, the driver for card X sucks compared to driver Y"... It is more like "The architecture the drivers fit into isn't even done yet."
The beauty of Gallium is that a shitload of stuff that previously had to be done per card is now card- and driver-agnostic.
I'd say use whatever works for you now, because the foundation the drivers build on isn't finished yet, and that is the primary concern today. Keep your cool, and massive respect to AMD for their part in supporting this new architecture, and of course to everyone else too.
Does using the latest kernel, let's say 18.104.22.168, and the xorg-edgers Ubuntu PPA really leave you totally out of sync with the latest X.Org development features / performance?
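A quick way to answer that for yourself is to check which pieces of the stack you are actually running. A minimal sketch, assuming a Debian/Ubuntu system (glxinfo ships in the mesa-utils package; the dpkg line is Debian-specific):

```shell
# Show the pieces of the graphics stack under discussion.
uname -r                                   # running kernel version
# OpenGL driver actually in use; the renderer string reveals whether
# classic Mesa or Gallium3D is loaded (needs an X session to query).
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo 2>/dev/null | grep -E 'OpenGL (vendor|renderer|version)'
fi
# Installed DDX and libdrm package versions (Debian/Ubuntu specific).
if command -v dpkg >/dev/null 2>&1; then
    dpkg -l xserver-xorg-video-ati libdrm2 2>/dev/null | grep '^ii' || true
fi
```

Comparing those versions against the upstream git logs tells you how far behind a PPA actually is.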