Actually, it tends to go in spurts and bursts, especially with my coding style. :3
Originally Posted by drag
Meandering a little way off-topic...
I think I've stated it elsewhere, but on the mplayerhq.hu homepage they give you instructions on how to compile mplayer-mt. It lets me play 1080p on an Athlon X2 4600+ CPU with Xv. While it's not GPU acceleration, it's a heck of a lot better than nothing. My MythTV system with an HD3200 is awesome with the open-source drivers. The only thing that could make it better would be if I could start doing some 3D gaming on it as well.
Originally Posted by drag
I think the "ATI/AMD" remark at the start of the article was directed at the post-buyout company - not the pure "ATi" drivers.
I used to have a 9800 Pro, was VERY disappointed that the Linux drivers sucked so badly (and the installer back then was hell).
With Nvidia I've had fine binary drivers - although I wish they'd grow up and help out the Nouveau project with hardware docs at least (ermm, what part of the hardware command structure is DRM'd then, Nvidia?).
But, anyhoo, VERY nice article.
Correct performance stats (nice for a change), wasn't condemning of the performance, and explained the various aspects of KMS/DRI/Gallium/etc. very well.
Phoronix, give this guy more stuff to write about!!
Can anyone point me to some articles/discussions for background on TTM/GEM/KMS? This whole development direction seems to me to be contrary to good design principles - there are more moving parts, and putting more into the kernel means more context switches required to get any useful work done. Back when I worked on X (I wrote the X11R1 port for Apollo DOMAIN/OS) we would have had our heads chewed off for trying to push any of this work into the kernel...
Most of the real discussion was a few years ago, but the key point is that there were some real serious problems with the current architecture (possibly worse than the ones you were dealing with) which had to be addressed.
The main problems were :
- multiple graphics drivers, some in user space and some in the kernel, over-writing each other's settings during common use cases
- inability to share memory between 2D and 3D drivers, which made any kind of desktop composition inefficient and slow
Both the 2D and 3D drivers need context switches in order to access the hardware anyway, since the direct rendering architecture uses the DRM to arbitrate between the DDX and Mesa drivers, and to manage the shared DMA buffers (aka the ring buffer) used to feed commands and data into the graphics processors. I don't think these changes really introduce more context switches so much as change the dividing line between user and kernel responsibilities.
I'm a bit rusty on the history (my hands-on X experience was before X11), but my understanding is that user modesetting is a relatively new addition to X, and that the KMS initiative is arguably going back to the way modesetting was handled in earlier versions of X which were presumably built on existing kernel drivers. I haven't had much luck finding online references to support this, but I have been told this by a number of people who have worked on X for a very long time.
If you're saying that the DRI architecture is fundamentally flawed and that all graphics should go through a single userspace stack (presumably in the X server) then that's a different discussion of course.
Jesse Barnes wrote a good summary of the rationale for moving modesetting into the kernel, and there is a good discussion (plus the odd rant) in the subsequent comments: http://kerneltrap.org/node/8242
Thomas's original TTM proposal is a pretty good summary of the goals related to memory management: http://www.tungstengraphics.com/mm.pdf
Last edited by bridgman; 05-13-2009 at 09:02 PM.
Well, I too used a 9800 np and it was, to say the least, unstable. I would get driver crashes just by hitting Ctrl+Alt+F1 or any other F key. There were also quite a few times when video applications would give me a black screen. On top of all that, the community usually had to patch the drivers themselves because a lot of the time they would not compile. They also had issues with VIA motherboards, so the drivers wouldn't even load without some secret patch (seriously, I had to look for 3-5 hours before I could find a patch for my KT333 motherboard).
Originally Posted by Melcar
I would have to disagree with the author.
I have had a lot of problems with my ATI card on Linux.
In fact it has caused me many system re-installs.
I have the 4870 X2 and no 3D support on Jaunty Jackalope. I tried installing Catalyst 9.4, and when I reboot, my machine locks up and the colors are all messed up. I wish I had bought an Nvidia card. I used to use Windows, and the Vista drivers were not much better.
Thanks, those references were pretty interesting.
Originally Posted by bridgman
Yeah, I guess a lot of things were simpler on the Apollo workstations; they had no hardware text mode, and really only one supported graphics mode per given machine configuration. The one wrinkle was that they already had their own native graphics/windowing system that was not related to X. There were two porting efforts going on: one to simply layer the X APIs on top of the native APIs, and one to drive the hardware directly. I think ultimately we had to accept the overhead and just layered on top of the Apollo APIs, to allow native apps to continue running alongside X apps. Otherwise we would have had the same issues - multiple drivers talking to the same graphics hardware...
As for DRI ... we obviously wouldn't need to fret over "redirected direct-rendering" if everything was going through the X server...
A few people have reported problems with the X2 boards and Jaunty, although they didn't show up in our "early look" testing on server 1.6. Did you see the same problems with Intrepid? If not, it might be worth staying on 8.10 until we finish QA testing and bug fixing on Jaunty, and announce support in the release notes.
Last edited by bridgman; 05-14-2009 at 12:03 AM.