Thanks for being so patient/thick-skinned, Matthew.
Regarding powerplay, I basically understand what you're saying about the power management, but I'm sure I'm among "most users" in that my understanding is minimal. This will probably be an illustration of that, but here's how I've been using tvout since I started using Linux: I only ever use one display at a time. There's no output on the TV when I'm not using it, and when I am using it, my monitor shuts off until I finish with the TV and its empty Xserver is killed. The TV goes on VT8, my desktop session stays on the usual VT7, and the two are never active at the same time. So I guess what I'm saying is: why should the mere presence of a connected svideo cable, with no TV on and no active X displays being transmitted through it, preclude the use of the power management features? (The video card can't tell whether the TV is on or off anyway, can it?) I have a really low opinion of Nvidia, but they can manage this just fine (though they can't seem to manage putting my monitor into standby mode at all). *shrugs* I'm going to want an 'after-market' cooling solution eventually anyway, I suppose, but so much for the power savings of a 45W CPU. *shrugs again*
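To be concrete, that workflow amounts to something like the following (the exact commands are approximate, and the xterm is just a placeholder for whatever client actually gets run on the TV):

```
# Normal desktop session already running on VT7 as display :0.
# Start a second Xserver on VT8 as display :1 for the TV:
X :1 vt8 &

# Point a client at it (placeholder; substitute your media player etc.):
DISPLAY=:1 xterm &

# Switch between the two with Ctrl+Alt+F7 / Ctrl+Alt+F8.
# When finished with the TV, kill the :1 server and its output goes away.
```

Only one VT is ever active at a time, which is why it seems odd that the idle, unwatched output should cost anything.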
"Only since XOrg picked up RANDR1.2 does this capability exist."

Is that in reference to multiple monitors/screens with different resolutions? Or multiple monitors with separate Xservers running on them? If it's either, I'm afraid that just isn't true (unless you meant with ATI hardware/fglrx specifically?). I was doing both of those things long before RANDR 1.2, just using X.org's built-in functionality (which Nvidia, as lacklustre as their driver is, supports just fine). I'm not sure that is what you meant, however.
I don't have any problem with fglrx having its own set of config tools/files, and I certainly agree that being able to manage many/all X/fglrx settings on the fly, which would otherwise require restarting X, is a good thing (a great thing). I just think it's folly to abandon basically all support for xorg.conf, and also to simply say that basic features/functionality of X.org (e.g. multiple independent displays/Xservers) aren't supported by fglrx, a driver for X.org. At the least, xorg.conf should be parsed at startup and whatever configuration is present (if any) should be used; fglrx/cccle/aticonfig can and should take the xorg.conf and use it as the basis for their runtime config changes. And if X.org supports something, so should fglrx. I know X.org can now do pretty much entirely without any content in xorg.conf and function just fine, but if you do decide to use it, X.org still reads and applies the configuration you set up. Can one even run "X :1" to launch a new Xserver on VT8, using the same single screen, with fglrx? Can you do this? Unless the answer to either is yes, those are rhetorical questions that hopefully illustrate my point about supporting X features.
As for documentation, take a look at Nvidia's for an example of what ATI is sorely lacking. I realize a lot of their README details xorg.conf options, and that's not what you guys/gals want people to use, but if you had a document that describes for us simple-minded users what can be done and how, even with your anti-X.org methods, I wouldn't be complaining about this at all; I would have just studied the document. "aticonfig --help" may be understandable to you (and it is, somewhat, to me), but it's really lacking overall as far as telling the user which options are needed to accomplish which use-case scenario. And if you look here, you'll notice how they have a README for every driver release. ATI needs to do something similar, regardless of its intention to make it all "just work" so that the user never needs to edit a config file. I can set up multiple separate and/or independent displays in xorg.conf pretty easily, but I couldn't figure out how to do the same with aticonfig or CCCLE. The difference: available, almost-entirely-understandable-by-mere-mortals documentation.
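For what it's worth, the kind of xorg.conf setup I mean looks roughly like this; the Identifiers are made up for illustration, and a real config would add Device/Monitor sections to match:

```
# Sketch only: two independent screens in one layout.
# Each Screen section would reference its own Device and Monitor
# (e.g. the monitor on one, the TV on the other).

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
    Monitor    "Monitor0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Card0-TV"
    Monitor    "TV0"
EndSection

Section "ServerLayout"
    Identifier "Independent"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
EndSection
```

Nothing exotic; X.org has understood this layout for years, which is why it's frustrating that I can't find the aticonfig/CCCLE equivalent written down anywhere.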
Thanks again for your patience and thick skin, Matthew (I haven't been trying to get an ATI card working today, so you probably don't need either to read this message), not to mention just being here and working to help us try to understand ATI's peculiar (imnsho) choices.